
feat; add anirena support #1366

Open

HichamLL04 wants to merge 33 commits into Audionut:master from HichamLL04:feat_Anirena

Conversation


@HichamLL04 HichamLL04 commented May 12, 2026

Ref #1338

Summary by CodeRabbit

  • New Features

    • Added AniRena tracker support with full upload flow, anime linking, duplicate detection, language normalization, and rich markdown descriptions.
  • Improvements

    • Uploads now handle authentication token rotation and automatic retry on auth failures.
    • Torrent creation accepts single or multiple announce URLs, sets announce-list, and defaults torrents to private.
  • Chores

    • Added AniRena configuration options and registered the tracker for API-based uploads.



coderabbitai Bot commented May 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds ANIRENA tracker support: generalized announce-list handling in COMMON.create_torrent_for_upload, a new ANIRENA tracker class with auth/upload/search/helpers, example-config entries, and registration in trackersetup for API-based uploads.

Changes

ANIRENA Tracker Implementation

  • Announce URL flexibility in COMMON (src/trackers/COMMON.py): create_torrent_for_upload accepts announce_url as a `str` or `list[str]`, setting both the announce and announce-list fields accordingly.
  • ANIRENA tracker initialization, auth, and upload (src/trackers/ANIRENA.py): Adds the ANIRENA class, which loads the API key from config, implements token handling, and provides upload(), which creates the torrent with AniRena announce URLs, encodes the file, assembles the payload (category/sub-category/languages/description/anime ID), POSTs to /api/v1/torrents, rotates tokens via X-New-Token, updates meta['tracker_status'], and retries on HTTP 401.
  • ANIRENA metadata and content helpers (src/trackers/ANIRENA.py): Implements _canonicalize_languages(), get_anime_id() (search/link), get_category()/get_sub_category() taxonomy logic, get_languages() BCP 47 normalization with hardsub prompting, get_description() markdown assembly, and search_existing() duplicate lookup via /api/v1/torrents/search.
  • Configuration and tracker registration (data/example-config.py, src/trackersetup.py): Adds an ANIRENA block (link_dir_name, api_key) to the example config, imports ANIRENA into tracker_class_map, and adds ANIRENA to api_trackers.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • Audionut

Poem

🐰 A tracker hops into the glen,
Tokens fetched and torrents penned,
Languages tidy, descriptions bright,
AniRena readies every night,
The rabbit cheers — upload takes flight.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Title check — ✅ Passed: the title 'feat; add anirena support' clearly identifies the main change, adding support for the ANIRENA tracker, and directly relates to the changeset.






@github-actions

Thanks for taking the time to contribute to this project. Upload Assistant is currently undergoing a complete rewrite, and no new development is being conducted on this Python source at this time.

If you have come this far, please feel free to leave open any pull requests that add new sites to the source, as these can serve as a baseline for later conversion.

If your pull request addresses a critical bug, it will be fixed in this code base and a new release published as needed.

If your pull request addresses only a minor bug, it is not likely to be addressed in this code base.

Details for the new code base will follow at a later date.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/trackers/ANIRENA.py`:
- Around line 368-383: The duplicate-search currently swallows non-200 responses
and returns an empty dupes list; change the logic in the ANIRENA duplicate
search method (the block building the dupes list and the surrounding try/except
that references dupes, self.base_url and console) so that if
response.status_code != 200 you log an error including response.status_code and
response.text via console.print (or process logger) and perform a single retry
before failing; if the retry also fails, raise a RuntimeError (or return a
distinct failure value such as None) instead of returning an empty list so
callers don’t treat API failures as “no duplicates.” Ensure the raised error or
None is documented/handled upstream.
- Around line 132-137: The current recursive retry in ANIRENA.upload (when
response.status_code == 401) can recurse indefinitely; change it to a bounded,
non-recursive retry: add a retry counter (either a new optional parameter like
retry_count with default 0 or an instance field such as self._auth_retry) and a
MAX_RETRIES constant, reset self.token to None on 401, then perform an iterative
retry loop or re-call the request only while retry_count < MAX_RETRIES
(incrementing the counter each attempt) and return a failure or raise after max
retries instead of calling upload() recursively; update references in upload and
any callers to use the new parameter/field and ensure the console message
includes attempt number for clarity.
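The dupe-search hardening described above can be sketched as follows. This is a hypothetical standalone sketch: `search_dupes` and the injected `fetch` callable are illustrative stand-ins for the real method, which calls AniRena's /api/v1/torrents/search endpoint through the tracker's HTTP client.

```python
def search_dupes(fetch, max_attempts: int = 2):
    """Retry once on a non-200 response, then fail loudly instead of
    returning an empty list (which callers would read as 'no duplicates').

    `fetch` is a hypothetical callable returning (status_code, payload).
    """
    last_status = None
    for attempt in range(1, max_attempts + 1):
        status, payload = fetch()
        if status == 200:
            # Build the dupes list only from a successful payload.
            return [t['name'] for t in payload.get('torrents', [])]
        last_status = status
        print(f"[ANIRENA] search failed (HTTP {status}), attempt {attempt}/{max_attempts}")
    # Distinct failure signal so API errors are not mistaken for 'no dupes'.
    raise RuntimeError(f"AniRena duplicate search failed with HTTP {last_status}")
```

The key design point is that an API failure surfaces as an exception (or, alternatively, a `None` return) rather than an empty list, so upstream callers can distinguish "no duplicates found" from "search never ran".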

In `@src/trackers/COMMON.py`:
- Around line 155-169: The code in COMMON.py assigns announce URLs from
announce_url and raw_announce lists but accesses [0] without checking for empty
lists; update the branches that handle list values (the announce_url list branch
and the raw_announce list branch) to first check if the list is non-empty and
raise the same ValueError("No announce URL found for tracker {tracker}...") (or
fall back to the non-list logic) when empty, and only then set
new_torrent.metainfo['announce'] and new_torrent.metainfo['announce-list'];
ensure the same empty-list guard is applied when constructing [[url] for url in
...] to avoid IndexError and keep behavior consistent with the non-list/empty
announce handling.
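The empty-list guard (and the whitespace-stripping for the string branch) might look roughly like this. This is a sketch under assumptions: `assign_announce` is a hypothetical helper, `metainfo` is simplified to a plain dict, and the exact error wording is illustrative rather than copied from COMMON.py.

```python
def assign_announce(metainfo: dict, announce_url, raw_announce, tracker: str) -> None:
    # Prefer announce_url; fall back to raw_announce, mirroring the branches
    # the review describes. Each candidate may be a string or list of strings.
    for candidate in (announce_url, raw_announce):
        if isinstance(candidate, str) and candidate.strip():
            urls = [candidate.strip()]
        elif isinstance(candidate, list):
            # Guard against empty lists and whitespace-only entries so we
            # never index [0] into nothing.
            urls = [u.strip() for u in candidate if isinstance(u, str) and u.strip()]
        else:
            urls = []
        if urls:
            metainfo['announce'] = urls[0]
            metainfo['announce-list'] = [[u] for u in urls]
            return
    raise ValueError(f"No announce URL found for tracker {tracker}")
```

Both the string and list branches now converge on the same validated `urls` list, which keeps `announce` and `announce-list` consistent with each other.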

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a2c7f737-53f7-4c97-a3ce-88f1c2be60e2

📥 Commits

Reviewing files that changed from the base of the PR and between f553c35 and 71123f8.

📒 Files selected for processing (4)
  • data/example-config.py
  • src/trackers/ANIRENA.py
  • src/trackers/COMMON.py
  • src/trackersetup.py


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/trackers/ANIRENA.py`:
- Around line 220-237: The code assumes meta['audio_languages'] and
meta['subtitle_languages'] are lists of full names and that "japanese" is
spelled out; instead, ensure you canonicalize inputs in the AniRena metadata
builder by: treating scalar strings as single-element lists (handle None, "",
and non-list types), normalizing each entry to lowercase, mapping common
aliases/codes (e.g. "ja", "jpn", "jp") to a single canonical token like
"japanese", trimming/ignoring empty values, and deduplicating the resulting
lists before the existing logic that builds audio_langs/sub_langs and checks for
raw vs sub-audio; apply the same normalization to hardsub_languages and mirror
these fixes in the other similar block referenced (lines 243-283) so all
language checks use the canonicalized lists.
- Around line 225-236: The code in the ANIRENA language-classification branch
uses meta.get('hardsub_languages', []) and returns 'raw' when no soft subs and
hardsub_languages is empty, but in unattended runs hardsub_languages is simply
unset so this mislabels hardsubbed releases as raw; update the branch in the
function using meta (the block referencing audio_langs, sub_langs and
hardsub_langs) to treat a missing hardsub_languages differently from an explicit
empty list—i.e., do not return 'raw' unless hardsub_languages is present and
explicitly empty (or another explicit flag indicates no hardsubs); instead skip
classification or return None/unknown when hardsub_languages is absent; apply
the same change to the duplicated logic around the 260-271 block that checks
hardsub_languages.
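The normalization asked for above can be sketched like this. The alias table is a small illustrative subset (not AniRena's full mapping), and the function name mirrors `_canonicalize_languages` only loosely.

```python
def canonicalize_languages(value) -> list:
    """Normalize language metadata per the review: accept None, a scalar
    string, or a list; lowercase entries; map common codes/aliases to one
    canonical token; drop empties; and deduplicate preserving order."""
    ALIASES = {'ja': 'japanese', 'jpn': 'japanese', 'jp': 'japanese',
               'en': 'english', 'eng': 'english'}  # illustrative subset
    if value is None:
        items = []
    elif isinstance(value, str):
        items = [value]  # treat a scalar string as a single-element list
    elif isinstance(value, (list, tuple)):
        items = list(value)
    else:
        items = [str(value)]
    seen, out = set(), []
    for item in items:
        token = str(item).strip().lower()
        if not token:
            continue  # ignore empty/whitespace entries
        token = ALIASES.get(token, token)
        if token not in seen:
            seen.add(token)
            out.append(token)
    return out
```

With this in place, checks like `'japanese' in audio_langs` behave the same whether the upstream metadata carried `"Japanese"`, `"ja"`, or `["jpn"]`.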

In `@src/trackers/COMMON.py`:
- Around line 154-162: The direct-string branch currently assigns announce_url
verbatim and can accept whitespace-only values; update the branch handling
announce_url (the code that sets new_torrent.metainfo['announce']) to strip the
string, validate it is non-empty after strip (raise ValueError with the same
style message referencing tracker if empty), then set
new_torrent.metainfo['announce'] to the stripped value and
new_torrent.metainfo['announce-list'] to [[stripped_value]] to mirror the list
branch behavior; reference the announce_url variable and new_torrent.metainfo
assignments to locate where to apply this change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f0f9a250-c147-4026-b936-eccc2a3a6eeb

📥 Commits

Reviewing files that changed from the base of the PR and between 71123f8 and aecc424.

📒 Files selected for processing (2)
  • src/trackers/ANIRENA.py
  • src/trackers/COMMON.py


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/trackers/ANIRENA.py (1)

255-263: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Hardsub handling still only works reliably on the interactive path.

Lines 262-263 treat a missing hardsub_languages key as “no hardsubs”, and Lines 267-294 never merge pre-populated meta['hardsub_languages'] into languages. That means unattended or pre-filled hardsubbed uploads can still be marked raw or sent without the subtitle language in the payload.

Suggested fix
     def get_sub_category(self, meta: dict[str, Any]) -> str:
         if meta.get('anime'):
             audio_langs = self._canonicalize_languages(meta.get('audio_languages'))
             
             if 'japanese' in audio_langs and len(audio_langs) == 1:
                 # If it's only Japanese audio, check subs
                 sub_langs = self._canonicalize_languages(meta.get('subtitle_languages'))
                 
                 # Check for hardsubs if no soft subs
                 hardsub_langs = self._canonicalize_languages(meta.get('hardsub_languages'))
+                hardsub_known = 'hardsub_languages' in meta
                 
-                if not sub_langs and not hardsub_langs:
+                if not sub_langs and hardsub_known and not hardsub_langs:
                     return 'raw'
             return 'sub-audio'
         return ''

     def get_languages(self, meta: dict[str, Any]) -> list[str]:
         langs = set()
         # Collect languages from audio and subtitles
         audio_langs = self._canonicalize_languages(meta.get('audio_languages'))
         sub_langs = self._canonicalize_languages(meta.get('subtitle_languages'))
+        hardsub_langs = self._canonicalize_languages(meta.get('hardsub_languages'))
             
-        all_langs = audio_langs + sub_langs
+        all_langs = audio_langs + sub_langs + hardsub_langs
         for lang_name in all_langs:
             try:
                 # Use langcodes to find the best BCP 47 match
                 lang = langcodes.find(lang_name)
                 if lang and lang.is_valid():
                     langs.add(lang.to_tag())
             except Exception:
                 pass
         
         # If no soft subtitles detected, ask about hardsubs
-        if not sub_langs and not meta.get('unattended'):
+        if not sub_langs and not hardsub_langs and not meta.get('unattended'):
             from rich.prompt import Confirm, Prompt
             if Confirm.ask(f"[{self.tracker}] [yellow]No soft subtitles detected.[/yellow] Does this release include Hardsubs?"):
                 hardsub_lang = Prompt.ask(f"[{self.tracker}] Please enter the Hardsub language code (e.g., 'en', 'es')")

Also applies to: 267-294

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/trackers/ANIRENA.py` around lines 255 - 263, Summary: hardsub_languages
is treated as missing and not merged into languages, causing pre-populated
hardsub info to be ignored and items to be misclassified as 'raw'. Fix: in the
ANIRENA code path around the check that uses
self._canonicalize_languages(meta.get('hardsub_languages')), ensure you treat a
missing key as empty list but also merge any returned hardsub_langs into the
overall languages list used for payloads (e.g., combine sub_langs and
hardsub_langs into languages) before deciding to return 'raw' or building the
upload payload; update the conditional that returns 'raw' to check for both
sub_langs and hardsub_langs being empty (not None) and ensure
meta['hardsub_languages'] is propagated into the languages/payload construction
so pre-filled hardsub data is respected outside the interactive path.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/trackers/ANIRENA.py`:
- Around line 189-190: The current check allows any TV/MOVIE upload into AniRena
lookups; update the conditional logic so AniRena series lookups only run for
anime uploads by requiring meta.get('anime') to be truthy before calling
get_anime_id and before performing the automatic-linking logic that handles
single-result or exact-title matches (the block around the search-result
handling at lines ~216-223). Concretely, modify the early return condition that
currently reads if meta.get('category') not in ('TV', 'MOVIE') and not
meta.get('anime'): return None to ensure that even TV/MOVIE entries are skipped
unless meta.get('anime') is true, and add the same meta.get('anime') guard
around the auto-linking/search-result branch so get_anime_id/search-result
auto-link only executes for confirmed anime uploads.
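One reading of the suggested guard, sketched as a hypothetical helper (the real code inlines this check in get_anime_id, and whether the category check is kept alongside the anime flag is an assumption here):

```python
def should_lookup_anime(meta: dict) -> bool:
    """Gate AniRena series lookups: only confirmed anime uploads qualify;
    a TV/MOVIE category alone is not enough to trigger a lookup."""
    if not meta.get('anime'):
        return False
    return meta.get('category') in ('TV', 'MOVIE')
```

The same predicate would guard both the initial get_anime_id call and the later auto-linking branch, so neither runs for non-anime TV/MOVIE uploads.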

---

Duplicate comments:
In `@src/trackers/ANIRENA.py`:
- Around line 255-263: Summary: hardsub_languages is treated as missing and not
merged into languages, causing pre-populated hardsub info to be ignored and
items to be misclassified as 'raw'. Fix: in the ANIRENA code path around the
check that uses self._canonicalize_languages(meta.get('hardsub_languages')),
ensure you treat a missing key as empty list but also merge any returned
hardsub_langs into the overall languages list used for payloads (e.g., combine
sub_langs and hardsub_langs into languages) before deciding to return 'raw' or
building the upload payload; update the conditional that returns 'raw' to check
for both sub_langs and hardsub_langs being empty (not None) and ensure
meta['hardsub_languages'] is propagated into the languages/payload construction
so pre-filled hardsub data is respected outside the interactive path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a201adad-138a-40d1-a121-cc4040863021

📥 Commits

Reviewing files that changed from the base of the PR and between aecc424 and 418c058.

📒 Files selected for processing (1)
  • src/trackers/ANIRENA.py


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src/trackers/ANIRENA.py (1)

289-312: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

User-entered hardsub and fallback language codes are not canonicalized to BCP 47.

Audio/subtitle languages flow through langcodes.find(...).to_tag() (lines 282-284), but the hardsub branch at line 294 (langs.add(hardsub_lang.strip())) and the fallback loop at lines 309-312 push raw user input straight into langs. If a user enters "english", "Spanish", or "jp", those strings end up in the languages payload sent to AniRena rather than valid BCP 47 tags. The earlier canonicalization fix covered the meta-derived branches but not these interactive paths.

Suggested fix
-                if hardsub_lang:
-                    langs.add(hardsub_lang.strip())
+                if hardsub_lang:
+                    try:
+                        lang = langcodes.find(hardsub_lang.strip())
+                        if lang and lang.is_valid():
+                            langs.add(lang.to_tag())
+                    except Exception:
+                        pass
                     # Store it in meta so get_sub_category can see it
                     if 'hardsub_languages' not in meta:
                         meta['hardsub_languages'] = []
                     meta['hardsub_languages'].append(hardsub_lang.strip())
@@
-            if lang_input:
-                for l in lang_input.split(','):
-                    l = l.strip()
-                    if l:
-                        langs.add(l)
+            if lang_input:
+                for entry in lang_input.split(','):
+                    entry = entry.strip()
+                    if not entry:
+                        continue
+                    try:
+                        lang = langcodes.find(entry)
+                        if lang and lang.is_valid():
+                            langs.add(lang.to_tag())
+                            continue
+                    except Exception:
+                        pass
+                    # Fall back to treating it as an already-valid tag
+                    langs.add(entry)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/trackers/ANIRENA.py` around lines 289 - 312, User-entered hardsub and
fallback language codes aren’t being canonicalized before adding to langs, so
convert the interactive inputs the same way you do elsewhere (using
langcodes.find(...).to_tag() or the existing canonicalization helper) before
adding to langs and before appending to meta['hardsub_languages']; update the
hardsub branch (hardsub_lang from Prompt.ask) and the fallback loop (lang_input
split loop) to strip the input, run it through langcodes.find(...).to_tag()
(handling lookup failures by skipping or logging) and then add the resulting BCP
47 tag to langs and meta as appropriate.
🧹 Nitpick comments (2)
src/trackers/ANIRENA.py (2)

132-141: ⚡ Quick win

Bounded 401 retry recreates torrent and re-prompts on every attempt.

The retry counter resolves the unbounded recursion, but each recursive call into upload() re-runs common.create_torrent_for_upload(...) (line 59), re-encodes the .torrent (line 75), and re-invokes get_languages() / get_anime_id() — both of which can prompt the user in interactive mode. A user who hits a transient 401 will be asked about hardsubs/series selection two or three times.

Consider refactoring so only the POST is retried (e.g., refresh token, then re-POST url with the existing data/headers) instead of recursing into upload(). This also avoids needlessly mutating the torrent file on retries.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/trackers/ANIRENA.py` around lines 132 - 141, The 401 retry branch in
upload() currently re-calls upload() recursively which re-runs
common.create_torrent_for_upload(...), re-encodes the .torrent and re-invokes
interactive helpers like get_languages() and get_anime_id(); change it to
refresh the token and retry only the POST request using the already-built
data/headers/body instead of recursing. Concretely: in upload() capture the
prepared payload (torrent bytes, data dict and headers) before the network call,
on a 401 set self.token = None and obtain a new token (reuse whatever
token-refresh logic you have), then re-run only the HTTP POST to url with the
preserved payload and headers (updating the Authorization header) up to the same
retry limit; do not call common.create_torrent_for_upload, get_languages,
get_anime_id or re-encode the torrent during retries.

151-182: Use langcodes.Language.get() with fallback to find() for robustness.

langcodes.find() indeed only accepts language names, not ISO codes—it will raise LookupError on inputs like "ja" or "jpn", causing the except Exception fallback to silently pass through the raw token. Currently, meta['audio_languages'] and meta['subtitle_languages'] are populated exclusively with language names from MediaInfo's %Language/String% output (e.g. "English", "Japanese"), and prep.py explicitly validates against ISO codes. However, to guard against future input paths or manual metadata overrides that might inject codes, replace the current find()-only logic with:

lang = langcodes.Language.get(l_strip) or langcodes.find(l_strip)

This accepts both codes and names, maintaining the current expected behavior while preventing silent misclassification if codes ever appear upstream.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/trackers/ANIRENA.py` around lines 151 - 182, In _canonicalize_languages,
replace the current langcodes.find-only lookup with a robust lookup that first
attempts langcodes.Language.get(l_strip) and falls back to
langcodes.find(l_strip); if the obtained lang object exists and is_valid(),
append lang.language_name().lower(), otherwise append the original
l_strip.lower(); keep the existing exception handling and the deduplication
logic so behavior is unchanged for name inputs but also accepts ISO codes
without raising LookupError.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Duplicate comments:
In `@src/trackers/ANIRENA.py`:
- Around line 289-312: User-entered hardsub and fallback language codes aren’t
being canonicalized before adding to langs, so convert the interactive inputs
the same way you do elsewhere (using langcodes.find(...).to_tag() or the
existing canonicalization helper) before adding to langs and before appending to
meta['hardsub_languages']; update the hardsub branch (hardsub_lang from
Prompt.ask) and the fallback loop (lang_input split loop) to strip the input,
run it through langcodes.find(...).to_tag() (handling lookup failures by
skipping or logging) and then add the resulting BCP 47 tag to langs and meta as
appropriate.

---

Nitpick comments:
In `@src/trackers/ANIRENA.py`:
- Around line 132-141: The 401 retry branch in upload() currently re-calls
upload() recursively which re-runs common.create_torrent_for_upload(...),
re-encodes the .torrent and re-invokes interactive helpers like get_languages()
and get_anime_id(); change it to refresh the token and retry only the POST
request using the already-built data/headers/body instead of recursing.
Concretely: in upload() capture the prepared payload (torrent bytes, data dict
and headers) before the network call, on a 401 set self.token = None and obtain
a new token (reuse whatever token-refresh logic you have), then re-run only the
HTTP POST to url with the preserved payload and headers (updating the
Authorization header) up to the same retry limit; do not call
common.create_torrent_for_upload, get_languages, get_anime_id or re-encode the
torrent during retries.
- Around line 151-182: In _canonicalize_languages, replace the current
langcodes.find-only lookup with a robust lookup that first attempts
langcodes.Language.get(l_strip) and falls back to langcodes.find(l_strip); if
the obtained lang object exists and is_valid(), append
lang.language_name().lower(), otherwise append the original l_strip.lower();
keep the existing exception handling and the deduplication logic so behavior is
unchanged for name inputs but also accepts ISO codes without raising
LookupError.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 519ba7cd-4d86-4c97-8547-b3afada06991

📥 Commits

Reviewing files that changed from the base of the PR and between 418c058 and 17f0f67.

📒 Files selected for processing (2)
  • src/trackers/ANIRENA.py
  • src/trackers/COMMON.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/trackers/COMMON.py


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/trackers/ANIRENA.py`:
- Around line 409-417: The code directly indexes meta['title'] when building the
search_query, which raises KeyError if title is missing; change this to use
meta.get('title') with a fallback to meta.get('name') (e.g. search_query =
meta.get('title') or meta.get('name') or ''), then apply the existing cleanup
(re.sub(...).strip()) only if the resulting search_query is non-empty and handle
empty query case appropriately before constructing the data dict used by the
duplicate check in ANIRENA.py (the search_query variable and the later
duplicate-check logic that swallows exceptions).
- Around line 217-235: In get_anime_id, harden parsing of the AniRena search
results by validating each result before accessing fields: when iterating
results (variable results / loop variable res), skip any entry where
res.get('id') is missing or not a str/int, and use res.get('title', '') and safe
string coercion for the unattended equality check against search_query to avoid
None; likewise guard accesses to res.get('season_year') and res.get('season')
when printing the interactive list (use sensible fallbacks like "" or
"unknown"). Also, when returning results[0]['id'] in the unattended
single-result path, use results[0].get('id') and if missing return None; and
improve the exception handling in get_anime_id to include the caught exception
message in the error log to aid diagnostics.
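The title-fallback logic suggested above, sketched with an illustrative cleanup regex (the real cleanup in search_existing may differ):

```python
import re

def build_search_query(meta: dict) -> str:
    # Fall back from 'title' to 'name' instead of indexing meta['title'],
    # which raises KeyError when title is missing.
    query = meta.get('title') or meta.get('name') or ''
    if not query:
        return ''  # caller should skip the duplicate check on an empty query
    # Illustrative cleanup: collapse dot/underscore separators and trim.
    return re.sub(r'[._]+', ' ', query).strip()
```

Returning an empty string for missing titles gives the caller a cheap, explicit signal to skip the duplicate check rather than sending a malformed query.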
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 63f41762-b5ca-4de5-b056-a9fb0961b25c

📥 Commits

Reviewing files that changed from the base of the PR and between 17f0f67 and 54c4186.

📒 Files selected for processing (1)
  • src/trackers/ANIRENA.py

@HichamLL04 changed the title from "Feat anirena" to "feat; add anirena support" on May 13, 2026
…eline

- Implement get_name() for orchestrator compatibility
- Fix tracker_status initialization to prevent KeyErrors
- Honor --skip-dupe-check (meta['dupe']) in search_existing
- Forward --keywords and --personalrelease flags to API payload
- Update --debug mode to skip HTTP requests for consistency
- Consolidate User-Agent and auth headers via _get_headers helper
- Improve error reporting with 'data error:' prefixes for better detection
…rove --sdc feedback

- Update _get_headers to use browser User-Agent for public endpoints (fixing anime search 403)
- Add explicit console logging when dupe search is skipped via --sdc flag
- (Reverted global changes to trackerstatus.py to maintain isolation)
