⚡ Optimize duplicate link check in HTML parsing #198
google-labs-jules[bot] wants to merge 1 commit into master from
Conversation
Replaces the O(N^2) duplicate link check in `HTML_to_LinkTable` with a hash-table-based approach (O(N)). This significantly improves performance when parsing pages with many links. The implementation uses a simple open-addressing hash set to track seen links during the recursive traversal. The behaviour regarding link name truncation (to MAX_FILENAME_LEN) and trailing slash handling is preserved to match the existing logic.
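The commit itself carries the real code; as a rough illustration of the technique described above, an open-addressing hash set for link names might look something like the sketch below. The names (`LinkHashSet`, `link_set_insert`, the FNV-1a hash, the fixed initial capacity) are assumptions for this example, not necessarily what the PR uses.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative open-addressing hash set for link names.
 * The struct and helper names in the actual PR may differ. */
typedef struct {
    char **slots;     /* NULL means empty slot */
    size_t capacity;  /* kept as a power of two so we can mask instead of mod */
    size_t count;
} LinkHashSet;

static size_t hash_str(const char *s)
{
    /* FNV-1a, chosen here only for brevity */
    uint64_t h = 14695981039346656037ULL;
    while (*s) {
        h ^= (unsigned char) *s++;
        h *= 1099511628211ULL;
    }
    return (size_t) h;
}

static void link_set_init(LinkHashSet *set, size_t capacity)
{
    set->capacity = capacity;
    set->count = 0;
    set->slots = calloc(capacity, sizeof(char *));
}

static int link_set_contains(const LinkHashSet *set, const char *name)
{
    size_t i = hash_str(name) & (set->capacity - 1);
    while (set->slots[i]) {
        if (!strcmp(set->slots[i], name)) {
            return 1;
        }
        i = (i + 1) & (set->capacity - 1); /* linear probing */
    }
    return 0;
}

static void link_set_insert(LinkHashSet *set, const char *name)
{
    /* A real implementation resizes (and rehashes) once the load factor
     * passes some threshold, e.g. count > capacity / 2; omitted here. */
    size_t i = hash_str(name) & (set->capacity - 1);
    while (set->slots[i]) {
        if (!strcmp(set->slots[i], name)) {
            return; /* already present */
        }
        i = (i + 1) & (set->capacity - 1);
    }
    set->slots[i] = strdup(name); /* strdup is POSIX */
    set->count++;
}

static void link_set_free(LinkHashSet *set)
{
    for (size_t i = 0; i < set->capacity; i++) {
        free(set->slots[i]);
    }
    free(set->slots);
}
```

With such a set, the duplicate check during traversal becomes a single `link_set_contains` lookup per link instead of a scan over all links collected so far, which is where the O(N^2) to O(N) improvement comes from.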
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.


- Added a `LinkHashSet` struct and helper functions (insert, contains, resize, free) in `src/link.c`.
- Renamed `HTML_to_LinkTable` to `HTML_to_LinkTable_recursive`, which takes the hash set as an argument.
- Added an `HTML_to_LinkTable` wrapper to initialize and populate the hash set with existing links before parsing (see the sketch after this summary).

PR created automatically by Jules for task 7906684312957111953 started by @fangfufu
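As a sketch of the wrapper/recursive split described in the summary, reusing the hypothetical `LinkHashSet` helpers above: the node and table types below are minimal stand-ins, not the real `src/link.c` definitions, and only the control flow mirrors what the PR describes.

```c
/* Hypothetical stand-ins for the real DOM node and LinkTable types. */
typedef struct HtmlNode {
    const char *href;            /* link target, or NULL for non-anchor nodes */
    struct HtmlNode **children;  /* child nodes */
    int nchildren;
} HtmlNode;

typedef struct {
    const char *names[1024];
    int num;
} LinkTable;

static void HTML_to_LinkTable_recursive(HtmlNode *node, LinkTable *tbl,
                                        LinkHashSet *seen)
{
    if (node == NULL) {
        return;
    }
    /* In the real code the name would first be truncated to
     * MAX_FILENAME_LEN and trailing slashes handled, as noted above. */
    if (node->href && !link_set_contains(seen, node->href)) {
        link_set_insert(seen, node->href);
        if (tbl->num < 1024) {
            tbl->names[tbl->num++] = node->href;
        }
    }
    for (int i = 0; i < node->nchildren; i++) {
        HTML_to_LinkTable_recursive(node->children[i], tbl, seen);
    }
}

void HTML_to_LinkTable(HtmlNode *root, LinkTable *tbl)
{
    LinkHashSet seen;
    link_set_init(&seen, 64);
    /* Seed the set with links already in the table so existing entries
     * are not duplicated when the page is parsed again. */
    for (int i = 0; i < tbl->num; i++) {
        link_set_insert(&seen, tbl->names[i]);
    }
    HTML_to_LinkTable_recursive(root, tbl, &seen);
    link_set_free(&seen);
}
```

Seeding the set from the existing table before the recursive pass is what lets the wrapper preserve the old behaviour of skipping links that are already present, while keeping each per-link check O(1) on average.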