added rate limiting #221
base: berickson/202504-process-metrics

Conversation
This is merging into a branch (dashboard updates) for now, since it was easier to work on this way in my environment. Once that other branch gets merged in, this could be merged into master.
```python
async def check_rate_limit(user_id):
    '''Reusable rate limiter with service-specific settings'''
    # TODO fetch from pmss/define appropriate window
    max_requests = 2
```
Something more descriptive, e.g. `max_user_requests_per_window`.
```python
def create_rate_limiter(service_name):
    '''Factory function for rate limiters with closure over service name'''
```
That repeats the function name. I would describe what the rate limiter does (e.g. the max number of requests per window). Basically, the autogenerated docs should tell me what this does.
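For instance, the docstring could describe the observable behavior directly. A minimal sketch of that kind of wording (the exact return semantics here are assumptions, not taken from this PR):

```python
def create_rate_limiter(service_name):
    '''
    Build an async rate-limit check for `service_name`.

    The returned coroutine takes a `user_id` and returns True if that user is
    still under the allowed number of requests for the current time window
    (recording the new request), or False once the limit has been exceeded.
    Requests are tracked per (service, user) pair.
    '''
```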
```python
    now = time.time()

    # Initialize user/service tracking
    key = f'rate_limit:{service_name}:{user_id}'
```
`limiter_key`
```python
    RATE_LIMITERS[key] = collections.deque()

    # Expire old requests
    timestamps = RATE_LIMITERS[key]
```
`request_timestamps`, maybe? Or `prior_requests_timestamps`?
```python
def rate_limited(service_name):
    '''Decorator for async functions needing rate limiting'''
```
This should clearly document what kinds of functions this can wrap, and the protocol. Things like this:

```python
if 'runtime' not in kwargs:
    raise TypeError(f'`{func.__name__}` requires `runtime` keyword argument for checking rate limits.')
```

should be in the docstring. Basically, I want to know how I use this.
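A minimal sketch of the kind of docstring and usage example being asked for; the handler name and service name below are hypothetical, not from this PR:

```python
def rate_limited(service_name):
    '''
    Decorator for async handlers that should be rate limited per user.

    The wrapped coroutine must be called with a `runtime` keyword argument;
    a TypeError is raised at call time if it is missing. The active user is
    looked up from the request, and if that user has exceeded the per-window
    limit for `service_name`, PermissionError is raised instead of calling
    the wrapped function.

    Example (hypothetical handler):
        @rate_limited('writing_analysis')
        async def analyze(request, runtime=None):
            ...
    '''
```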
```python
    runtime = kwargs['runtime']

    check_limit = create_rate_limiter(service_name)
```
`check_rate_limit` would be slightly nicer.
```python
    user = await learning_observer.auth.get_active_user(request)
    user_id = user[learning_observer.constants.USER_ID]
    if not await check_limit(user_id):
        raise PermissionError(f'Rate limit exceeded for {service_name} service')
```
I'm actually a bit confused when I'd want this behavior. It'd be helpful to know when this is used.

Seems like what I want:
- Queue up requests
- If they go obsolete (e.g. user navigates away), drop them
- If they don't, let the user know we're throttling and throttle them

Simply failing seems like it might be annoying to the user.
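A rough sketch of that queue/drop/throttle behavior, reusing the same sliding-window bookkeeping as the PR. The helper name `throttled_call`, the `is_stale` hook, and the default window/limit values are placeholders, not part of this PR:

```python
import asyncio
import collections
import time

RATE_LIMITERS = {}

async def throttled_call(key, func, *args, window=60, max_requests=2,
                         is_stale=None, **kwargs):
    '''Wait until `key` is under its rate limit, then call `func`.

    If `is_stale()` becomes true while waiting (e.g. the user navigated away),
    the queued request is dropped instead of executed.'''
    while True:
        timestamps = RATE_LIMITERS.setdefault(key, collections.deque())
        now = time.time()
        # Expire requests that have fallen outside the window
        while timestamps and timestamps[0] <= now - window:
            timestamps.popleft()
        if len(timestamps) < max_requests:
            timestamps.append(now)
            return await func(*args, **kwargs)
        if is_stale is not None and is_stale():
            return None  # request went obsolete while queued; drop it
        # Throttle: sleep until the oldest recorded request leaves the window
        await asyncio.sleep(timestamps[0] + window - now)
```

With something like this, the decorator could wait (and tell the user it is throttling) instead of raising PermissionError outright.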
I did a review. The code is fine on a code level, but I'm concerned on an algorithm level. I think what we want is closer to the queue/drop/throttle behavior described above, rather than simply failing.
Added a generic rate limiting decorator.
TODO