Blowing up Memory and needing to restart computer #21
Description
I often hit an issue specifically with the Quantops checkpoint node: when it reaches 'Load Quantized Model', the total committed memory usage in Task Manager blows up, Python crashes, and my entire system becomes unstable. Some background services die, and Comfy won't even run again until I restart.
I've already posted details which can be read here if need be: Comfy-Org/ComfyUI#13115
In short: I have the wheels installed, Comfy is up to date, and I'm on the correct torch, Python, etc. It has generally been working very well for me, but the crash sometimes happens when I've forgotten to unload the cache or when using certain workflows. It happened with a v2v workflow (which I guess required more resources; it crashed even with a lot of special arguments active to optimize for my system) and with a video extender workflow.
Is this something anyone else has experienced? I just thought I'd bring it to the dev's attention in case it can be easily resolved, or get some tips on how to avoid it if possible. I've enabled `--disable-cuda-malloc`, `--cache-none`, and `--lowvram` (I'm on 8 GB VRAM / 32 GB RAM).
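For reference, this is roughly how I'm launching Comfy with those flags (the `python main.py` entry point and any venv activation are illustrative for my setup; the flags themselves match ComfyUI's documented CLI options):

```shell
# Launch ComfyUI with the memory-related flags I have enabled:
#   --disable-cuda-malloc  : don't use CUDA's async malloc allocator
#   --cache-none           : keep as little as possible cached between runs
#   --lowvram              : split model loading for low-VRAM GPUs
python main.py --disable-cuda-malloc --cache-none --lowvram
```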
I will note that when I first tried running INT8 models WITHOUT installing the wheel, it showed similar memory spiking, but that went away once the wheel was installed, so could it be related to that? Also, this issue only occurs when using the INT8tensormixed LTX 2.3 model.
Error in pic below.
Thanks.