I am sharing some info about a problem I faced when running inference with a semantic segmentation model on large 2D images (>600 MB). BiaPy raised the error below:
```
/tmp/tmp8z1ohrf7: line 3: 16 Killed python3 -u /installations/BiaPy/main.py --config /BiaPy_files/input.yaml --result_dir /F/Lab/MouseMusclePOGLUT1/data/predictions/all_predictions_30012026_modelv2 --name input_modified --run_id 1 --dist_backend gloo
ERROR conda.cli.main_run:execute(127): conda run python3 -u /installations/BiaPy/main.py --config /BiaPy_files/input.yaml --result_dir /F/Lab/MouseMusclePOGLUT1/data/predictions/all_predictions_30012026_modelv2 --name input_modified --run_id 1 --dist_backend gloo failed. (See above for error)
```
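A "Killed" message with no Python traceback typically means the kernel's out-of-memory (OOM) killer ended the process, not BiaPy itself. As a back-of-envelope sketch of why 6 GB is not enough (the 4x float32 factor is an assumption; actual peak usage also depends on model size, patching strategy, and output channels):

```shell
# Rough, hypothetical estimate: a 600 MB uint8 image converted to float32
# takes 4x the space, before counting any intermediate tensors.
IMG_MB=600
FLOAT32_FACTOR=4
echo "Input tensor alone: $(( IMG_MB * FLOAT32_FACTOR )) MB"
```

With intermediate feature maps on top of that, memory use can easily exceed a 6 GB cap.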
I double-checked all the BiaPy settings and my computer's resources; everything looked fine. After a very long chat with ChatGPT, we realised the bottleneck came from the WSL2 configuration file `.wslconfig`, which Docker Desktop's WSL2 backend relies on. It lives at `C:\Users\<YOUR_USERNAME>\.wslconfig`.
There, the `memory` parameter was capping WSL2 at only 6 GB, and Docker Desktop offered no other way to see this limit than opening the file. I simply edited it, increasing the memory and processor settings (these keys go under the `[wsl2]` section):

```ini
[wsl2]
memory=64GB
processors=16
swap=32GB
```
Right after that, I had to run `wsl --shutdown` in Windows PowerShell and restart Docker. Once restarted, you can verify everything is fine by executing, inside the Docker container:
```shell
free -h   # total and available memory visible to the container
nproc     # number of processing units available
```
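The two checks above can also be combined into a single sanity-check snippet (a hypothetical helper, not part of BiaPy; `MemTotal` in `/proc/meminfo` is reported in kB):

```shell
# Print the memory and CPU count the container actually sees,
# so you can confirm the new .wslconfig limits took effect.
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
cpus=$(nproc)
echo "RAM: $((mem_kb / 1024 / 1024)) GB, CPUs: $cpus"
```

If the reported values still match the old limits, WSL was probably not restarted; run `wsl --shutdown` again and relaunch Docker.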
I hope this issue helps others! Best wishes.