jhrudden/Tumor-Segmentation

Brain Cancer Segmentation

Datasets

The datasets used in this project are available on Kaggle.

Setup Instructions

Step 1: Install Dependencies

Install the necessary Python packages using the requirements.txt file:

pip install -r requirements.txt

Alternatively, you can create a conda environment using the following commands:

conda env create -f environment.yaml
conda activate tumor-segmentation

Step 2: Setting Up Pre-commit Hooks

Our project uses pre-commit hooks to ensure the cleanliness and consistency of Jupyter notebooks by automatically stripping outputs before they are committed. This step helps maintain a clean git history and minimizes "diff noise."

After installing the project dependencies, activate the pre-commit hooks by running the following command:

pre-commit install

This command sets up the hooks defined in our project's .pre-commit-config.yaml and only needs to be run once. From then on, notebook outputs are stripped automatically whenever you commit.
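For reference, a minimal .pre-commit-config.yaml that strips notebook outputs might look like the following sketch. The exact hook and pinned version in this repo's config may differ; nbstripout is one common choice for this job:

```yaml
repos:
  - repo: https://github.com/kynan/nbstripout
    rev: 0.6.1
    hooks:
      - id: nbstripout
```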

Step 3: Setup Environment Variables

To create a base configuration for the project, run the following command:

cp config/env_local.env .env

This creates a .env file in the root directory of the project. Before running the training and testing scripts, you will need to fill in the values in the .env file.
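The scripts presumably read these values at runtime. As a minimal stdlib-only sketch of how a .env file can be loaded (the project may instead rely on a library such as python-dotenv), assuming simple KEY=VALUE lines:

```python
import os


def load_env(path: str = ".env") -> None:
    """Load KEY=VALUE pairs from a .env file into os.environ.

    Blank lines and lines starting with '#' are skipped.
    Variables already present in the environment are not overwritten.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Calling `load_env()` once at script startup makes the configured values available through `os.environ`.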

Step 4: Kaggle API Authentication

Set up your Kaggle API credentials by following the authentication instructions in the Kaggle API documentation.
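In short, the Kaggle API expects a kaggle.json token (downloadable from your Kaggle account page) at ~/.kaggle/kaggle.json with owner-only permissions. A small sanity-check sketch (not part of the repo) that verifies the token is in place:

```python
import json
import stat
from pathlib import Path


def check_kaggle_credentials(config_dir: Path = Path.home() / ".kaggle") -> bool:
    """Return True if a kaggle.json token with the expected fields exists."""
    token = config_dir / "kaggle.json"
    if not token.is_file():
        print(f"Missing {token}; download it from your Kaggle account page.")
        return False
    creds = json.loads(token.read_text())
    if not {"username", "key"} <= creds.keys():
        print("kaggle.json must contain 'username' and 'key'.")
        return False
    # The Kaggle client warns unless the token is readable only by its owner.
    mode = stat.S_IMODE(token.stat().st_mode)
    if mode & 0o077:
        print(f"Warning: run 'chmod 600 {token}' to restrict access.")
    return True
```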

Step 5: Download Datasets

Refer to the notebooks/downloading_datasets.ipynb notebook for step-by-step instructions on using the Kaggle API to download the datasets required for this project. The datasets will be downloaded to the ./datasets folder, which is configured to be ignored by git.
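Kaggle serves datasets as zip archives. If you download an archive manually rather than through the notebook, a stdlib-only sketch for unpacking it into the git-ignored ./datasets folder:

```python
import zipfile
from pathlib import Path


def extract_dataset(archive: str, dest: str = "./datasets") -> list:
    """Unpack a Kaggle zip archive into dest and return the extracted names."""
    dest_path = Path(dest)
    dest_path.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest_path)
        return zf.namelist()
```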

Loading Datasets

For an example of how to load the classification or segmentation datasets, see the notebooks/classification_dataloader_example.ipynb and notebooks/segmentation_dataloader_example.ipynb, respectively.
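The concrete loader classes live in those notebooks and in src. As a rough, framework-agnostic illustration of the map-style dataset pattern such loaders typically follow (the class and directory names here are hypothetical):

```python
from pathlib import Path


class SegmentationDataset:
    """Map-style dataset pairing MRI slices with their tumor masks.

    Illustrative only: a real loader would decode the images (e.g. with
    PIL or nibabel) and apply transforms; here items are just file paths.
    """

    def __init__(self, image_dir: str, mask_dir: str):
        self.images = sorted(Path(image_dir).glob("*.png"))
        self.masks = sorted(Path(mask_dir).glob("*.png"))
        assert len(self.images) == len(self.masks), "image/mask count mismatch"

    def __len__(self) -> int:
        return len(self.images)

    def __getitem__(self, idx: int):
        # Returns an (image_path, mask_path) pair for the given index.
        return self.images[idx], self.masks[idx]
```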

Run Experiments

Semantic Segmentation

See src/scripts/train_segmentation.py for the logic behind segmentation experiments. For more information on the available training configurations, run the following from the root directory:

python -m src.scripts.train_segmentation --help
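The exact flags are defined by the script itself; as an illustration, such a `--help`-driven CLI is typically built with argparse along these (hypothetical) lines:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of a segmentation-training CLI."""
    parser = argparse.ArgumentParser(
        description="Train a tumor segmentation model"
    )
    parser.add_argument("--epochs", type=int, default=10, help="training epochs")
    parser.add_argument("--batch-size", type=int, default=8, help="mini-batch size")
    parser.add_argument("--lr", type=float, default=1e-3, help="learning rate")
    return parser
```

Running the module with `--help` prints these options and their defaults, which is what the command above surfaces for the real script.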

About

This repo is my trial-by-fire introduction to computer vision, featuring practical experiments, code, and key insights.
