Modular Inferencing Utilities #171
Open
alexander-huang-commits wants to merge 9 commits into CambridgeCIA:main from
Conversation
using inferencing scripts
added credit for file that I didn't write
Summary:
This adds a modular way to run inference with a single script across different experiment types and initializations (FBP, SIRT, and any other initializer you would like to add). It is modular in the sense that when you add new metrics or neural network models, you only need to update the list at the top of the inferencing script.
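The registry pattern described above can be sketched as follows; the metric functions and names here are illustrative stand-ins, not the PR's actual lists:

```python
# A minimal sketch of the "update the list at the top" idea: metrics live in a
# dict keyed by name, so adding one is a one-line change. The names below are
# hypothetical examples, not the real script's entries.
def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def mae(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

# Extending these dicts is all it takes for the inferencing script to pick up
# a new metric (or, analogously, a new model constructor).
METRICS = {"mse": mse, "mae": mae}

def evaluate(pred, target, metric_names):
    """Compute each requested metric on one prediction/target pair."""
    return {name: METRICS[name](pred, target) for name in metric_names}
```

The inferencing script would then loop `evaluate` over its test set and write the results to CSV.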
Changes Made:
- Added a statistics file that prompts for models and metrics, then compares them using ANOVA
- Added a generalized inferencing script with instructions at the top of the file (saves specified metrics to CSV and/or images of best/worst performance)
- Added a script to save npy images for all models in the ALL_MODELS folder
- Added a parameter-counting script to see the size of a neural network
- Added a CSV file tree for each experiment type
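The ANOVA comparison in the statistics file can be illustrated with a self-contained one-way ANOVA over per-sample metric scores; the grouping and formula below are a generic sketch, not the PR's implementation (which may well call a library routine instead):

```python
# A dependency-free sketch of one-way ANOVA across models: each group is one
# model's list of per-sample metric scores. Group contents are made up.
def one_way_anova(groups):
    """Return the F statistic for a list of score lists (one per model)."""
    k = len(groups)                     # number of groups (models)
    n = sum(len(g) for g in groups)     # total number of observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group variation: how far each model's mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group variation: spread of scores around each model's own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (with a correspondingly small p-value from the F distribution) suggests the models' metric scores genuinely differ rather than varying by chance.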
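The parameter-counting idea is simple to sketch. With PyTorch one would typically sum `p.numel()` over `model.parameters()`; the version below counts from raw weight shapes so it stays dependency-free, and the example layer shapes are hypothetical:

```python
# Hypothetical sketch of counting neural-network parameters from the shapes of
# its weight tensors (the real script likely iterates model.parameters()).
from math import prod

def count_parameters(shapes):
    """Total parameter count for a list of weight-tensor shapes."""
    return sum(prod(shape) for shape in shapes)

# Example: a 3x3 conv with 1 input channel and 16 filters, plus its bias
# vector, contributes 16*1*3*3 + 16 = 160 parameters.
total = count_parameters([(16, 1, 3, 3), (16,)])
```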
How to use:
There are READMEs in each folder explaining the file structure and how to use each script. In general, when a new model is made: