
Adding bone sets #289

Open
Brehana-Naidu wants to merge 7 commits into main from adding-bone-sets

Conversation

@Brehana-Naidu
Collaborator

Pull Request Summary

Closes #219

Refactor bony_pelvis for backward compatibility

  • Reword generic comments to not mention pelvis specifically
  • Add template URL construction: template_{bonesetId}.json
  • Add comprehensive test documentation for multi-boneset support
  • Maintain SSRF prevention with isValidBoneId() validation

Allows new bone sets to be easily added by following the GitHub directory structure convention without code changes.
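To make the convention concrete, here is a minimal sketch of what the template URL construction and SSRF guard described above could look like. The base URL constant and function names are hypothetical illustrations; only the `template_{bonesetId}.json` naming pattern and the `isValidBoneId()` guard are taken from the PR summary.

```javascript
// Hypothetical base URL for raw files in the data repository; the real
// project may point somewhere else entirely.
const REPO_RAW_BASE =
  "https://raw.githubusercontent.com/example-org/example-repo/main";

// SSRF guard: allow only simple identifiers so a caller cannot steer the
// constructed URL toward another host or path.
function isValidBoneId(id) {
  return typeof id === "string" && /^[a-z0-9_]+$/i.test(id);
}

// Build the template URL following the template_{bonesetId}.json convention.
function templateUrlFor(bonesetId) {
  if (!isValidBoneId(bonesetId)) {
    throw new Error(`Invalid boneset id: ${bonesetId}`);
  }
  return `${REPO_RAW_BASE}/${bonesetId}/template_${bonesetId}.json`;
}
```

Because the URL is assembled only from a validated identifier, adding a new bone set is just a matter of publishing files under the matching directory name, with no server code changes.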

PR Checklist

  • Project builds and runs
  • Tests and linters pass
  • Any related documentation has been updated, including JSDoc comments or docstrings

Collaborator

@leandrumartin leandrumartin left a comment


While I like the approach you took to configuring the boneset URLs, it seems my issue description accidentally pointed some of the work in the wrong direction. Sorry for the confusion. I've clarified in my comment on server.js.

Also, it seems like this branch may have been branched off of your other pull request instead of main: there are changes in data_extraction/ here that appear to come from that PR, and they shouldn't be committed into this branch. In general, whenever you make a branch, it should be branched off of whichever branch you want to merge the pull request into. Since this branch has a pull request into main, it should have been branched off of main rather than any other branch. There are two ways to accomplish this:

  • Option 1:
    1. Check out main (git checkout main)
    2. Update main (git pull or git pull origin main)
    3. Branch from main: (git checkout -b <new-branch-name> or git branch <new-branch-name> && git checkout <new-branch-name>)
  • Option 2:
    1. On the issue's page on GitHub, go to "Development" in the sidebar and create a new branch from there
    2. Locally, fetch from remote (git fetch)
    3. Checkout newly created branch (git checkout <branch-name>)

*.pyc

# XML files
*.xml
Collaborator


Why was this added?

Collaborator


Why was this removed? This file still seems useful and as far as I'm aware has no bearing on the issue being resolved.

Collaborator


Nothing in the __pycache__ folder should be committed.

Collaborator


While I do appreciate the work on this, a test suite is not necessary at this time. At this stage it would slow us down, and I've already created other issues to cover test setup. We can get rid of this test suite for now.

I realize that my wording of the issue descriptions probably made it sound like server unit tests were necessary, so I'm sorry for the lack of clarity on that.

Collaborator


All of the bones will be stored in the DataPelvis/ folder in the database, so that does not need to vary for all of the endpoints here. While I do like the approach you took with having the endpoints take in an extra optional argument for the boneset, that is unnecessary. If an endpoint gets a sub-bone, for example, it will pull it from the same folder no matter which boneset the sub-bone is a member of.

What we're really looking for here is just a refactoring of the endpoints where the DEFAULT_BONESET_ID is being used (my IDE tells me there are two). Those endpoints currently grab all of their data from the single JSON file belonging to what is currently the only boneset available. To support adding more bonesets in the future, instead of grabbing only that one JSON file, they should loop through the data in all of the JSON files in the boneset/ directory in the database. The DEFAULT_BONESET_ID const would then be refactored out and removed, so that it is no longer hardcoded as the only boneset we have. Only the endpoints where the BONESET_JSON_URL const is being used would have to be refactored this way.

I'm sorry if the issue description wasn't clear; I understand how it may have caused confusion here. I've rewritten the issue description a bit to clarify what needs to be done.


Development

Successfully merging this pull request may close these issues.

Refactor in preparation for adding new bone sets

3 participants