Conversation

```diff
 ## Training Data

+V2 uses data out of [NudeNet_v1](https://archive.org/details/NudeNet_classifier_dataset_v1)
```
I'd rather link to the project website. Or maybe both?
http://bpraneeth.com/projects/nudenet
```diff
 public required init() {
-    guard let model = try? VNCoreMLModel(for: NSFW().model) else {
+    guard let model = try? VNCoreMLModel(for: NSFW(configuration: MLModelConfiguration()).model) else {
```
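For context, the updated loading path above can be wrapped like this; a minimal sketch, assuming the package's generated `NSFW` model class, and with the `.cpuOnly` compute-units setting only as an assumed workaround for the simulator trouble discussed below:

```swift
import CoreML
import Vision

// Sketch of the new loading path: the generated `NSFW` class now takes
// an explicit MLModelConfiguration, as newer Core ML codegen requires.
func makeNSFWModel() throws -> VNCoreMLModel {
    let configuration = MLModelConfiguration()
    // Assumption: forcing CPU-only execution is a common workaround for
    // Core ML failures on iOS 15 simulators; drop this line on device.
    configuration.computeUnits = .cpuOnly
    let nsfw = try NSFW(configuration: configuration)
    return try VNCoreMLModel(for: nsfw.model)
}
```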
I see mostly refactoring work, plus this new configuration parameter.
What did I miss? I thought this release was about fixing the Xcode 13 trouble.
This release improves the dataset and makes the package iOS-agnostic, so it can also run on macOS.
Oh, so the tests now run on macOS and therefore don't fail? We still don't know why they fail on iOS, though? Are you sure the model always works at runtime, then?
@fabianehlert once shared an Apple Forums link with me showing that machine learning on iOS 15 simulators is currently broken.
```diff
 private func didDetectNSFW(confidence: Float) {
-    if confidence > 0.8 {
+    if confidence > 0.9 {
```
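The changed condition above boils down to a plain comparison; a minimal sketch with a hypothetical `isNSFW` helper (not part of the library), where the 0.9 default mirrors the new value in the diff:

```swift
// Hypothetical helper mirroring the raised threshold; 0.9 comes from
// the diff above and is not a library constant.
func isNSFW(confidence: Float, threshold: Float = 0.9) -> Bool {
    confidence > threshold
}
```

Keeping the cutoff as a parameter makes it easy to experiment with stricter values such as 0.95, as suggested below.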
How did you arrive at 0.9 here? Experimentally? I feel like that's still a bit on the maybe-side and would increase it to 0.95.
Well, it depends on the content, of course. For the camera demo I think it works quite well, because most people aren't gonna strip completely naked to try it out :D
Upon some "personal testing" I found this version of the model to be suboptimal at detecting male genitals. My guess is that the training data set is biased, because whenever there is a male genital, there is also a woman in the training images.
So I will try it out some more and try to improve it for that use case.
I think we should! That's the use case we're most interested in after all, no?
Yep, that's why I tested it ;)
I totally agree with @leberwurstsaft – it is the use case for us.
A super small sample of the linked training data also left me with the impression that the model could have a weakness in that regard.
Also, the nudity files seem to contain a lot of textual overlays in the corners, so the model might tend to flag images as nude whenever there is text in a corner. But that shouldn't be too big of a problem as long as it doesn't increase the number of false negatives too much.
I've tried v2.0 but it doesn't work. It gives me 0% every time.
Is this project still under development?