A Comparison Between Human Visual Perception Under Object Segmentation and Recognition with State-of-the-Art Deep Neural Networks
Abstract
This work compares the attention of deep convolutional neural networks with that of the human visual system when classifying objects. In the proposed research, diagnostic regions were computed for both the human visual system and several well-known deep convolutional networks; these regions are the most salient areas of each image, the areas that lead to accurate classification and thus presumably carry more meaning for each system than the rest of the image.
We computed the diagnostic features of each image in each category with five convolutional networks (VGG16, ResNet-50, EfficientNet-b0, AlexNet, and DenseNet-169), five saliency models (GBVS, Itti-Koch, Signature, Simpsal, and Spectral), and finally with the human visual perception system under a designed behavioral task.
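The project does not spell out here how diagnostic regions are extracted from the networks, but one common approach to finding image areas that drive a correct classification is occlusion analysis: slide a patch over the image and record how much the score of the true class drops. The sketch below is only an illustration of that idea with a toy classifier, not the authors' actual pipeline; the function names and parameters are assumptions.

```python
import numpy as np

def occlusion_map(image, classify, true_class, patch=8, stride=8):
    """Slide a gray patch over the image and record the drop in the
    classifier's score for the true class; large drops mark candidate
    diagnostic regions. `classify` maps an HxW image to class scores."""
    h, w = image.shape
    base = classify(image)[true_class]
    heat = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            # Replace the patch with the image mean (a neutral value).
            occluded[y:y + patch, x:x + patch] = image.mean()
            drop = base - classify(occluded)[true_class]
            heat[y:y + patch, x:x + patch] = drop
    return heat

# Toy classifier: the score of "class 0" is the mean brightness of the
# top-left quadrant, so occluding that quadrant should dominate the map.
def toy_classify(img):
    return np.array([img[:8, :8].mean(), img[8:, 8:].mean()])

img = np.zeros((16, 16))
img[:8, :8] = 1.0
heat = occlusion_map(img, toy_classify, true_class=0)
```

With a real network, `classify` would be the softmax output of one of the models listed above (e.g. a pretrained VGG16), and the heat map would highlight the candidate diagnostic regions of that image.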
Have a look at the following visual results for each section.
Results
On Deep CNNs
VGG16
ResNet-50
DenseNet-169
AlexNet
EfficientNet-b0
On Saliency Models (will be added)
GBVS
Itti-Koch
Signature
Simpsal
Spectral
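Of the saliency models listed above, the Spectral (spectral residual) model is compact enough to sketch directly: it assumes that the smooth part of an image's log-amplitude spectrum is statistically unremarkable, so subtracting it leaves a "residual" whose inverse transform highlights irregular, salient regions. The sketch below is a minimal NumPy illustration of that idea, not the exact implementation used in this project.

```python
import numpy as np

def spectral_residual_saliency(image):
    """Spectral Residual saliency (Hou & Zhang, 2007): subtract a
    locally smoothed log-amplitude spectrum from the original spectrum,
    then invert the residual back to image space as a saliency map."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # 3x3 box filter on the log spectrum via circular shifts.
    smooth = sum(np.roll(np.roll(log_amp, dy, 0), dx, 1)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

# A flat image with one odd bright pixel: the outlier should be salient.
img = np.zeros((32, 32))
img[16, 16] = 1.0
sal = spectral_residual_saliency(img)
```

The published method also smooths the final map with a Gaussian filter; that step is omitted here to keep the sketch dependency-free.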
Publications, Conferences and Presentations
You can also follow and check our project on the Open Science Framework here.