
Evaluation always produces mAP of 0.0 when using backbones other than Resnet50 #647

Open
@jpxrc

Description


First and foremost, thank you for the awesome package! The dataset I am using consists of satellite images with 29 different classes. I have been able to train and evaluate a retinanet model on a subset of the 29 classes using the default 'resnet50' backbone.

However, when I switch over to training and evaluating a model with a different backbone network such as 'densenet121', the mAP score for every class is zero. I'm not seeing any errors during training (I am also using the random-transform flag for each epoch) or when converting the model (I also supply the --backbone='densenet121' flag), and it converts successfully. I can also see the losses being optimized during training, so it's definitely detecting and classifying the objects in the images.
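For reference, here is a minimal sketch of the keras-retinanet CLI workflow I am describing, assuming the standard fizyr scripts; the paths and snapshot names are placeholders. The point is that the same --backbone flag has to be passed to all three steps, since each script rebuilds the model architecture from that name:

```shell
# Train with an explicit backbone (placeholder CSV paths).
retinanet-train --backbone densenet121 --random-transform \
    csv annotations.csv classes.csv

# Convert the training snapshot to an inference model with the SAME backbone.
retinanet-convert-model --backbone densenet121 \
    snapshots/densenet121_csv_50.h5 inference.h5

# Evaluate; the backbone must match here as well.
retinanet-evaluate --backbone densenet121 \
    csv validation_annotations.csv classes.csv inference.h5
```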

I even tried using the original resnet50 model trained on a subset of classes, to see if it would pick up those classes on the full dataset with 29 classes, and it still produces an mAP of zero. I looked at the validation_annotations.csv file for both cases and the formatting is identical, so I don't think the problem is with the annotation files.
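For context on how the evaluation can report exactly 0.0 even though the losses show the model is detecting things: keras-retinanet's CSV evaluation computes a VOC-style per-class average precision, roughly like the sketch below. This is an illustrative reimplementation, not the package's own code. If no detections end up assigned to a class (every score filtered out below the threshold, or labels mapped to the wrong class index), the precision-recall curve for that class is empty and its AP is exactly 0.0:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, ground_truth, iou_thresh=0.5):
    """VOC-style AP for one class on one image.

    detections: list of ([x1, y1, x2, y2], score); ground_truth: list of boxes.
    """
    # Greedily match detections to ground truth in descending score order.
    detections = sorted(detections, key=lambda d: d[1], reverse=True)
    matched = set()
    tp = np.zeros(len(detections))
    fp = np.zeros(len(detections))
    for i, (box, _) in enumerate(detections):
        best_j, best_iou = None, iou_thresh
        for j, gt_box in enumerate(ground_truth):
            overlap = iou(box, gt_box)
            if j not in matched and overlap >= best_iou:
                best_j, best_iou = j, overlap
        if best_j is not None:
            matched.add(best_j)
            tp[i] = 1.0
        else:
            fp[i] = 1.0
    recall = np.cumsum(tp) / max(len(ground_truth), 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # Interpolated area under the precision-recall curve.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    changed = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[changed + 1] - mrec[changed]) * mpre[changed + 1]))

gt = [[10.0, 10.0, 50.0, 50.0]]
# A confident, well-placed detection gives a non-zero AP.
print(average_precision([([12.0, 11.0, 49.0, 52.0], 0.9)], gt))  # -> 1.0
# No detections survive for the class -> AP is exactly 0.0.
print(average_precision([], gt))  # -> 0.0
```

So an mAP of flat 0.0 across all classes is consistent with the detections being filtered out (or mis-mapped) before matching, rather than the model failing to learn.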

I have attached the validation_annotations.csv and classes.csv files (converted to .txt in order to attach them here):
common_classes.txt
common_validation_annotations.txt

Any ideas what could be going on?

EDIT: I just did a comparison of a Resnet50 model and a Densenet121 model, both trained on the same dataset that I know for sure works, and the problem is definitely with the densenet121 implementation, because the Resnet50 model produces output during evaluation.
