
Fine-tuned SGGen model mAP result #204

@narchitect

Description

Hello everyone,

I hope you can provide some insights on a matter we've been grappling with. We've been working with the pretrained Faster R-CNN model provided in this repository and trying to fine-tune it on our own dataset. However, because the bbox layers have to be removed when training SGGen, the bbox detection layers end up being trained solely on our dataset, without the benefit of the pretrained weights. Consequently, our mAP (mean Average Precision) struggles to exceed 10%.
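For reference, here is a minimal sketch of what I mean by partially reusing the pretrained detector. It uses torchvision's Faster R-CNN rather than this repository's exact detector API, and the checkpoint path and class count are placeholders, so treat it as an illustration rather than our actual training code:

```python
import torch
import torchvision

# Placeholder: 23 object classes + background.
NUM_CLASSES = 23 + 1

# Fresh detector with our class count; the box predictor starts randomly initialized.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES
)

# Placeholder checkpoint path; the file may store the weights directly
# or under a "model" key, depending on how it was saved.
checkpoint = torch.load("pretrained_faster_rcnn.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)

# Keep backbone/FPN/RPN weights, but skip the bbox classification/regression
# head, whose shapes depend on the number of classes and no longer match.
filtered = {
    k: v for k, v in state_dict.items()
    if not k.startswith("roi_heads.box_predictor")
}

missing, unexpected = model.load_state_dict(filtered, strict=False)
print(f"re-initialized keys: {len(missing)}, unused pretrained keys: {len(unexpected)}")
```

Even with the backbone and RPN initialized from the checkpoint like this, the re-initialized detection head still has to learn from only our small dataset, which is presumably why our detection mAP stays so low.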

Just to provide some context, our dataset comprises 377 similar images and includes 23 different classes, which, admittedly, doesn't make for an ideal scenario.

As a result, we've observed that the best mAP we could achieve using the SGGen model from this repository is approximately 25%. Given the challenges posed by our less-than-optimal data quality, we believe that an mAP of around 12% is the best we can realistically hope for from fine-tuned models that require bbox detection, such as SGGen.

Now, I'd like to ask the community whether anyone has experience fine-tuning SGGen models and has achieved mAP values higher than 25%. We're particularly interested in whether an mAP of around 10% should be considered acceptable in this context.
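For completeness, when we quote mAP we mean plain bounding-box mAP over our 23 classes. Below is a minimal sketch of an equivalent check using torchmetrics rather than this repository's evaluation code; the boxes and labels are toy examples, not our data:

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision  # requires pycocotools

# COCO-style bbox mAP of predictions against ground truth; values here are toy examples.
metric = MeanAveragePrecision(box_format="xyxy", iou_type="bbox")

preds = [{
    "boxes": torch.tensor([[10.0, 10.0, 50.0, 60.0]]),
    "scores": torch.tensor([0.9]),
    "labels": torch.tensor([3]),
}]
targets = [{
    "boxes": torch.tensor([[12.0, 11.0, 48.0, 58.0]]),
    "labels": torch.tensor([3]),
}]

metric.update(preds, targets)
result = metric.compute()
print(result["map"], result["map_50"])  # mAP@[.5:.95] and mAP@0.5
```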

Thank you in advance for sharing your insights and experiences. We look forward to your valuable input!
