
Conversation

@Bhavya1604
Contributor

Hi @henrykironde, @bw4sz and @jveitchmichaelis
This draft PR contains a working prototype script that implements the "predict and delete" strategy to handle double-counting of objects in overlapping images. I've adapted the core logic from the DoubleCounting repo and integrated it into the standalone DoubleCounting.py script.

I tested the workflow on a dataset with 70-80% overlap using the "left-hand" strategy for clear visualization. The blue boxes are all initial predictions, while the pink boxes are the final, unique predictions for that image.

Output:
[image]

We can observe that the top predictions are in pink, meaning they are unique (new for that image), which indicates that the code correctly identifies the overlap and is working as intended. There were a total of 401 predictions, of which 194 were detected as unique.
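
For reviewers unfamiliar with the approach, here is a minimal, simplified sketch of the predict-and-delete idea. This is not the actual DoubleCounting logic: it assumes all boxes have already been projected into a shared coordinate frame, and iou_threshold is an illustrative parameter.

    # Simplified sketch only: assumes boxes from all images are already in a
    # shared coordinate frame; iou_threshold is an illustrative parameter.
    def iou(a, b):
        """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def unique_boxes(images_boxes, iou_threshold=0.4):
        """Walk images left to right, keeping only boxes not seen before."""
        kept, unique_per_image = [], []
        for boxes in images_boxes:
            new = [b for b in boxes
                   if all(iou(b, k) < iou_threshold for k in kept)]
            kept.extend(new)
            unique_per_image.append(new)
        return unique_per_image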

This PR contains the standalone DoubleCounting.py script for review. Before I start integrating this into the main DeepForest library, I would greatly appreciate your feedback.

@bw4sz
Collaborator

bw4sz commented Aug 7, 2025

Thanks for your thoughts here. Your example is somewhat difficult to follow because there are a lot of boxes; could you use the deepforest bird model and the data here: https://github.com/weecology/DoubleCounting/tree/main/tests/data/birds

I think that if we merge this, we would need a separate pip install, since the dependencies are heavy compared to the rest of the repo. So I imagine something like

pip install deepforest[double_counting]. I think this would be an extra in the .toml:

    # pyproject.toml
    [project]
    name = "my_package"
    version = "0.1.0"

    [project.optional-dependencies]
    subpackage_extra = [
        "dependency_for_subpackage_extra_1",
        "dependency_for_subpackage_extra_2",
    ]
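
In the code, the heavy imports could then be guarded so the error message points users at the extra. A minimal sketch of that pattern (assuming, purely for illustration, that opencv is one of the heavy dependencies):

    # Sketch of an optional-dependency guard; the module and extra names
    # are illustrative, not final.
    try:
        import cv2  # heavy dependency used only by double counting
    except ImportError as e:
        raise ImportError(
            "Double counting requires extra dependencies. "
            "Install them with: pip install deepforest[double_counting]"
        ) from e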

Then we would need to collect several other datasets to get a handle on how well this generalizes and which parameters are sensitive. These parameters would need to go in the hydra config. The general workflow would be something like:

  1. Make a function that takes in a list of images.
  2. Use predict_tile on each image
  3. Run double counting
  4. Produce visualizations
  5. Return a results object of unique data

All of this is in the module you attached, but it would need integration into deepforest.main(),

plus a documentation page with examples.
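
For concreteness, a rough sketch of what that wrapper might look like. predict_tile (with patch_size and patch_overlap) is the existing DeepForest API; run_double_counting and plot_results are hypothetical stand-ins for functions in the attached module:

    # Hypothetical shape of deepforest.main.predict_unique, following the
    # five steps above; run_double_counting and plot_results are stand-ins.
    def predict_unique(model, image_paths, patch_size=400, patch_overlap=0.05,
                       visualize=False):
        # 1-2. Take a list of images and run predict_tile on each
        predictions = [
            model.predict_tile(raster_path=p, patch_size=patch_size,
                               patch_overlap=patch_overlap)
            for p in image_paths
        ]
        # 3. Run double counting across the overlapping images
        unique = run_double_counting(predictions)
        # 4. Optionally produce visualizations
        if visualize:
            plot_results(image_paths, predictions, unique)
        # 5. Return a results object of unique detections
        return unique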

Roadmap

  • example with existing test data
  • gather new examples to assess parameter sensitivity
  • package extra install for unique dependencies
  • documentation page
  • integrate a function with deepforest main (unique_predictions_images in your module; maybe call it deepforest.main.predict_unique)

@bw4sz added the labels Feature Request, dependencies, Ideas for Machine Learning!, and To be documented on Aug 7, 2025
@Bhavya1604
Contributor Author

@bw4sz Thank you again for the detailed feedback and the clear roadmap. The plan for optional dependencies and integrating the feature as predict_unique makes perfect sense.

As you suggested, I've rerun the workflow on the bird dataset to provide a clearer example of the result.
[image]

I'll start testing it with different data and let you know the sensitive parameters.

@bw4sz
Collaborator

bw4sz commented Aug 18, 2025

Great, let me know if you need help.

@Bhavya1604
Contributor Author

@bw4sz I could only find a few overlapping datasets on Kaggle and GitHub. The rest of the data I found was either not in order, or consisted of aerial images of different places without overlaps. Where can I find more datasets to test this? Are there any keywords that would help me, or any place I could search for these types of datasets?

@bw4sz
Collaborator

bw4sz commented Oct 8, 2025

We have a number of datasets; let me look into this today. Can you look at the roadmap above and summarize which pieces are completed, which you plan to do, and which I can help you with? This is great stuff, and I have some time this week to assist in review and get it over the finish line. Thanks for the contribution!

@Bhavya1604
Contributor Author

Sorry, for the past two months I haven't been able to focus on this as I was caught up with lots of things.

Now I have more time, so I will start with the documentation and the extra package install today, and once I have the data I will test and observe the sensitive parameters.

I'll give regular updates on what's completed and where I'm stuck.

@bw4sz
Collaborator

bw4sz commented Oct 10, 2025 via email

@codecov

codecov bot commented Oct 12, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 87.61%. Comparing base (e20f94f) to head (830c135).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1098      +/-   ##
==========================================
+ Coverage   87.43%   87.61%   +0.18%     
==========================================
  Files          20       20              
  Lines        2538     2544       +6     
==========================================
+ Hits         2219     2229      +10     
+ Misses        319      315       -4     
Flag      | Coverage Δ
unittests | 87.61% <ø> (+0.18%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@Bhavya1604
Contributor Author

Bhavya1604 commented Oct 16, 2025

@bw4sz I’ve moved the code to evaluate.py and main.py, and added documentation along with separate dependencies.
What’s left is adding pytest coverage (which I’ll do once this structure is confirmed) and testing for sensitive parameters.
It might not be exactly as you intended, so I’ll make adjustments as needed.
I’m also a bit unsure about how I added the separate dependencies, so please let me know if any changes are needed there.

@Bhavya1604
Contributor Author

Bhavya1604 commented Oct 28, 2025

@bw4sz How will you share the testing data with me?

@Bhavya1604
Contributor Author

@bw4sz I hope you are doing well.
I wanted to know if you have reviewed the latest commit.

