@geertvandeweyer
This PR contains:

  • fixed the tensor-batch-size implementation (the parameter was previously ignored)
  • offloaded more preprocessing to the CPU, avoiding repeated NumPy-to-tensor conversion of each batch during prediction
  • implemented a queue so that prepped batches are always ready for analysis
  • fixed a crash in the Invitae implementation when predictions were missing for some variants
  • added a Dockerfile and a Docker Hub entry based on this version
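The batch-size fix above amounts to actually chunking the work by the user-supplied size instead of ignoring it. A minimal sketch of that idea, with a hypothetical `iter_batches` helper (the PR's real logic lives in its prediction loop):

```python
def iter_batches(items, batch_size):
    """Yield consecutive slices of `items`, each at most `batch_size` long.

    Hypothetical helper illustrating the fix: the supplied batch size
    now determines how many records go through prediction at once.
    """
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]


# e.g. iter_batches(list(range(5)), 2) yields [0, 1], [2, 3], [4]
```

Converting each chunk to a tensor once, up front, is what avoids the repeated NumPy-to-tensor conversion mentioned above.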
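The queue item above can be sketched as a standard producer/consumer pattern: a background thread preps batches while the model consumes them, so prediction never stalls waiting on batch construction. `make_batch` and the generator name below are hypothetical stand-ins, not the PR's actual API:

```python
import queue
import threading


def prefetch_batches(make_batch, n_batches, maxsize=4):
    """Yield batches prepared ahead of time by a background thread.

    Sketch only: `make_batch(i)` stands in for whatever preps and
    converts batch i (e.g. NumPy -> tensor), done once per batch.
    `maxsize` bounds how many prepped batches wait in the queue.
    """
    q = queue.Queue(maxsize=maxsize)
    sentinel = object()  # signals the producer is done

    def producer():
        for i in range(n_batches):
            q.put(make_batch(i))  # blocks if the queue is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()

    while True:
        batch = q.get()
        if batch is sentinel:
            break
        yield batch


# e.g. consume three toy "batches" while the producer keeps the queue filled:
# for batch in prefetch_batches(lambda i: [i, i + 1], 3): ...
```

A bounded queue keeps memory in check: the producer blocks once `maxsize` prepped batches are waiting, rather than materializing everything ahead of the consumer.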

Updated benchmarks show a ~2.5x performance gain over the current version.
