Simple example created by a group at the "Next generation bioimage analysis workflows hackathon".
- Clone this repository
- `cd` into the repository folder
- Modify the `data/input_params.yaml` file to point to the absolute input image path
- Run the workflow. If you have a conda environment with Nextflow installed, you can run:

  ```
  pixi run nextflow run . -params-file data/input_params_local.yaml -profile conda
  ```

  or run with Docker:

  ```
  pixi run nextflow run . -params-file data/input_params_local.yaml -profile docker
  ```
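The params file is plain YAML; a minimal sketch of what `data/input_params_local.yaml` could contain (the key names below are illustrative assumptions, not the repo's actual schema):

```yaml
# Hypothetical keys for illustration only; check data/input_params.yaml
# in this repo for the real parameter names.
input_image: /absolute/path/to/image.zarr  # absolute path to the OME-Zarr input
outdir: results                            # where the workflow writes outputs
```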
- Explore what nf-core gives us for specifying inputs and outputs
- Explore storing versioning files as in nf-core. Update: putting all version-logging files at the root for now.
- Create a GitHub repo for the code below
- Create a minimal workflow in Nextflow that uses OME-Zarr (see the Python sketch after this list):
  - Process 1: Create a new Gaussian-blurred OME-Zarr image
  - Process 2: Segment the image
  - Process 3: Measure segment shape features
- Should the input be only one scale, or multiple?
- How to handle the multi-scales for the outputs?
- Root OME-Zarr + subfolder strings as input; NumPy/dask objects as image/labels data (see the loading sketch at the end)
- Label image stored within the same OME-Zarr file as the image (see the writer sketch after this list)
- 'Hacked' Nextflow IO to allow for reading/writing valid OME-Zarr files
- Where/how to store the table?
- A more tightly connected image visualisation tool?
- Integrate Fractal tasks into Nextflow
- Bonus
  - Process only a part of an image
  - Use a Fractal task as one of the Nextflow processes
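A minimal Python sketch of the three processing steps (Process 1-3 above), assuming plain 2D arrays. In the actual workflow each step would be a separate Nextflow process reading from and writing to OME-Zarr; the function names and parameters here are illustrative, not this repo's API:

```python
# Illustrative sketch: blur -> segment -> measure on a 2D numpy array.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops_table

def blur(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    # Process 1: Gaussian blur; preserve_range keeps the original intensity scale
    return gaussian(image, sigma=sigma, preserve_range=True)

def segment(blurred: np.ndarray) -> np.ndarray:
    # Process 2: Otsu threshold, then connected-component labelling
    return label(blurred > threshold_otsu(blurred))

def measure(labels: np.ndarray) -> dict:
    # Process 3: per-segment shape features, one column per property
    return regionprops_table(labels, properties=("label", "area", "eccentricity"))
```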
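For the "label image in the same OME-Zarr file" item: the OME-NGFF spec stores labels under a `labels/<name>` subgroup of the image, which ome-zarr-py's writer handles. A sketch, with a hypothetical path and assumed shape/axes:

```python
# Write a label image into the same OME-Zarr container as the intensity image.
# Path, shape, and axes below are assumptions for illustration.
import numpy as np
import zarr
from ome_zarr.io import parse_url
from ome_zarr.writer import write_labels

store = parse_url("/path/to/image.zarr", mode="w").store  # hypothetical path
root = zarr.group(store=store)

labels = np.zeros((1, 64, 64), dtype=np.uint16)  # stand-in (z, y, x) segmentation
write_labels(labels, group=root, name="segmentation", axes="zyx")
# The labels now live under /path/to/image.zarr/labels/segmentation
```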
- Important tools and libraries require arrays of a specific input dimensionality to operate on, so we need convenient APIs and implementations that allow us to subset OME-Zarr.
- Very few (none?) of the current tools natively work on multi-resolution input, so being able to specify which resolution level to work on is important, and it is not well supported by the Python libraries that we found (see the loading sketch below).
- YOLO
- SAM
- Cellpose
  - Is it really RGB?
- elastix
- skimage ?!
- ImgLib2 ?!
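A sketch of how such inputs could be opened: root OME-Zarr path plus subfolder string in, a lazy dask array for one chosen resolution level out. It assumes the common OME-NGFF layout where multiscale levels are arrays named "0", "1", ... inside the image group, and a 5D (t, c, z, y, x) axis order; in practice the axes should be read from the multiscales metadata rather than assumed:

```python
# Open one resolution level of an OME-Zarr image as a lazy dask array.
import dask.array as da
import zarr

def load_level(zarr_root: str, subfolder: str = "", level: int = 0) -> da.Array:
    # Multiscale levels are stored as arrays "0", "1", ... inside the group.
    group = zarr.open_group(zarr_root, mode="r")
    if subfolder:
        group = group[subfolder]
    return da.from_zarr(group[str(level)])

# Most 2D tools (Cellpose, skimage, ...) want a plain (y, x) plane, so subset:
image = load_level("/path/to/image.zarr", level=1)  # hypothetical path
plane = image[0, 0, 0]  # first timepoint/channel/z-slice -> 2D dask array
```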