
@briossant briossant commented Oct 30, 2025

This Draft Pull Request introduces an interactive voxel annotation feature, allowing users to perform manual segmentation by painting directly onto volumetric layers. This implementation is based on the proposal in Issue #851 and incorporates the feedback from @jbms.

Key Changes & Architectural Overview

Following the discussion, this implementation has been significantly revised from the initial prototype:

  1. Integration with Existing Layers: Instead of a new vox layer type, the voxel editing functionality is now integrated directly into ImageUserLayer and SegmentationUserLayer via a UserLayerWithVoxelEditingMixin. This mixin adds a new "Draw" tab in the UI.
[screenshot: the new "Draw" tab]
  2. New Tool System: The Brush and Flood Fill tools are implemented as toggleable LayerTools, while the Picker tool is a one-shot tool. All integrate with Neuroglancer's new tool system. The drawing action is bound to Ctrl + Left Click.

  3. Optimistic Preview for Compressed Chunks: To provide immediate visual feedback and solve the performance problem with compressed chunks, edits are now rendered through an optimistic preview layer.

  • When a user paints, edits are first applied to an InMemoryVolumeChunkSource.
  • This preview source is rendered by a second instance of the layer's primary RenderLayer (e.g., ImageRenderLayer or SegmentationRenderLayer). This ensures the preview perfectly matches the user's existing shader and display settings.
  • The base data chunk is not modified on the frontend, avoiding the need to decompress/recompress it.
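The optimistic write path can be sketched as follows. This is an illustrative sketch only: `ChunkEdit` and `applyLocalEdits` are hypothetical names, not the PR's actual API, and real chunk data may be wider than `Uint8Array`.

```typescript
// Hypothetical sketch of the optimistic-preview write path.
interface ChunkEdit {
  offset: number; // linear voxel index within the chunk
  value: number;  // label/intensity value to paint
}

// Apply edits to a copy of the preview chunk's data. The base chunk is
// left untouched, so compressed base data never needs to be
// decompressed/recompressed on the frontend.
function applyLocalEdits(previewData: Uint8Array, edits: ChunkEdit[]): Uint8Array {
  const out = previewData.slice(); // copy: the base data is never mutated
  for (const e of edits) {
    if (e.offset >= 0 && e.offset < out.length) out[e.offset] = e.value;
  }
  return out;
}
```

The returned array is what the preview `RenderLayer` instance would upload to the GPU, while the base chunk stays pristine until the backend commit lands.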

Data-flow

```mermaid
sequenceDiagram
participant User
participant Tool as VoxelBrushTool
participant ControllerFE as VoxelEditController (FE)
participant EditSourceFE as OverlayChunkSource (FE)
participant BaseSourceFE as VolumeChunkSource (FE)
participant ControllerBE as VoxelEditController (BE)
participant BaseSourceBE as VolumeChunkSource (BE)

    User->>Tool: Mouse Down/Drag
    Tool->>ControllerFE: paintBrushWithShape(mouse, ...)
    ControllerFE->>ControllerFE: Calculates affected voxels and chunks

    ControllerFE->>EditSourceFE: applyLocalEdits(chunkKeys, ...)
    activate EditSourceFE
    EditSourceFE->>EditSourceFE: Modifies its own in-memory chunk data
    note over EditSourceFE: This chunk's texture is re-uploaded to the GPU
    deactivate EditSourceFE

    ControllerFE->>ControllerBE: commitEdits(edits, ...) [RPC]

    activate ControllerBE
    ControllerBE->>ControllerBE: Debounces and batches edits
    ControllerBE->>BaseSourceBE: applyEdits(chunkKeys, ...)
    activate BaseSourceBE
    BaseSourceBE-->>ControllerBE: Returns VoxelChange (for undo stack)
    deactivate BaseSourceBE
    ControllerBE->>ControllerFE: callChunkReload(chunkKeys) [RPC]
    activate ControllerFE
    ControllerFE->>BaseSourceFE: invalidateChunks(chunkKeys)
    note over BaseSourceFE: BaseSourceFE re-fetches chunk with the now-permanent edit.
    ControllerFE->>EditSourceFE: clearOptimisticChunk(chunkKeys)
    deactivate ControllerFE

    ControllerBE->>ControllerBE: Pushes change to Undo Stack & enqueues for downsampling
    deactivate ControllerBE

    loop Downsampling & Reload Cascade
        ControllerBE->>ControllerBE: downsampleStep(chunkKeys)
        ControllerBE->>ControllerFE: callChunkReload(chunkKeys) [RPC]
        activate ControllerFE
        ControllerFE->>BaseSourceFE: invalidateChunks(chunkKeys)
        note over BaseSourceFE: BaseSourceFE re-fetches chunk with the now-permanent edit.
        ControllerFE->>EditSourceFE: clearOptimisticChunk(chunkKeys)
        deactivate ControllerFE
    end
```
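The commit/reload handshake from the diagram could look roughly like this in TypeScript. `VoxelEditFrontend` and `Backend` are hypothetical stand-ins for the PR's `VoxelEditController` FE/BE pair, and the RPC is modeled as a plain call for brevity.

```typescript
// Illustrative sketch of the front-end side of the commit/reload handshake.
interface Backend {
  // Applies the edits to the real chunks and returns the chunk keys the
  // frontend should reload (callChunkReload in the diagram). In the real
  // implementation this is an asynchronous RPC.
  commitEdits(chunkKeys: string[]): string[];
}

class VoxelEditFrontend {
  // Preview chunks, rendered by the second RenderLayer instance.
  readonly optimistic = new Map<string, Uint8Array>();

  constructor(private backend: Backend) {}

  paint(chunkKey: string, previewData: Uint8Array): void {
    // 1. Optimistic: the edit is visible immediately via the preview source.
    this.optimistic.set(chunkKey, previewData);
    // 2. Commit on the backend, which writes the permanent edit.
    const toReload = this.backend.commitEdits([chunkKey]);
    // 3. Base chunks re-fetch with the now-permanent edit, so the
    //    redundant preview copies can be dropped.
    for (const key of toReload) this.optimistic.delete(key);
  }
}
```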
  4. Writable Source Selection: Alongside every activated volume sub-source of a writable datasource, an additional checkbox lets the user mark the sub-source as writable; Neuroglancer will then attempt to write to it.
[screenshot: writable sub-source checkbox]

  5. Dataset Creation: To complete Neuroglancer's writing capabilities, a dataset metadata creation/initialization feature was introduced.

The workflow is triggered when a user provides a URL to a data source that does not resolve:
[screenshot: unresolved data source URL]

Neuroglancer recognizes the potential intent to create a new dataset and prompts the user:
[screenshot: dataset creation prompt]

Finally, the user is able to access dataset creation form:
[screenshot: dataset creation form]

Data sources & Kvstores

Currently, there is a very limited set of supported data sources and kvstores, which are:

  • datasources:
    • zarr v2 and v3 with codecs: raw, gzip, blosc
  • kvstores:
    • s3+http(s): can be used with a local S3 bucket (e.g. MinIO) or with anonymous S3 URLs
    • opfs: in-browser storage; it was also used for local development at some point, and its continued relevance can be discussed
    • ssa+https: a kvstore linked to a project under development: a stateless worker (stateless thanks to OAuth 2.0) that provides signed URLs for reading/writing S3 stores
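As a concrete illustration of the s3+http(s) kvstore addressing above, a URL of this scheme could be split into endpoint, bucket, and object key as sketched below. The exact URL grammar accepted by the PR is an assumption here, and `parseS3HttpUrl` is a hypothetical helper, not part of Neuroglancer.

```typescript
// Illustrative parser for a URL shape like s3+https://endpoint/bucket/key.
interface S3Location {
  endpoint: string; // e.g. http://localhost:9000 for a local MinIO
  bucket: string;
  key: string;
}

function parseS3HttpUrl(url: string): S3Location {
  const m = /^s3\+(https?):\/\/([^/]+)\/([^/]+)\/(.+)$/.exec(url);
  if (m === null) throw new Error(`not an s3+http(s) URL: ${url}`);
  const [, scheme, host, bucket, key] = m;
  return { endpoint: `${scheme}://${host}`, bucket, key };
}
```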

This limited coverage is the main limitation of the current implementation.

Open Questions & Future Work

This PR focuses on establishing the core architecture. Several larger topics from the original discussion are noted here as future work:

  • Efficient Low-Resolution Drawing: As discussed, efficient, multi-resolution drawing with upsampling is a complex challenge that requires a new data format.
  • 3D Drawing Tools: As suggested by @fcollman, 3D-specific tools like interpolation between slices are out of scope for this initial PR but could be a valuable direction for future work.

Checklist

  • [x] Completed the todo list found in src/voxel_annotations/TODOs.md
  • [ ] Added support for more (every?) datasources and kvstores
  • [x] Signed the CLA.

Edits

  • updated zarr support
  • added "5. Dataset creation" section
  • strikethrough what's no longer part of this PR

@google-cla

google-cla bot commented Oct 30, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@jbms
Collaborator

jbms commented Nov 3, 2025

Can you complete the CLA?

@jbms
Collaborator

jbms commented Nov 3, 2025

The brush hover outline (circle where the mouse pointer is) seems to go away in some cases when changing the zoom level.

@briossant briossant force-pushed the feature/voxel-annotation branch from 2d5359d to 9d15526 on November 10, 2025 09:51
@briossant
Author

briossant commented Nov 10, 2025

~~I need to rewrite history: my commits are signed with the wrong email, so I will open a new PR.~~ I made a mistake with this force push; never mind, it's fixed.

- Introduced a new dummy `MultiscaleVolumeChunkSource`.
- Added `VoxelAnnotationRenderLayer` for voxel annotation rendering.
- Implemented `VoxUserLayer` with dummy data source and rendering.
- Added tools and logs for voxel layer interactions and debugging.
- Documented voxel annotation specification and implementation details.
- Added a backend `VoxDummyChunkSource` that generates a checkerboard pattern for voxel annotations.
- Implemented frontend `VoxDummyChunkSource` with RPC pairing to the backend.
- Updated documentation with details on chunk source architecture and implementation.
…s to corrupt the chunk after the usage of the tool. Added a front-end buffer which is the only drawing storage for now. Added user settings to set the voxel_annotation layer scale and bounds. Added a second empty source to DummyMultiscaleVolumeChunkSource to prevent crashes when zoomed out too much
…lobal one (there was a missing conversion); add a primitive brush tool
…r remote workflows, label creation, and new drawing tools
@briossant
Author

I currently see two paths to solve this security flaw:

  1. If the auth mechanism is permissive enough, maybe we could make it generate a unique token to prove that the creator of the link actually has write access.
  2. We keep the annoying prompt but delay it until there is an actual drawing action. This serves a double purpose as a warning to ensure the user knows they are about to draw to this specific datasource. We could reduce the annoyance by only prompting on links that haven't been opened before and/or adding a "do not ask again for this datasource" option, both stored in local storage.

I think option 2 is better as it reduces dependency on specific external systems, but we can still search for other alternatives.
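Option 2's "prompt on first write" bookkeeping could be sketched like this. The storage is injected so the same logic could sit on top of `localStorage` in the browser; all names here are hypothetical, not code from this PR.

```typescript
// Sketch of prompting only on the first write to a given datasource.
type Store = {
  get(key: string): string | null;
  set(key: string, value: string): void;
};

// True if the user has never acknowledged a write to this datasource.
function shouldPromptBeforeWrite(datasourceUrl: string, store: Store): boolean {
  return store.get(`voxel-write-ack:${datasourceUrl}`) === null;
}

// Record the acknowledgement so subsequent writes skip the prompt.
function recordWriteAck(datasourceUrl: string, store: Store): void {
  store.set(`voxel-write-ack:${datasourceUrl}`, new Date().toISOString());
}
```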

@jbms
Collaborator

jbms commented Nov 20, 2025

> I currently see two paths to solve this security flaw:
>
>   1. If the auth mechanism is permissive enough, maybe we could make it generate a unique token to prove that the creator of the link actually has write access.
>   2. We keep the annoying prompt but delay it until there is an actual drawing action. This serves a double purpose as a warning to ensure the user knows they are about to draw to this specific datasource. We could reduce the annoyance by only prompting on links that haven't been opened before and/or adding a "do not ask again for this datasource" option, both stored in local storage.

Yes, I think only prompting when there is actually a write action is a good idea. I'm not sure that "do not ask again for this datasource" is a good idea because users will be tempted to use it for convenience but that effectively means: "allow anyone to trick me into corrupting this datasource". In particular, just because I have edited one datasource in Neuroglancer doesn't mean I want some random link sent to me to also accidentally edit it.

One possibility would be to key any persistent allowlist on the entire Neuroglancer state, i.e. a set of (full_neuroglancer_viewer_state_hash, datasource_url) tuples, but where any allowed datasources propagate automatically when the changes to the Neuroglancer state are made locally, and in particular adding a datasource or enabling writing on it interactively would automatically grant permission. Possibly some properties of the viewer state could be excluded though that introduces complications and I'm not sure how much benefit it provides. Somehow we'd have to avoid filling up local storage with allowed entries due to the constantly changing Neuroglancer state.
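The (viewer state hash, datasource URL) tuple idea could be sketched as below. This is a toy illustration only: `fnv1a` stands in for whatever hash would actually be used, and nothing here addresses the local-storage growth or state-propagation concerns mentioned above.

```typescript
// Toy FNV-1a hash standing in for a real state hash.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// Allowlist keyed on (full viewer state hash, datasource URL) tuples, so a
// grant for one shared link does not carry over to a different link.
class WriteAllowlist {
  private entries = new Set<string>();

  allow(viewerStateJson: string, datasourceUrl: string): void {
    this.entries.add(`${fnv1a(viewerStateJson)}|${datasourceUrl}`);
  }

  isAllowed(viewerStateJson: string, datasourceUrl: string): boolean {
    return this.entries.has(`${fnv1a(viewerStateJson)}|${datasourceUrl}`);
  }
}
```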

Potentially we could improve on this by having some mechanism to say: "allow writing to this datasource for any link created by this user". That would require some way to sign the Neuroglancer state with a per-user private key. The private key could just be generated by Neuroglancer and stored in browser local storage, though it might be better if it was tied to some signin mechanism so that it could be shared across browser profiles and machines.

In any case these strategies add a lot of complexity so as a starting point we can just always prompt on first write to a datasource.

> I think option 2 is better as it reduces dependency on specific external systems, but we can still search for other alternatives.

briossant and others added 15 commits November 21, 2025 11:55
… setVoxelPaintValue and transformGlobalToVoxelNormal
…cking mechanism for the preview renderlayer instead of recreating it in the VoxelEditContext
```typescript
  },
);

export async function proxyWrite(
```
Author


this function is currently unused. It was required for the dataset creation feature, which has been removed from this PR.

@briossant briossant marked this pull request as ready for review November 27, 2025 19:03