
Conversation

@vipnydav (Contributor) commented Nov 5, 2025

Description:

This change modifies the FUSE connection to support both vectored and non-vectored reads within the same mounted filesystem.

Previously, vectored reads were controlled by the UseVectoredRead flag in MountConfig. When enabled, the file system was expected to return data in ReadFileOp.Data, and ReadFileOp.Dst would be nil. This global setting made it difficult to support mixed read patterns efficiently.

With this change:

  • The UseVectoredRead flag is now a no-op, maintained for backward compatibility.
  • The read logic now inspects ReadFileOp.Data. If it is populated, a vectored read is performed. Otherwise, the logic falls back to using ReadFileOp.Dst.

This allows a file system to choose the appropriate read method on a per-read basis, enabling optimizations like zero-copy for buffered reads while still allowing direct reads for other cases.
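
For illustration, here is a minimal, self-contained sketch of that per-read choice. The readFileOp struct below mirrors only the ReadFileOp fields mentioned in this PR (it is not the real fuseops type), and the cached-data handling is hypothetical:

// Sketch only: readFileOp stands in for fuseops.ReadFileOp; serveRead and the
// "cached" parameter are hypothetical and not part of this PR.
package main

import "fmt"

type readFileOp struct {
	Size int64
	Dst  []byte   // library-provided buffer for copying reads
	Data [][]byte // populated by the FS to request a vectored (zero-copy) read
}

// serveRead decides, per request, which read style to use.
func serveRead(op *readFileOp, cached []byte) (bytesRead int) {
	if cached != nil {
		// Zero-copy path: hand the already-buffered data back via op.Data.
		op.Data = append(op.Data, cached)
		return len(cached)
	}
	// Fallback path: copy into the buffer supplied in op.Dst.
	backend := make([]byte, op.Size) // stand-in for reading from the backend
	return copy(op.Dst, backend)
}

func main() {
	// Buffered read: served without copying.
	vec := &readFileOp{Size: 4, Dst: make([]byte, 4)}
	fmt.Println(serveRead(vec, []byte("data")), len(vec.Data) > 0) // 4 true

	// Uncached read: falls back to op.Dst.
	direct := &readFileOp{Size: 4, Dst: make([]byte, 4)}
	fmt.Println(serveRead(direct, nil), len(direct.Data) > 0) // 4 false
}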

This change is motivated by the need to implement zero-copy reads for the Buffered Reader in GCSFuse, without regressing the performance of the existing GCSReader.

More context in bug: https://b.corp.google.com/issues/458015680

-	to.Dst = inMsg.GetFree(int(in.Size))
-}
+// Use part of the incoming message storage as the read buffer.
+to.Dst = inMsg.GetFree(int(in.Size))
Collaborator:
Is this part of your change performance-neutral? Going by the comment, we’re now “allocating” (reserving?) more buffers.

cc @vitalif who contributed vectored read support back in 2022

Contributor:

This should be performance-neutral because inMsg.GetFree() doesn't actually allocate. It merely makes the free space that was already allocated in the incoming message available for use by the output message.
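
Roughly, the idea is something like the following self-contained sketch. It is not the library's actual InMessage implementation; the names are illustrative, and the buffer sizes are taken from the discussion below:

// Sketch only: a preallocated message buffer whose unused tail is handed out
// on request, so serving a read reserves space but does not allocate.
package main

import "fmt"

const (
	maxWriteSize = 1 << 20 // 1 MB, per the buffer-size discussion below
	pageSize     = 4096
)

type inMessage struct {
	storage []byte // allocated once, up front, for every incoming message
	used    int    // bytes occupied by the incoming kernel request
}

func newInMessage(requestLen int) *inMessage {
	return &inMessage{
		storage: make([]byte, maxWriteSize+pageSize),
		used:    requestLen,
	}
}

// getFree returns up to n bytes of the unused tail of storage; no new
// allocation takes place, the caller just borrows pre-reserved space.
func (m *inMessage) getFree(n int) []byte {
	free := m.storage[m.used:]
	if n > len(free) {
		n = len(free)
	}
	out := free[:n]
	m.used += n
	return out
}

func main() {
	msg := newInMessage(128)       // read requests are small
	dst := msg.getFree(512 * 1024) // read buffer carved out of existing storage
	fmt.Println(len(dst))          // 524288
}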

Collaborator:

Right. So let me confirm the facts to make sure we’re on the same page:

  1. Each buffer (InMessage) is MaxWriteSize (1 MB) + one page (4096 bytes) in size.
  2. When vectored read support is enabled, we currently stop using the 1 MB buffer.
  3. This means we’re “wasting” 1 MB of RAM, but only for read requests — write requests will fill that 1 MB.

Is that accurate?

Contributor:

Yes, that is correct.

mount_config.go (outdated):
// Vectored read allows file systems to avoid memory copying overhead if
// the data is already in memory when they return it to FUSE.
// When turned on, ReadFileOp.Dst is always nil and the FS must return data
// being read from the file as a list of slices in ReadFileOp.Data.
Contributor:

Can we expand on this description a bit? The vectored read flag is a bit of a misnomer; its actual meaning was "the file system allocates its own buffers when serving reads". Originally, op.Dst was set to the free space in the incoming message (since read requests are small, and we always allocate MaxPages+1 pages for the incoming message, there are MaxPages free pages in a read request). That unused space is made available to the file system as op.Dst.
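
For concreteness, one possible expansion along those lines (hypothetical wording written from this thread; the comment that was actually merged may differ):

// Historically this flag did not mean "vectored I/O" so much as "the file
// system allocates its own buffers when serving reads".
//
// When the flag is off, ReadFileOp.Dst is set to the free space in the
// incoming kernel message: read requests are small, and every incoming
// message is allocated with MaxPages+1 pages, so MaxPages pages are left
// over and can be reused as the read buffer without any extra allocation.
//
// When the flag is on, ReadFileOp.Dst is nil and the file system returns
// the data it already holds as a list of slices in ReadFileOp.Data,
// avoiding a copy.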

Contributor Author:

Updated the comment.

@stapelberg merged commit d2942a9 into jacobsa:master on Nov 7, 2025 (3 checks passed).
@vipnydav deleted the remove-vectored-read-flag branch on November 18, 2025.