
Discuss the timing of getChunk() #64


Description

@ahfuzhang

Each bucket limits its memory to 1/512 of the total.
For example, if I ask for a 512 MB cache, each bucket gets only 1 MB.
If many keys hash to the same bucket, a lot of data will be overwritten once that bucket's chunks are full.
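
To make the sizing concrete, here is a minimal, runnable sketch of the split described above. It assumes a fastcache-style layout where the total cache memory is divided evenly across 512 buckets; the constants are illustrative, not taken from the library source.

```go
package main

import "fmt"

// bucketsCount mirrors the 512-way split described in this issue
// (an assumption for illustration, not a value read from fastcache.go).
const bucketsCount = 512

func main() {
	maxBytes := 512 * 1024 * 1024 // ask for a 512 MB cache
	perBucketBytes := maxBytes / bucketsCount
	fmt.Printf("total: %d MB, per bucket: %d MB\n",
		maxBytes/(1024*1024), perBucketBytes/(1024*1024))
	// Prints: total: 512 MB, per bucket: 1 MB
	// If many keys hash into one bucket, only perBucketBytes are available
	// to them; once that bucket's chunks fill up, new writes wrap around
	// and overwrite older entries there, even while other buckets are empty.
}
```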

I don't have any data showing that this actually happens,
but logically it would be better not to limit each bucket's memory to 1/512 of the total.

In fastcache.go, func (b *bucket) Set, line 335:

if chunkIdxNew >= uint64(len(b.chunks)) {  // len(b.chunks) here could instead be the total number of chunks used across all buckets
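
To illustrate the difference between the two policies, here is a small, self-contained toy simulation. It is only a sketch of the idea: bucketsCount, chunksPerBucket, and the counting logic are hypothetical and far simpler than the real bucket/chunk machinery in fastcache.go.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Toy comparison of a per-bucket chunk limit vs. a shared, cache-wide limit
// (illustrative only; these names and numbers are not fastcache internals).
const (
	bucketsCount    = 8 // small so that hash skew is easy to see
	chunksPerBucket = 4 // per-bucket budget: 1/8 of the total
)

// bucketIdx picks a bucket from the key hash, like a sharded cache would.
func bucketIdx(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32() % bucketsCount)
}

func main() {
	usedPerBucket := make([]int, bucketsCount)
	totalBudget := bucketsCount * chunksPerBucket
	totalUsed := 0
	overwritesPerBucketLimit := 0
	overwritesSharedLimit := 0

	for i := 0; i < 40; i++ {
		b := bucketIdx(fmt.Sprintf("user:%d", i))

		// Current behavior: a bucket starts overwriting as soon as its
		// own chunk budget is exhausted.
		if usedPerBucket[b] >= chunksPerBucket {
			overwritesPerBucketLimit++
		} else {
			usedPerBucket[b]++
		}

		// Proposed direction: only overwrite once the cache as a whole
		// has used up its chunk budget.
		if totalUsed >= totalBudget {
			overwritesSharedLimit++
		} else {
			totalUsed++
		}
	}

	fmt.Println("overwrites with per-bucket limit:", overwritesPerBucketLimit)
	fmt.Println("overwrites with shared limit:", overwritesSharedLimit)
}
```

In this toy run the shared limit only starts overwriting after all 32 chunk slots are used, while the per-bucket limit starts earlier in whichever buckets happen to be hot.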

I hope I explained this clearly despite my Chinglish. :-)
Thanks.
