Conversation

@mflatt mflatt commented Nov 12, 2025

Here's a program that should not need too much memory at its peak. The program allocates a 4 MB string and then appends to it 1000 times, but each appended string is immediately discarded. Nevertheless, running the program on a 64-bit platform before this PR will hit peak memory use of 1.5 GB or so:

(let ([s (make-string (expt 2 20))]) ; 4 MB
  (let loop ([i 1000])
    (unless (fx= i 0)
      (black-box (string-append s "x"))
      (loop (fx- i 1)))))
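
Here, black-box stands in for a procedure that consumes its argument so the compiler cannot discard the string-append result; it is not a standard Chez Scheme binding. A minimal stand-in, assumed for the example above, might look like:

(define black-box
  (let ([keep #f])
    (lambda (v) (set! keep v) (void))))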

The problem is that the implementation of string-append performs each 4 MB allocation as an atomic-seeming kernel step, including a copy via memcpy, as opposed to a loop in Scheme, where the trap register would be decremented every time around the loop. In other words, a large amount of work is done, but it is treated as effectively constant for the purposes of deciding when to fire interrupts, including GC interrupts. The vector-append operation does not use memcpy, but it uses a hand-coded loop that (before this PR) similarly did not adjust the trap register. Operations that don't allocate, such as bytevector-fill!, won't create GC trouble, but infrequent timer-interrupt checking can interfere with using timers/engines for coroutines.
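
As a point of reference, a copy written as a Scheme-level loop behaves differently: each character copied goes around the loop once, so the trap register keeps pace with the work. The sketch below is illustrative only and is not the library's actual implementation of string-append:

(define (string-append/in-scheme a b)
  (let* ([na (string-length a)]
         [nb (string-length b)]
         [r (make-string (fx+ na nb))])
    (do ([i 0 (fx+ i 1)]) ((fx= i na))
      (string-set! r i (string-ref a i)))
    (do ([i 0 (fx+ i 1)]) ((fx= i nb))
      (string-set! r (fx+ na i) (string-ref b i)))
    r))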

So, for operations that are atomic from the perspective of interrupts but that may work on large objects, such as vector-append, the change here adjusts the trap counter in proportion to the work done. That way, interrupts are dispatched in a more timely manner, especially GC interrupts.
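
One way to observe the difference in interrupt timing (an illustrative experiment using Chez Scheme's standard timer machinery, not part of this PR's tests): install a timer-interrupt handler, set a small tick budget, and perform one large append. Before this change, the whole 4 MB copy counts as roughly constant work against the trap counter, so the handler tends to be noticed only at some later trap check; with the change, the copy itself is charged against the counter, and the handler fires in a more timely way.

(timer-interrupt-handler
  (lambda ()
    (display "timer interrupt delivered\n")))
(set-timer 100)                       ; small tick budget
(let ([s (make-string (expt 2 20))])  ; 4 MB string, as above
  (black-box (string-append s "x")))
(set-timer 0)                         ; disable the timer again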

(The change to "7.ms" is unrelated. Wrapping that test with its smaller list size in a loop could provoke a failure before these changes.)

There should be a runtime cost, but it is small. The string-append function turns out to sometimes run a little faster on small strings, but that's because memcpy is now called via an __atomic foreign procedure. I've observed a slowdown as large as 10% for fast operations like (#3%vector-set/copy '#(1) 0 1) on x86_64, but the same example shows 0% difference on AArch64, and generally the differences are in the noise.

Unsafe list operations like #3%length or #3%memq have the same issue, but since the issue applies only to the unsafe versions of those functions (the safe versions of length and memq are ordinary Scheme code), and since those operations tend not to have an overall length straightforwardly available (except in the case of a #3%length result), there's no attempt to adjust them here.
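
For a concrete contrast (illustrative only): the safe length walks the list in ordinary Scheme code, so the trap register is charged as it goes, while the unsafe #3%length does the whole walk without adjusting it.

(define big-list (make-list (expt 2 20) 0))
(length big-list)    ; safe: ordinary Scheme loop, trap register decremented along the way
(#3%length big-list) ; unsafe: the whole walk is effectively one step for interrupt purposes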

Setting aside unsafe list operations, I'm not sure this commit covers all relevant operations. The "4.ms" changes show the ones that I found.

For operations that are atomic from the perspective of interrupts but
that may work on large objects, such as `vector-append`, adjust the
trap counter proportional to work done. That way, interrupts are
dispatched in a more timely manner, especially GC interrupts.

The change to "7.ms" is unrelated; wrapping that test with its smaller
list size in a loop could provoke a failure before these changes.