
Conversation

@kettanaito
Member

@kettanaito kettanaito commented Sep 5, 2025

Changes

  • Refactors worker context to use rettime for WorkerChannel and drop context.events altogether.
  • Adds a failing test for in-flight requests after calling worker.stop().
  • Removes the MOCK_DEACTIVATE event altogether. When the client calls worker.stop(), that doesn't affect the worker in any way. The state of mocking being disabled is entirely client-side state and so it's kept on the client (context.isMockingEnabled). The worker still handles the requests even for disabled clients, but such requests always get the PASSTHROUGH handling instruction.
  • Removes the RESPONSE worker event listener on the client after calling worker.stop(). The REQUEST event listener MUST remain since the worker now handles requests for turned-off clients as well.
  • ✅ Solution: The fetch listener now creates a timestamp of when it intercepted a request. The client sets its own timestamp when worker.stop() is called. The REQUEST listener compares the two timestamps to see whether the request happened before the worker was stopped.

This still isn't a fool-proof solution as we implicitly rely on the speed at which the service worker dispatches the fetch event. This is our technical limitation since MSW doesn't keep track of requests client-side (that's the whole thing).
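A minimal sketch of that timestamp comparison, with hypothetical names (`requestInterceptedAt`, `workerStoppedAt`) standing in for MSW's actual internals:

```typescript
interface InFlightCheckInput {
  /** Set by the worker's "fetch" listener when the request was intercepted. */
  requestInterceptedAt: number
  /** Set by the client when worker.stop() was called (undefined if never stopped). */
  workerStoppedAt?: number
}

/**
 * A request should still be resolved against handlers only if it was
 * intercepted before the client called worker.stop().
 */
function shouldHandleRequest(input: InFlightCheckInput): boolean {
  // No stop timestamp means the client was never stopped.
  if (input.workerStoppedAt === undefined) {
    return true
  }
  return input.requestInterceptedAt <= input.workerStoppedAt
}
```

A missing stop timestamp is treated as "never stopped", so requests are handled normally until worker.stop() is called.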

Roadmap

  • Fix existing tests after the WorkerChannel refactor.
  • Finish the in-flight test suite.
  • The first test is still failing (double-check).

@kettanaito
Member Author

On unregistering the worker

While I understand that worker.start() registers the worker, some might expect worker.stop() to unregister it, but that is not the case and never has been. In fact, unregistering the worker subjects the user to all sorts of unforeseen issues as workers are odd beasts.

All worker.stop() was ever designed to do was to turn off mocks for the current client. So it has always been about client-side state. Bringing that state to the worker was a mistake (MOCK_DEACTIVATE).

Conclusion

Even when the mocks are stopped, the worker MUST still be responsible for network control, but each outgoing request will always receive the PASSTHROUGH handling scenario from the client, ignoring the handler lookup altogether.
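The conclusion above can be sketched as a client-side decision, assuming a boolean `isMockingEnabled` flag as described earlier (illustrative only, not MSW's actual code):

```typescript
type HandlingInstruction = 'PASSTHROUGH' | 'MOCK'

function resolveHandlingInstruction(
  isMockingEnabled: boolean,
  hasMatchingHandler: boolean,
): HandlingInstruction {
  // A stopped client skips the handler lookup entirely and always
  // instructs the worker to perform the request as-is.
  if (!isMockingEnabled) {
    return 'PASSTHROUGH'
  }
  return hasMatchingHandler ? 'MOCK' : 'PASSTHROUGH'
}
```

The worker keeps intercepting everything; only the client decides whether an intercepted request is mocked or passed through.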

@pkg-pr-new

pkg-pr-new bot commented Sep 9, 2025

Open in StackBlitz

npm i https://pkg.pr.new/msw@2578

commit: 0617b26

@kettanaito kettanaito marked this pull request as ready for review September 9, 2025 17:46
@kettanaito kettanaito force-pushed the fix/in-flight-request-after-stop branch from 66daf1c to d6205ab on September 9, 2025 17:49
@kettanaito kettanaito merged commit 97cf4c7 into main Sep 10, 2025
46 checks passed
@kettanaito kettanaito deleted the fix/in-flight-request-after-stop branch September 10, 2025 17:01
@kettanaito
Member Author

Released: v2.11.2 🎉

This has been released in v2.11.2!

Make sure to always update to the latest version (npm i msw@latest) to get the newest features and bug fixes.


Predictable release automation by @ossjs/release.

@hugomrdias

This release broke our tests; it looks like some kind of timing issue. The full test suite fails, but if I run just the test file that uses MSW, it passes.

@Turbo87

Turbo87 commented Sep 11, 2025

FWIW we're using @sinonjs/fake-timers in our test suite to ensure stable snapshots and it looks like the introduction of these timestamps is causing issues for us too. It seems that MSW essentially just stopped working completely, most likely due to the timestamps mismatching because of the fake timer usage.
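To illustrate the suspected failure mode (a hypothetical reduction, not MSW's actual code): when a fake clock replaces `Date.now`, timestamps taken on the real clock and on the fake clock live on different timelines, so a comparison between them is meaningless:

```typescript
// Capture the real clock before any faking happens.
const realNow = Date.now

// Simulate a fake-timers library installing a clock frozen at epoch 0.
const fakeNow = () => 0

const interceptedAt = realNow() // real wall-clock epoch ms, a large positive number
const stoppedAt = fakeNow()     // fake clock: 0

// The "stop" timestamp now appears to predate every request, so the
// in-flight check classifies every request as happening after stop.
const handled = interceptedAt <= stoppedAt
```

Under this mismatch every request would receive the passthrough path, which matches the "MSW essentially just stopped working" symptom.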

@kettanaito
Member Author

@hugomrdias, are you using fake timers in your tests, too? If not, can you think of anything your tests might be doing that relates to these changes?

Thanks for letting me know. You can pin the previous version while I resolve this issue.

Can someone please open an issue for this?

@kettanaito
Member Author

kettanaito commented Sep 11, 2025

There's also a solid chance that I will conclude that this isn't an issue. Spend this time investigating how you can improve your test suite by ideally dropping fake timers. You need them extremely rarely in browser tests. Thanks.

@Turbo87

Turbo87 commented Sep 11, 2025

Spend this time investigating how you can improve your test suite by ideally dropping fake timers. You need them extremely rarely in browser tests.

we are using https://percy.io/ for visual regression testing, and quite a few of our snapshotted pages contain relative times like "3 days ago" as well as absolute times. Without fake timers these snapshots couldn't be stable.

You can pin the previous version while I resolve this issue.

the way I've seen this resolved in other places in the past was by saving a reference to Date (or whatever API is needed) at the top-level scope of the module, so that you have a stable reference that won't be replaced later once the fake timers become active.
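A minimal sketch of that captured-reference pattern (a hypothetical module, not MSW's code). Note it only helps if this module is evaluated before the fake timers are installed:

```typescript
// Capture a reference to Date.now at module evaluation time, so a later
// reassignment of Date.now (e.g. by a fake-timers library) doesn't affect it.
const getTimestamp = Date.now

function now(): number {
  // Still reads the clock that existed when this module was first evaluated.
  return getTimestamp()
}
```

If fake timers are installed before this module is imported, `getTimestamp` captures the fake clock instead, which is the import-order caveat discussed below.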

Can someone please open an issue for this, please?

done: #2581

@hugomrdias

No, I'm not using fake timers

@kettanaito
Member Author

the way I've seen this resolved in other places in the past was by saving a reference to Date (or whatever API is needed) at the top-level scope of the module,

This is nice, but if you happen to mock timers before importing msw, we will store a mocked reference. It's a common import-order issue, so this cannot be relied on as a solution, sadly.

I would love to discuss this more in the issue you opened. Thanks!

@hugomrdias

My current guess is that the .stop() from one file fires after the .start() from another file.

@dompuiu

dompuiu commented Sep 12, 2025

I am having a similar issue with the one described by @hugomrdias. I am not using fake timers.

I have a Vitest test suite. One file with 2 tests started failing after we upgraded to 2.11.2. If I run each test independently, everything works. When I run both tests, one of them fails. It seems the last test always fails (I changed their order in the file, and the last one always failed).

It also seems that I only receive the mocked response in the first test. Once I call worker.stop() at the end of one test, the next worker.start() that I call in the following test no longer returns mocked responses.



Development

Successfully merging this pull request may close these issues.

Request made immediately after calling stop on the service worker doesn't always passthrough properly
