Replies: 6 comments 5 replies
-
The best way to improve CPU performance is to use web workers. In the event of a CSP violation, you should be able to use:

```js
zip.configure({
  workerScripts: {
    deflate: ["./path/to/z-worker.js"],
    inflate: ["./path/to/z-worker.js"]
  }
});
```

I can confirm that, internally, zip.js relies entirely on Web Streams. Otherwise, you can also write your own implementation.
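A minimal sketch of what creating an archive can look like once that configuration is in place (the file name and content are made up; `ZipWriter`, `BlobWriter` and `TextReader` are part of the zip.js API):

```js
// With workerScripts pointing at a script the extension's CSP allows,
// the (de)compression work runs inside that worker.
const writer = new zip.ZipWriter(new zip.BlobWriter("application/zip"));
await writer.add("hello.txt", new zip.TextReader("Hello world"));
const zipBlob = await writer.close(); // the finished archive as a Blob
```

If the CSP still blocks worker creation entirely, `zip.configure({ useWebWorkers: false })` falls back to main-thread compression, at the cost of the CPU performance mentioned above.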
-
Thanks for your swift response! I hope you do not mind me asking follow-up questions. :-) I tried your `zip.configure()` hint, but creating the zip still causes a CSP violation.

Did I implement your hint correctly? It could very well be that this is a limitation of the WebExtension framework itself, but before digging into that rabbit hole, I wanted to make sure I used your hint correctly.

Is that size limit a limitation of the Uint8Array itself, or of your `addUint8Array()` method? So if the "file" I want to add to a zip is larger than 4 GB, I should be able to split it up into multiple Uint8Arrays no larger than 2 GB each, create a blob, and then use your `addBlob()` method? I will analyze the streaming pointers, thanks for those!
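A sketch of that idea, to make the splitting concrete (the chunk count, `produceChunk` and the entry name are all made up; `zip.fs.FS`, `addBlob` and `exportBlob` are the zip.js calls discussed in this thread):

```js
// Build one large entry from several Uint8Array chunks, each kept well
// below the ~2 GB typed-array limit, then hand a single Blob to zip.js.
const chunks = [];
for (let i = 0; i < chunkCount; i++) {
  chunks.push(produceChunk(i)); // hypothetical: returns a Uint8Array < 2 GB
}
// A Blob can be assembled from many parts and may exceed the size of any
// single typed array; zip.js then reads it incrementally.
const bigBlob = new Blob(chunks, { type: "application/octet-stream" });

const zipFs = new zip.fs.FS();
zipFs.addBlob("messages.mbox", bigBlob); // entry name is a placeholder
const exported = await zipFs.exportBlob();
```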
-
You have to use web_accessible_resources in the manifest so that the worker script can be loaded. Regarding the max. size of a Uint8Array, …
-
After browsing your source, I found a working solution; adding web_accessible_resources does not seem to be necessary. My benchmarks indicate that adding the Uint8Arrays is slower with web workers enabled, but exporting the blob is faster.

Do you have any insights on this?
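For reference, a rough sketch of how such a comparison could be timed (all names here, including `makeTestData`, are placeholders; `addUint8Array` and `exportBlob` are the zip.js calls being compared):

```js
// Hypothetical micro-benchmark: time the add phase and the export phase
// separately, once with and once without web workers.
async function runBenchmark(useWebWorkers) {
  zip.configure({ useWebWorkers });
  const zipFs = new zip.fs.FS();
  // Generate test data outside the timed section.
  const data = Array.from({ length: 100 }, (_, i) => makeTestData(i));

  const t0 = performance.now();
  data.forEach((array, i) => zipFs.addUint8Array(`msg-${i}.eml`, array));
  const t1 = performance.now();
  const blob = await zipFs.exportBlob();
  const t2 = performance.now();

  console.log(`workers=${useWebWorkers} add=${(t1 - t0).toFixed(1)} ms ` +
              `export=${(t2 - t1).toFixed(1)} ms size=${blob.size}`);
}

await runBenchmark(true);
await runBenchmark(false);
```

Generating the test data outside the timed section helps separate zip.js overhead from the cost of producing the data.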
-
Thanks, but I am still digesting all these options and the results from my own tests; I will need a few days before I can get back to this. I will provide a test extension for you to try out.
-
That is probably true; that benchmark step also includes generating the data before adding it. I will clean up my code, run a few more tests, and compare the results with your example. But I will probably need until the end of the week.
-
I am a Thunderbird WebExtension developer and I am using your library to import/export messages as a zip file.
For import, I use the `zip.fs.FS()` API, because I do not have to load the entire file into memory just to know what is inside, which is a huge speed boost. I assume that is the fastest way?

For export, I currently also use the `zip.fs.FS()` API. Using `addUint8Array()` is faster than using `addBlob()` (probably because of the extra overhead of creating the blob). Is this the fastest way? Is using `FS()` for writes the best option? I am asking because `zipFs.exportBlob()` also needs a considerable amount of time.
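For context, a minimal sketch of the `FS()`-based import and export flow described above (the variable names and entry names are made up; `importBlob`, `addUint8Array` and `exportBlob` are zip.fs methods):

```js
// Import: reads the archive's directory from the Blob; entry data is only
// decompressed when an entry is actually requested.
const zipFs = new zip.fs.FS();
await zipFs.importBlob(messageZip); // messageZip: Blob picked by the user (placeholder)
for (const child of zipFs.root.children) {
  console.log(child.name); // list contents without extracting anything
}

// Export: add raw message data, then build the archive.
const exportFs = new zip.fs.FS();
for (const message of messages) { // "messages" is a placeholder collection
  exportFs.addUint8Array(message.name, message.data);
}
const exportedZip = await exportFs.exportBlob();
```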
I would also be interested to hear whether you have experience with streaming the zip. What I can see is that downloading large zips to the local filesystem needs a lot of free system memory to hold the entire zip. The WebExtension downloads API, however, can download data in chunks: large files are not downloaded entirely and then written to the local filesystem, but written as soon as each data chunk arrives (there is a ".part" file in your download folder while downloading).

It would be great if we could send chunks of the zip as we add files to it, so exporting 5 GB of messages would not need 5 GB of free system memory. Would you have any pointers for me regarding this topic?
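One possible shape for this, sketched under the assumption that the installed zip.js version lets `ZipWriter` write to a `WritableStream` directly (check the documentation for your version); `sendChunk`, `finishDownload` and `messages` are hypothetical placeholders:

```js
// Each chunk produced while the archive is built is forwarded immediately
// instead of being buffered into one large in-memory Blob.
const chunkSink = new WritableStream({
  write(chunk) {            // chunk: Uint8Array emitted by zip.js
    return sendChunk(chunk);
  },
  close() {
    return finishDownload();
  }
});

const zipWriter = new zip.ZipWriter(chunkSink);
for (const message of messages) {
  await zipWriter.add(message.name, new zip.Uint8ArrayReader(message.data));
}
await zipWriter.close();
```

Whether the WebExtension downloads API can actually consume such chunks incrementally is a separate question; the sketch only covers the zip.js side.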