Conversation

@nuald (Collaborator) commented Oct 10, 2019

I know that you have problems accessing the test platform you used previously and/or don't have time for the updates, so I did the full update and re-tested everything on my home server. Please feel free to contact me about additional updates and tests if needed. The details regarding the update are below.

Updated the toolchain (addressed tickets #129, #133, #158, #159, #164, #170, #179).

Removed Ruby Topaz and Rubinius as these are no longer active projects (the latter has some activity, but it can neither be compiled properly nor run the Ruby code used here).

Fixed the memory consumption calculation to include child processes (e.g. for Scala, which invokes the actual JVM process as a child); see the sketch after this list.

Minor tweaks to the BF tests (normalizing data types and allocations where possible).

Applied changes from PRs #157, #167, #177, #175.

Added warm-up for the .NET projects (issue #154); however, it barely changed the results.

Replaced the D Mir GLAS test (not supported anymore) with D lubeck (another linear algebra library that also utilizes D Mir).
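
For the child-process accounting mentioned above, the idea is simply to sum the memory of a process together with all of its descendants. A minimal sketch (illustrative only; this uses Python and psutil rather than the actual runner code, and total_rss is a hypothetical helper):

import psutil

def total_rss(pid):
    # Illustrative sketch, not the benchmark runner itself: combined resident set
    # size (bytes) of pid and all of its descendants, so launchers like the Scala
    # wrapper that spawn the JVM as a child process are counted too.
    procs = [psutil.Process(pid)]
    procs += procs[0].children(recursive=True)
    total = 0
    for p in procs:
        try:
            total += p.memory_info().rss
        except psutil.NoSuchProcess:
            pass  # a child may exit between enumeration and measurement
    return total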

@kostya (Owner) commented Oct 10, 2019

Wow, that's huge work, thanks, I will look at this shortly. Maybe you have something like a Dockerfile to make it easy to install all these languages?

@nuald (Collaborator, Author) commented Oct 10, 2019

I wish I could utilize Docker, but unfortunately virtualized environments make the benchmarks misleading (for example, SSE2 optimizations may not work properly). I have my own benchmark for web performance (https://github.com/nuald/simple-web-benchmark), and you can see there that WSL (semi-virtualized Linux in Windows) gives completely different results compared with actual Linux (note that the average response times in WSL are almost the same for all languages, but on the actual hardware the average values differ).

It would be nice to have a tool that uses something like a Dockerfile to create an actual bootable Linux ISO, but I couldn't find any active project for that. However, if you think that Docker-based benchmarks could give us some valuable metrics, I can take a look at it. Please let me know what you think.

@nuald (Collaborator, Author) commented Oct 10, 2019

Another option could be an installation script that fetches and updates the packages, but some languages used in the benchmark are not trivial to install (like V8's D8 - it requires building from source with a special toolchain installed). I guess nobody cares about D8 as Node.js is mature and fast enough, but I didn't want to reduce the number of languages. However, it's still an option - the majority of the languages are installed with either a download/extract step or an "apt install X" command.

@kostya (Owner) commented Oct 10, 2019

On Windows you might see a difference, but on Linux, Docker should transparently run processes on the host OS, just in a kind of hidden namespace. I think this would be OK for this benchmark. Anyway, this can be added later.

@nuald (Collaborator, Author) commented Oct 10, 2019

Good to know, I wasn't aware that Docker uses LXC (unfortunately, GNU/Linux is not my main OS so I have some knowledge gaps). I'll start working on the Dockerfile.

inline int get() { return tape[pos]; }
inline void inc(int x) { tape[pos] += x; }
// before: grow the tape one element at a time
inline void move(int x) { pos += x; while (pos >= tape.size()) tape.push_back(0); }
// after: grow the tape by doubling its size
inline void move(int x) { pos += x; while (pos >= tape.size()) tape.resize(2 * tape.size()); }
@kostya (Owner):

Looks like a little hack; I prefer minimum hacks.

@nuald (Collaborator, Author):

I totally agree regarding minimum hacks, but that's the common denominator used across the majority of the BF benchmarks; I've just applied it for consistency to avoid any unfair advantage. A few points regarding it:

  • I wouldn't say it's a hack, but rather a standard approach, as array doubling gives one amortized copy per insertion (please see https://ece.uwaterloo.ca/~dwharder/aads/Algorithms/Array_resizing/ if you want additional details; the argument is also spelled out after this list);
  • The change barely affects anything, as the max size of the tape is 8 elements (I conducted an experiment with a fixed-size tape and it doesn't change the results at all), so the resize operation will be called 8 times in the worst case (compared with ~500 million inc() calls for the bench.b input).
  • Internally, push_back (and similar operations in various languages) uses array doubling anyway (for example, see https://github.com/google/libcxx/blob/master/include/vector#L2385 - push_back indirectly uses the __recommend method to determine the best size for the new allocation, which is either double the previous size or derived from it with memory alignment).
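
To spell out the amortized argument (a standard textbook derivation, not specific to this code): growing the tape from 1 to $n$ elements by doubling copies at most

$$1 + 2 + 4 + \dots + 2^{\lceil \log_2 n \rceil - 1} = 2^{\lceil \log_2 n \rceil} - 1 < 2n$$

elements in total, i.e. fewer than two copied elements per insertion on average, so each append costs amortized $O(1)$.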

@kostya (Owner):

Yes, but push_back is still a public API; it's a readable and understandable call (I'm a Crystal coder at heart :) ). It should be the language's job to optimize it to be as fast as possible, not the user's. The other implementations came from users' PRs, where I also prefer to simplify things.

@kostya (Owner) commented Oct 10, 2019

All looks OK, thanks. I added you to the project; feel free to merge and close the issues.

@nuald requested a review from kostya on October 10, 2019 at 15:05
@nuald (Collaborator, Author) commented Oct 10, 2019

Sorry, it doesn't seem that I have the access rights to either merge the PR or close the issues.

@kostya (Owner) commented Oct 10, 2019

Seems you need to accept the invitation at https://github.com/kostya/benchmarks/invitations

@nuald merged commit ea2c79e into kostya:master on Oct 10, 2019
@nuald deleted the update branch on October 10, 2019 at 23:24