pijul's theoretical performance has been advertised for as long as I can recall. Now that
pijul-0.10.1 has landed, I believe it is a good time to start caring about how our changes affect it. Therefore, in addition to the awesome work @pmeunier has done on test coverage, we should write and run benchmarks so that we can compare two pijul versions in terms of speed (and potentially memory usage).
In the long term, we should reach a point where any change that significantly degrades pijul's performance has to be strongly motivated.
Any idea about how to proceed?
Well, I think I read that Rust doesn't have good native support for benchmarks yet (though it's planned), so it might be too soon for very fine-grained testing (I mean, testing a particular part of the code).
If this is correct, maybe for now we could just benchmark the critical areas of the CLI, like
`pijul diff`, perhaps in Python, which is very well equipped for this kind of thing. Something like I did with my two tests mentioned a few days ago, but maybe using something more refined like
`timeit`. I don't know whether it's possible to measure RAM consumption, but as of now it's critical that we find a way to do so, and to stop a test automatically before it crashes everything (one of my two tests freezes any computer, because it quickly allocates at least 8 GB of RAM).
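To make the `timeit` idea concrete, here is a minimal sketch of what such a harness could look like. The repository path, subcommand, and repetition counts are placeholders, not part of any existing pijul test suite; the only assumption is a `pijul` binary on `PATH`.

```python
import shutil
import subprocess
import timeit

def run_diff(repo_dir="/tmp/pijul-bench-repo"):
    # Hypothetical benchmark target: one `pijul diff` invocation in a
    # prepared test repository. Output is discarded so that terminal
    # rendering does not pollute the timing.
    subprocess.run(
        ["pijul", "diff"],
        cwd=repo_dir,
        check=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )

if shutil.which("pijul"):
    # timeit.repeat returns one total per round; take the best round
    # (least noise) and divide by the number of invocations per round.
    best = min(timeit.repeat(run_diff, repeat=5, number=10)) / 10
    print(f"pijul diff: {best:.4f}s per run (best of 5 rounds)")
else:
    print("pijul not found on PATH; skipping benchmark")
```

Running the same script against two checked-out pijul versions would then give directly comparable numbers.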