Pijul's theoretical performance has been advertised for as long as I can recall. Now that
pijul-0.10.1 has landed, I believe it is a good time to start caring about how our changes affect it. Therefore, in addition to the awesome work @pmeunier has done on test coverage, we should write and run benchmarks so we can compare two pijul versions in terms of speed (and potentially memory usage).
In the long term, we should reach a point where any change that significantly impacts pijul's performance has to be thoroughly justified.
Any idea about how to proceed?
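As one possible starting point, here is a minimal sketch of a harness that times the same command under two different binaries. The pijul binary paths and the `--version` flag in the usage part are assumptions for illustration only; a real benchmark would run meaningful workloads (record, apply, clone) on a fixed test repository.

```python
import statistics
import subprocess
import sys
import time

def bench(cmd, runs=5):
    """Run `cmd` (an argv list) `runs` times; return (mean, stdev) wall time in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.pstdev(samples)

if __name__ == "__main__":
    # Hypothetical usage: pass two pijul builds on the command line, e.g.
    #   python bench.py ./pijul-0.10.0 ./pijul-0.10.1
    # Both the paths and the flag are placeholders, not verified CLI details.
    for binary in sys.argv[1:]:
        mean, dev = bench([binary, "--version"])
        print(f"{binary}: {mean:.3f}s ± {dev:.3f}s")
```

Something like this could run in CI against the previous release, so regressions show up as a diff in the timing report rather than in user complaints.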