New release: 1.0.0-alpha.1

Wow, what a week! Thanks to intensive testing by a number of people, I’m happy to announce our next release, 1.0.0-alpha.1.
A subtle bug was discovered in libpijul, which suggested an important simplification in the math; unfortunately, it also meant that the change format had to be modified.

All repositories on the Nest have been converted (and will start working with the new release in a few minutes, once the deployment ends).

If you have a local repository, you can convert it automatically with pijul upgrade, which will produce a .pijul.old directory that can safely be deleted.

Note that this is not a 1:1 conversion, since the bug I fixed was that changes were not recording all the information they needed to guarantee that unrecord is the exact inverse of apply. This comes from the brand-new algorithm: apply has been tested on large repositories, but unrecord + apply cycles not so much.
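
To give a toy picture of that invariant (the Rust types below are made up for illustration, nothing like libpijul's actual data structures): a change has to carry everything the reverse direction needs, so for instance a deletion must store the content it deleted, or unrecord cannot restore it.

```rust
// Toy model only: a change must record enough information for
// `unrecord` to be the exact inverse of `apply`.
#[derive(Clone, Debug, PartialEq)]
struct State(Vec<String>);

#[derive(Clone, Debug)]
enum Change {
    // Inserting needs only the position and the new line...
    Insert { at: usize, line: String },
    // ...but deleting must also remember what was deleted,
    // otherwise the reverse operation cannot reconstruct it.
    Delete { at: usize, line: String },
}

fn apply(s: &mut State, c: &Change) {
    match c {
        Change::Insert { at, line } => s.0.insert(*at, line.clone()),
        Change::Delete { at, .. } => {
            s.0.remove(*at);
        }
    }
}

fn unrecord(s: &mut State, c: &Change) {
    match c {
        Change::Insert { at, .. } => {
            s.0.remove(*at);
        }
        Change::Delete { at, line } => s.0.insert(*at, line.clone()),
    }
}

fn main() {
    let mut s = State(vec!["a".into(), "b".into()]);
    let before = s.clone();
    let c = Change::Delete { at: 1, line: "b".into() };
    apply(&mut s, &c);
    unrecord(&mut s, &c);
    // The round trip must restore the original state exactly.
    assert_eq!(s, before);
}
```

The assertion at the end is the property in question: applying and then unrecording a change must give back exactly the state you started from.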

There are still bugs in pijul, quite a few of them, but we’re solving them fast, and I’m confident that Pijul will finally become stable soon.

I want to thank everybody who contributed to the testing, the code, and the enthusiasm (it is very welcome, especially after years of work). I’ll recover the list of contributors for the old Pijul 0.12 as soon as I can, and publish it in the new repository.

I’d like to note this may have added a libclang dependency for me. Though I just upgraded my system recently, so maybe this is not new and was just deleted from my system somehow.

Since this update touches on unrecord, I have a question about it. Is it normal that after an unrecord the change is still stored in .pijul? (I'd understand if that's on purpose and it gets garbage-collected later.) Also, unrecord doesn't make pijul forget about a file (it still shows up in pijul ls). In general, it does not seem possible to make pijul forget about a file. Is this feature planned?

P.S.
pijul credit correctly errors with File "file" not in repository after that file is unrecorded, so I'm not sure what ls is doing.

The clang dependency is not new, it is due to zstd.

It’s on purpose, but is not yet GCed, and I agree a pijul cleanup would be nice to have.

That’s right, I miss it too. I’ve tried to reduce the number of commands in this release, but I agree that’s a fundamental one.

This made me wonder how many commands git has. Turns out it's about an order of magnitude more than I thought, haha. This has never been a problem for me, though. I think most people just stick to a small selection of commands (for example, the short list shown by git help) and only use some of the other ones when they have to, based on some tutorial on how to do a very particular thing, and only to get out of a hairy situation. In fact, I don't think I have even used every command shown in the git help list, but I've been thankful that some of the ones not in that list existed when I needed them.

As many as you need: https://git-man-page-generator.lokaltog.net/

More seriously, one of my goals here is to make the onboarding as smooth as possible, and I believe keeping the number of commands small is a big help: if you know you only have to read 20-30 pages of manual to master the tool (vs 200), you can quickly gain confidence that you will fully master it (who can say they fully master all of Git's commands?).

lmao nice site.

Hey, fair enough. I really do enjoy the fact that I can, and have, read the full Go and Lua specs, for example. Being able to hold the entire model of something in your head is a definite plus. To extend the analogy: that didn't require me to know the fine minutiae of their garbage collectors, just like, with pijul, you shouldn't need to know the implementation details of sanakirja.

On the other hand, if I do want to dig into some of the finer details of the Go memory model, or read further about Lua coroutines (pdf), I can find it, and that's good. So really it's just a matter of "how much separation is enough". Should it be a different binary, or is it OK to just corral its documentation? "For experts only." I think at the very least we can agree that git does not separate enough, and that pijul would benefit from separating more. I think we are on the same page there.

To draw another comparison, have you not mastered Go unless you have read the spec and all the extra documents on Go's memory and concurrency management, know every single package in the stdlib by heart, know how to write functions for its assembler, and so on? I think not. Or at least I don't think that hidden breadth of knowledge makes Go "a complicated language." It just makes it rich, because the only thing you should need to understand the semantics of any program is the spec. However, the spec wouldn't be enough to make you a master either; that kind of knowledge is more along the lines of what can be found in Effective Go, and is only really gained through experience and community. So I think having a simple and powerful core matters more than the total size of the project. git doesn't make enough of a distinction between what is core and what is extraneous: mastering git is more a matter of workflow than of knowing how to use git-symbolic-ref. I think if pijul makes this distinction properly, it won't actually matter if you can eventually do direct graph manipulation on the command line, or crazy things like that.

There is a model for pijul, which I hope you have in your head or on paper somewhere, that tells a story about how pijul "works". By "works" I mean, for example, the way a "function call" "works" by "transferring data and control" to a "function", at that level of abstraction. (I wanted to do my homework here about the "theoretical model of patches", but the links about it on the blog are broken.) I think that's very, very important for the project. Of course, it also matters that it is actually implementable. They are equally important: one is displacement, the other is force, and together they make work.

For example, what would be the definition of a "change"? Perhaps it's a function of a diff between a subset of the current working directory and the snapshot created from all patches in a channel? Perhaps, more accurately, it's that diff plus its set of dependencies on objects in the graph that the snapshot was made from? What is the well-defined, finite set of operations that a user can perform on changes, and their well-defined output? In other words, what is their interface? Whatever the answer to these questions, I think these basic things need to be clear, and the UI will probably just fall straight out of that. Other things, like the concept of "alive" and "dead" edges, are probably (hopefully) relevant to the implementation rather than to this model (although they are interesting).
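
To make those questions concrete, here is one possible shape the answer could take, written as hypothetical Rust (these are not libpijul's actual types or its real API, just a way of pinning down "diff plus dependencies" and "a finite set of operations"):

```rust
use std::collections::BTreeSet;

/// An opaque identifier for something already in the graph
/// (another change, or a vertex one of them introduced).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct ChangeId(u64);

/// A "change" in the sense discussed above: a diff between the working
/// copy and a channel's snapshot, plus the dependencies that diff needs.
#[derive(Clone, Debug)]
pub struct Change {
    pub hunks: Vec<String>,               // stand-in for the actual diff contents
    pub dependencies: BTreeSet<ChangeId>, // graph objects the hunks refer to
}

/// The finite, well-defined set of operations a user performs on changes:
/// this trait is the "interface" the questions above are asking about.
pub trait Channel {
    /// Diff the working copy against this channel's snapshot.
    fn record(&self, working_copy: &[String]) -> Change;
    /// Add a change; only legal if all of its dependencies are present.
    fn apply(&mut self, change: &Change) -> Result<ChangeId, String>;
    /// Remove a change; only legal if nothing else depends on it,
    /// and required to be the exact inverse of `apply`.
    fn unrecord(&mut self, id: ChangeId) -> Result<(), String>;
}
```

If something like this were the documented interface, the alive/dead edge machinery could stay hidden behind apply and unrecord as an implementation detail.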

I think if we can have this, not only can pijul be ready for 1.0, it will have a very real possibility of easily surpassing git, which doesn't answer these questions that well, but answers them well enough, and is flexible enough, that it mostly doesn't matter. The possibility of pijul answering these questions comprehensibly is, I think, a big reason why people like the darcs developers were looking intently at pijul.

I also want to say that we shouldn't be too surprised or worried if, as we go on, we realize many of the reasons that git is the way it is and end up making the same choices. I predict we will still end up with a lot less (in a good way), and I don't think pijul's power comes from being that different from git, but from making some concepts that we already see in Git behave properly: for example, better merges, better branching, safe cherry-picking, etc. Pijul is not here to revolutionize the add/commit loop. (Is it?)

Dunno how I got into all that, but there it is.

Was this command removed? I’m getting No such subcommand: "upgrade" :frowning:

It was removed indeed, because it never worked.

I still have repositories generated by version 0.12.2 of pijul. When version 1 eventually (hopefully) comes out of alpha and becomes stable, will there be a way to migrate/upgrade repositories from the older version of pijul?

I don’t have plans for that myself, maybe others do. Also, there are no current plans to change the feature set between now and 1.0; the remaining changes will be relatively minor bugfixes. The current version is already quite stable, we’re just being cautious not to announce stability too soon.

If you were using 0.12, which was always advertised as highly experimental, the current versions are far more stable and usable. The underlying storage engine (Sanakirja) is also more rigorously tested: it is much faster and consumes less disk space, so it could be tested at a much larger scale.

One thing you could do, though, is write a script that replays your 0.12 history, importing the patches into the new format at each step of the replay.
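
In case it helps, here is a very rough sketch of that replay loop, with two big caveats: how you export each historical state out of the 0.12 repository is left entirely up to you, and the flags used below (add -r, record -a -m) are assumptions from memory, so check pijul help against your installed version before relying on them.

```rust
use std::process::Command;

// Run one pijul invocation inside `repo` and bail out if it fails.
fn pijul(repo: &str, args: &[&str]) {
    let status = Command::new("pijul")
        .current_dir(repo)
        .args(args)
        .status()
        .expect("could not run pijul");
    assert!(status.success(), "pijul {:?} failed", args);
}

fn main() {
    // A fresh repository created beforehand with `pijul init`.
    let new_repo = "converted";
    // One exported snapshot directory per step of the old history, oldest
    // first; producing these from the 0.12 repository is up to you.
    let snapshots = ["state-0001", "state-0002"];

    for (step, _snapshot) in snapshots.iter().enumerate() {
        // 1. Copy the snapshot's files into `new_repo` (not shown here).
        // 2. Track everything and record one change for this step.
        pijul(new_repo, &["add", "-r", "."]); // `-r` is assumed, see `pijul help add`
        let msg = format!("replayed 0.12 step {}", step + 1);
        pijul(new_repo, &["record", "-a", "-m", msg.as_str()]); // `-a`/`-m` assumed too
    }
}
```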