It’s been a busy few days. There are a few things worth talking about.
New Documentation Framework
The first is that I’ve decided to switch gears in how the documentation for Rutilus is presented. Previously, we used a custom solution: a React-based Single Page Application (SPA). React makes building UIs easier, and SPAs perform well, so the approach had its pluses, and it was probably better than writing HTML and CSS by hand. However, when you roll your own solution, it’s hard to cover every base the way a dedicated framework would.
We had a few issues. The most prominent was just how much effort went into styling it: making lists render properly, or handling a nav list with too many links, which would bleed over into the next section. It didn’t look very good once we started adding a lot of documentation, and it wasn’t mobile friendly. Making it mobile friendly would have meant rewriting the CSS, which is too much work at this point.
Writing documentation for it was also very slow. I had to understand JSX (React’s HTML/JS hybrid) just to write documentation, and so would any future Rutilus maintainers. The site had to be compiled with Webpack, so every time I wanted to see a change in the browser, I had to wait at least five seconds. That hurts productivity.
I investigated static site generators. The premise of these frameworks is amusing, but it makes sense for things like documentation. In a world of dynamic sites, they let you write plain text, often Markdown, which they then parse and convert into a set of HTML, CSS, and sometimes JS files. They boil your text down into a static website. But for things like documentation, this is all you need: there’s no database involved. I found one in particular called MkDocs. It’s relatively new, having existed for only about two years as of today, but it has a polished feel, a 1.0.0+ release, and a lot of activity on GitHub. Those are the positive signs I look for when choosing a library to rely on. In fact, the team behind it has already fixed a bug I found and reported. See the next section of my blog about that. 😛
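As an illustration of how little configuration a static site generator needs, here’s a minimal MkDocs setup (the site name and nav entries are hypothetical; pages are just Markdown files in a docs/ folder):

```yaml
# mkdocs.yml — the entire configuration for a small documentation site
site_name: Rutilus Documentation   # hypothetical name, for illustration
nav:
  - Home: index.md                 # maps to docs/index.md
  - Getting Started: getting-started.md
```

From there, “mkdocs serve” gives a live-reloading preview in the browser, and “mkdocs build” renders everything into a static site. No Webpack wait.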
Deleting My System Root
So this is probably the funniest and most destructive thing I’ve ever done. Long story short, I deleted everything I could on my computer without sudo privileges, starting from my system root. This wiped my home folder. I was listening to YouTube at the time, and when I did it, my sound cut out, my Unity shell disappeared, my Windows key stopped bringing up the shell, the file explorer stopped functioning… I basically broke the matrix. But at least I didn’t lose any work, since I habitually push to my remotes as I work and keep everything in Git.
Here’s how I did it:
- MkDocs has a build command: “mkdocs build”.
- If you run the build command with the “--clean” flag, it deletes everything in the build directory before building.
- You can change the build directory by editing your MkDocs project’s configuration file.
These all make sense on their own, but by pure bad luck, the way I combined them was disastrous.
- I changed my build directory to “/”, since I wanted to build to the *project* root.
- I ran “mkdocs build --clean”.
MkDocs proceeded to delete everything it could from my system root, just as I had instructed it to.
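For reference, the build directory is controlled by the `site_dir` setting in mkdocs.yml. A sketch of a safe configuration (the default is `site`, a directory inside your project):

```yaml
# mkdocs.yml
site_name: Rutilus Documentation   # hypothetical
site_dir: site   # the default; `mkdocs build --clean` empties this directory
                 # first, so never point it at anything you want to keep
```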
The community behind MkDocs has proven to be a healthy one. I reported this via a GitHub issue, mostly as a warning against doing something as stupid as I’d just done. I didn’t really consider it a bug; I just wanted to offer some advice. They considered it a bug, and within a day of the report, they issued a release with a fix. Yay open source!
At least I’ve gotten plenty of experience setting up Linux systems since I end up reformatting so often. I plan to not tell my tools to delete my system root again in the future.
Asynchronous by Default
Node’s asynchronous-by-default model is a big difference from other programming languages I’ve used in the past. I actually encountered a thread on the Ruby on Rails subreddit, which I frequent, where this concept still boggles people:
Node I/O code is async by default and made sync explicitly (by using promises, etc.). Rails I/O code is sync by default and made async explicitly (by using threads, or libraries providing futures and promises).
An amusing aspect of promises is that in Rails we use them to enable async programming, while in Node we use them to write async code that reads like sync code.
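Here’s a sketch of that Node usage, using async/await on top of a promise (`delay` is a hypothetical helper wrapping setTimeout):

```javascript
// A promise-wrapped setTimeout: resolves after ms milliseconds.
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function run() {
  const order = [];
  order.push('first');
  await delay(10); // asynchronous under the hood, but reads top-to-bottom
  order.push('second');
  return order;
}

run().then((order) => console.log(order.join(' -> ')));
// prints "first -> second"
```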
Here’s a rough model of how the JavaScript runtime works:
- Invoking a function causes a frame to be added to the *stack*.
- Functions have the ability to allocate memory on the *heap*.
And here’s where this starts to differ from traditional programming languages:
- The event loop continuously watches the stack, and when the stack is empty, it grabs the message at the front of the message *queue*; processing that message invokes a function, which adds a frame to the stack.
- The APIs include a feature to add a message to the queue only after waiting a certain amount of time (this is what setTimeout does).
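The model above can be demonstrated in a few lines of JavaScript. The queued message doesn’t run until the synchronous code on the stack has finished:

```javascript
const log = [];

// setTimeout(fn, 0) asks the APIs to queue a message right away,
// but the event loop won't touch it while the stack is non-empty.
setTimeout(() => log.push('from the queue'), 0);

log.push('on the stack (1)');
log.push('on the stack (2)');

// Once the stack unwinds, the event loop dequeues messages in order:
setTimeout(() => console.log(log.join(' | ')), 0);
// eventually prints "on the stack (1) | on the stack (2) | from the queue"
```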
When you put all that together, you get an explanation of why callbacks exist and how they work. You’re calling a function and passing it another function to be run at some point in the future. The asynchronous function you called (for example, Node’s fs.readdir) has access to the APIs and will eventually use something like setTimeout’s queueing mechanism to add a message to the queue. Voilà: non-blocking. Reading the directory doesn’t block, because the code that runs when the read completes is represented by a message that isn’t added to the queue until the read is complete. And that code won’t run until the stack is empty, so higher-priority work (already on the stack) completes first.
If this sounded overwhelming, I suggest watching that YouTube video. It’s very enlightening.