The Quartz content management system that powers twoprops.net
is lean and fast. Still, it wants to delete all of the site's content and then rebuild it from scratch on every change. That means the site is unavailable (it will usually return a 404 error) if someone happens to access it within the few seconds after I make a change.
With dozens (dozens!) of regular users, this could happen several times in a century. Actually, it happens to me a lot, since I often want to see changes right after I make them.
There’s a simple fix that I’ve been meaning to make for many months now. I can let Quartz manage its own folder of static web files, and then update the directory the web server actually serves only after Quartz is all done. It’s a trivial script (which doesn’t mean that I didn’t manage to wipe out the site while creating it):
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

# Mirror the finished Quartz build into the live web root.
# --checksum compares file contents, so pages Quartz rewrote with
# identical bytes are left alone; the trailing slash on the source
# copies the directory's contents rather than the directory itself.
rsync -av --checksum \
    /home/twoprops/quartz/public/ \
    /var/www/twoprops
Using this system, an update to the site means that only the changed page(s) are unavailable, and only for the fraction of a second it takes rsync to replace each file.
—2p
addendum 20250709@18:29
I can also just incorporate the rsync command into my Quartz auto-update script:
#!/bin/bash
set -euo pipefail
IFS=$'\n\t'

LTIME=0
site="twoprops.net"
dir="/home/twoprops/quartz"
webdir="/var/www/twoprops"

while true
do
    # The content directory's ctime changes whenever entries in it
    # are added, removed, or renamed.
    ATIME=$(stat -c %Z "$dir/content")
    if [[ "$ATIME" != "$LTIME" ]]
    then
        echo
        echo "-- $(date +%Y%m%d@%H:%M) $site --"
        cd "$dir"
        npx quartz build
        #### selectively update the web directory
        rsync -av --checksum "$dir/public/" "$webdir"
        LTIME=$ATIME
    fi
    sleep 5
done
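Since this watcher has to stay running for the site to update itself, one way to keep it alive across logouts and reboots is a small systemd unit. This is only a sketch with assumed paths, user name, and script location, none of which appear above:

```ini
# /etc/systemd/system/quartz-watch.service  (hypothetical name and paths)
[Unit]
Description=Rebuild and sync the Quartz site when content changes
After=network.target

[Service]
# Assumed location of the auto-update script shown above.
ExecStart=/home/twoprops/bin/quartz-watch.sh
User=twoprops
Restart=on-failure

[Install]
WantedBy=multi-user.target
```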