I still use cloud services, sure, but I don’t trust them with my data anymore. My music library was absolutely crushed by Apple Music. Also, data is leaking everywhere, and what is supposed to be trusted shouldn’t be (I’m looking at you, Apple, and your dubious encryption claims).
That said, let’s talk about backups. Thanks to some rave reviews here and there on the internet, I decided to use restic as my backup solution. I want to back up my important files to a USB-connected disk and regularly upload them to an object-storage service as an easy and cheap off-site solution. I just needed a way to automate its use.
First, a little bit of background: the backup disk is encrypted and mounted on demand through this /etc/fstab entry:
/dev/mapper/backup_crypt /backup ext4 defaults,noauto,user,x-systemd.automount,x-systemd.requires=systemd-cryptsetup@backup_crypt.service 0 2
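For completeness, the systemd-cryptsetup@backup_crypt.service dependency above implies a matching /etc/crypttab entry telling systemd how to unlock the disk. A minimal sketch, where the UUID and the key file path are placeholders:

# /etc/crypttab
# backup_crypt appears as /dev/mapper/backup_crypt once unlocked
backup_crypt UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /etc/backup/disk.key luks,noauto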
To run the backups, I initially wrote a script driven by cron jobs, but it had the awful tendency to crash. I suspect the mix of systemd automount and restic didn’t play well together, but I did not investigate in that direction.
Instead, I searched for a way to have systemd manage this fun zoo. That way, the unit could depend on the disk being mounted and ensure that the /backup mount is available before the backup process starts. Plus, having a learning opportunity always makes things more fun for me. Instead of going down the analytical route, I went down the learning one.
As always when it comes to system administration, the Arch Wiki delivers. systemd offers timer units that can replace cron jobs. They work a bit differently, and it’s interesting.
#!/bin/bash
set -e -o pipefail

DIR="/etc/backup"
BACKUP=$1

if [ -z "$BACKUP" ]
then
    >&2 echo "Please pass the backup name as the first argument."
    exit 1
fi

BACKUP_RETENTION_DAYS=7
BACKUP_RETENTION_WEEKS=4
BACKUP_RETENTION_MONTHS=6
BACKUP_RETENTION_YEARS=2

BACKUP_ENV="${DIR}/${BACKUP}.env"
BACKUP_FILES="${DIR}/${BACKUP}.files"
BACKUP_IEXCLUDE="${DIR}/${BACKUP}.iexclude"
BACKUP_OPTS=""

if [ ! -f "${BACKUP_ENV}" ]
then
    >&2 echo "[$BACKUP] ${BACKUP_ENV} does not exist. Exiting."
    exit 1
fi

if [ ! -f "${BACKUP_FILES}" ]
then
    >&2 echo "[$BACKUP] ${BACKUP_FILES} does not exist. Exiting."
    exit 1
fi

# Load the restic configuration (RESTIC_REPOSITORY, etc.)
source "${BACKUP_ENV}"

if [ -f "${BACKUP_IEXCLUDE}" ]
then
    BACKUP_OPTS="--iexclude-file ${BACKUP_IEXCLUDE} ${BACKUP_OPTS}"
fi

# Initialize the repository on first run
if [ ! -d "${RESTIC_REPOSITORY}" ]
then
    restic init & wait $!
fi

# Remove locks in case other stale processes kept them in
restic unlock & wait $!

# Do the actual backup
restic backup \
    --exclude-caches \
    ${BACKUP_OPTS} \
    --files-from "${BACKUP_FILES}" & wait $!

# Remove old snapshots
restic forget \
    --verbose \
    --prune \
    --keep-daily ${BACKUP_RETENTION_DAYS} \
    --keep-weekly ${BACKUP_RETENTION_WEEKS} \
    --keep-monthly ${BACKUP_RETENTION_MONTHS} \
    --keep-yearly ${BACKUP_RETENTION_YEARS} & wait $!

# Verify data integrity
restic check --read-data & wait $!

>&2 echo "[$BACKUP] Backup done"
The above script is executed like this: backup.sh name. It looks for the different files that configure the restic run (environment variables, list of paths to back up, etc.). Once it has found everything, it runs the backup and the data-verification process.
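Concretely, for a backup named nas-config, the script expects /etc/backup/nas-config.env and /etc/backup/nas-config.files (plus an optional nas-config.iexclude). A minimal sketch, with made-up paths and values:

# /etc/backup/nas-config.env
# Exported so that the restic child processes inherit them
export RESTIC_REPOSITORY="/backup/nas-config"
export RESTIC_PASSWORD_FILE="/etc/backup/nas-config.password"

# /etc/backup/nas-config.files
# One path to back up per line
/etc/config
/srv/nas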
Now, we need to create a template unit file for systemd, backup@.service:
[Unit]
Description=Backup with Restic (%i)
# Do not run in case another backup is already running
Conflicts=backup@.service
# This service requires the backup disk to be mounted
Requires=backup.automount

[Service]
Type=simple
Nice=10
ExecStart=/etc/backup/backup.sh %i
Environment="HOME=/root"

[Install]
WantedBy=multi-user.target
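After a systemctl daemon-reload, an instance of the template can be tested in isolation before scheduling anything:

systemctl start backup@nas-config.service
journalctl -u backup@nas-config.service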
Then a timer unit, backup@nas-config.timer, schedules it:

[Unit]
Description=Backup Nas Configuration

[Timer]
# Run every day at 2:15am
OnCalendar=*-*-* 2:15:00
# My assumption: this should allow the job to start later if it wasn't started due to a conflict
Persistent=true

[Install]
WantedBy=timers.target
Now we run:
systemctl daemon-reload
systemctl enable backup@nas-config.timer
systemctl start backup@nas-config.timer
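To confirm that the timer is armed and see when it will fire next:

systemctl list-timers backup@nas-config.timer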
And voilà! It’s ready. Now we can repeat the same solution for sending backups to an off-site location. Also, the good thing is that execution logs are available with journalctl -f -u backup@nas-config.service.
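For the off-site copy, the same script and units can be reused with a second configuration pointing at the object storage. A sketch for an S3-compatible repository, where the endpoint, bucket, and credentials are all placeholders:

# /etc/backup/nas-config-offsite.env
export RESTIC_REPOSITORY="s3:s3.example.com/my-backup-bucket"
export RESTIC_PASSWORD_FILE="/etc/backup/nas-config-offsite.password"
export AWS_ACCESS_KEY_ID="placeholder"
export AWS_SECRET_ACCESS_KEY="placeholder"

One caveat: the [ ! -d "${RESTIC_REPOSITORY}" ] check in the script only makes sense for a local path, so the restic init step would need to be handled differently for a remote repository.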
The fix was to call (epa-file-enable). Otherwise, the returned value of (auth-source-user-and-password) was (nil nil nil). Why? Because apparently, the epa package wasn’t enabled. Why? I don’t know…

And the more you know…
I switched to the OpenWRT firmware on my router last year (yes, it was during a lockdown) and never went back to anything else. The OpenWRT team has managed to produce quality software that is both very powerful and usable. Unlike MikroTik, which is hardly approachable by a beginner, OpenWRT found the right balance. Let me be clear, though: network engineering is something I dread, as I have a shallow understanding of the networking stack.
Although I had been using OpenWRT for a year, I still had the classic all-in-one network architecture where all my devices (computers, friends’ phones, IoT devices, …) were on the same network. As IoT devices are prone to hacking, and as we’re never quite sure of our friends’ computer hygiene, it is better to isolate as many devices as possible. I want to sleep at night knowing my lightbulb won’t hack my NAS because I gave my WiFi password to someone. Yes, that sounds ridiculous, but if the past decade has taught me anything, it’s that anything can happen, especially if it is stupid.
How long could it take me to improve my network? A few hours, tops? Of course not. Dunning and Kruger were right again, and I’m too proud to reveal the number of hours I spent doing it. If it had been a walk in the park, I wouldn’t be writing a cathartic article about it.
This article is for you, the beautiful stranger from the future who had the same idea as me and found this article by nervously scouring the web. I hope this will help you. I hope this will save you some time.
After a 1-minute design sprint while sipping my morning coffee, I ended up wanting to create 3 different networks. The first for trusted devices, the second for guest devices, and the third for IoT devices.
The rules are simple:
I wanted to experiment with several OpenVPN tunnels for different reasons. The first one is protecting my privacy from potentially spying IoT devices. Now that I can afford the luxury, I prefer to hide my real country. It’s not much, but it’s always a plus. I could have used the same VPN for guests, but having an IP in a different country brings some side effects. I didn’t want guests to have the Google homepage in a foreign language.
This setup is not perfect security-wise nor privacy-wise, and I might tighten it in the future once I get better at maintaining it.
By following this rich article by Sven Kiljan (hi Sven! Thanks!), I was able to set up VLANs and OpenVPN and had a working version of my network pretty quickly. Unfortunately, this setup quickly showed its limits: the way OpenVPN routing is handled breaks the ability of my trusted network to communicate with my IoT network. I spent a ridiculous amount of time trying to understand how to route between VLANs. I tried a lot of things and couldn’t find information on the web. There’s something maddening about not googling the right terms, the ones that would surface the solution. When you don’t know what you don’t know, things can take a long time.
If you want multiple VPN tunnels, and you’re going to route traffic from some subnets to different VPNs based on arbitrary rules, don’t start by looking at ip route add or ip route get, because it is vertiginous. Here is the grammar of the ip route command.
# ip route help
Usage: ip route { list | flush } SELECTOR
       ip route save SELECTOR
       ip route restore
       ip route showdump
       ip route get [ ROUTE_GET_FLAGS ] ADDRESS
                            [ from ADDRESS iif STRING ]
                            [ oif STRING ] [ tos TOS ]
                            [ mark NUMBER ] [ vrf NAME ]
                            [ uid NUMBER ] [ ipproto PROTOCOL ]
                            [ sport NUMBER ] [ dport NUMBER ]
       ip route { add | del | change | append | replace } ROUTE
SELECTOR := [ root PREFIX ] [ match PREFIX ] [ exact PREFIX ]
            [ table TABLE_ID ] [ vrf NAME ] [ proto RTPROTO ]
            [ type TYPE ] [ scope SCOPE ]
ROUTE := NODE_SPEC [ INFO_SPEC ]
NODE_SPEC := [ TYPE ] PREFIX [ tos TOS ]
             [ table TABLE_ID ] [ proto RTPROTO ]
             [ scope SCOPE ] [ metric METRIC ]
             [ ttl-propagate { enabled | disabled } ]
INFO_SPEC := { NH | nhid ID } OPTIONS FLAGS [ nexthop NH ]...
NH := [ encap ENCAPTYPE ENCAPHDR ] [ via [ FAMILY ] ADDRESS ]
      [ dev STRING ] [ weight NUMBER ] NHFLAGS
FAMILY := [ inet | inet6 | mpls | bridge | link ]
OPTIONS := FLAGS [ mtu NUMBER ] [ advmss NUMBER ] [ as [ to ] ADDRESS ]
           [ rtt TIME ] [ rttvar TIME ] [ reordering NUMBER ]
           [ window NUMBER ] [ cwnd NUMBER ] [ initcwnd NUMBER ]
           [ ssthresh NUMBER ] [ realms REALM ] [ src ADDRESS ]
           [ rto_min TIME ] [ hoplimit NUMBER ] [ initrwnd NUMBER ]
           [ features FEATURES ] [ quickack BOOL ] [ congctl NAME ]
           [ pref PREF ] [ expires TIME ] [ fastopen_no_cookie BOOL ]
TYPE := { unicast | local | broadcast | multicast | throw |
          unreachable | prohibit | blackhole | nat }
TABLE_ID := [ local | main | default | all | NUMBER ]
SCOPE := [ host | link | global | NUMBER ]
NHFLAGS := [ onlink | pervasive ]
RTPROTO := [ kernel | boot | static | NUMBER ]
PREF := [ low | medium | high ]
TIME := NUMBER[s|ms]
BOOL := [1|0]
FEATURES := ecn
ENCAPTYPE := [ mpls | ip | ip6 | seg6 | seg6local | rpl ]
ENCAPHDR := [ MPLSLABEL | SEG6HDR ]
SEG6HDR := [ mode SEGMODE ] segs ADDR1,ADDRi,ADDRn [hmac HMACKEYID] [cleanup]
SEGMODE := [ encap | inline ]
ROUTE_GET_FLAGS := [ fibmatch ]
This help message is not helpful. It’s a giant fuck you to all beginners. I won’t say I took it personally, but it is a reminder of how arid and hostile Linux can be. It’s fucking Arrakis.
I was trying to understand routes, tables, and the poor life choices that drove me into this situation when I did a U-turn and ultimately decided to ask for help online. A stranger (who apparently survived Arrakis’ desert) pointed me to VPN Policy Based Routing. It was the solution! The name sounds pompous and complicated, like AWS IAM Policies. But not this one. This one is fine. I promise. I was finally able to access the IoT subnet from the trusted subnet. I had to switch from Sven’s OpenVPN way of doing things to ProtonVPN’s way.
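For the record, once the vpn-policy-routing package is installed, a policy is only a few lines of UCI configuration in /etc/config/vpn-policy-routing. A minimal sketch, where the interface name and the IoT subnet are assumptions about my setup:

config policy
        option name 'IoT traffic through the VPN'
        option interface 'protonvpn'
        option src_addr '192.168.3.0/24'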
My second most significant hurdle was HomeKit. Oh boy.
First, HomeKit uses mDNS, also called dns-sd, also called Bonjour, also called avahi, depending on your age, technical level, platform, and use. The principle is “simple”: UDP packets are broadcast to the multicast IP 224.0.0.251 on port 5353, and devices answer on the same port. But besides configuring the firewall so data can move freely from one zone to another, there is a catch: UDP multicast stops at the subnet boundary (224.0.0.251 is link-local, so routers won’t forward it). Packets need to be relayed to other subnets. Otherwise, the only things mDNS software will see are devices located in the same subnet. This is precisely what I didn’t want.
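You can watch this chatter for yourself; the interface name below is an assumption, adjust it to your own bridge:

# Watch mDNS traffic on the trusted bridge
tcpdump -i br-lan -n udp port 5353

# Or list the services avahi currently sees
avahi-browse --all --terminate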
I spent a few hours trying to find the right way of relaying these packets. I went as far as building software from source on my router, until I discovered that iptables can forward UDP packets. I was again fearful of trying it, but I finally found that forwarding packets is a core feature of avahi, the Linux mDNS daemon! It’s called reflector. What a nice name for a feature! Did I waste my time? No. Each and every learning opportunity is a blessing.
# /etc/avahi/avahi-daemon.conf
[reflector]
# Enable relaying packets over all local interfaces
enable-reflector=yes
reflect-ipv=no
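After editing the file, the daemon needs a restart to pick up the change; on OpenWRT, assuming the avahi-daemon package’s init script:

/etc/init.d/avahi-daemon restart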
At this point, I only had to open ports 80, 443, 51827, and 52934 for HomeKit devices to be reachable from my trusted devices. Some of these ports might be superfluous, but I need to keep some tidying work for future lockdowns.
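For reference, the corresponding rule in /etc/config/firewall could look like the sketch below; the zone names lan and iot are assumptions based on my setup, and I assume TCP here:

config rule
        option name 'Allow-HomeKit-trusted-to-iot'
        option src 'lan'
        option dest 'iot'
        option proto 'tcp'
        option dest_port '80 443 51827 52934'
        option target 'ACCEPT'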
I now have 3 isolated SSIDs and I can use the IoT devices that I enjoy.
Error in private config: config.el, (void-function auth-source-user-and-password)
Being an Emacs noob, I was a bit puzzled. From what I understood, auth-source is a built-in package, and this function should be available. I joined the Doom Discord server, where I found help in just a few minutes.
From what I understand, Doom optimizes Emacs a lot by lazy-loading packages. This is why it is so quick to launch (and a joy to use). In my config.el file, I was calling auth-source-user-and-password directly, and this was the source of the problem. When configuring packages in Doom, make sure you use hooks such as after!, which runs the code once a package has loaded. In my case, it looked like this in the end:
(after! auth-source
  (setq foobar-credentials
        (auth-source-user-and-password "foobar")))
This solved the error at launch and let me use the credential in my configuration.
Scrolling through the infinity of what Bandcamp Daily has become, I discovered a remarkable psychedelic-rock album by a band called Shadow Show. This trio from Detroit delivers a solid 10-track album. Kate Derringer (also the trio’s bass player) did great work on the mix (as well as on the bass!). It sounds like it comes straight from the 60s, but with a cleaner touch. The vocal harmonies are especially great. I can’t wait to hear the vinyl.
Doom Emacs is insanely great. It takes a bit of time to get used to, but it’s a very rewarding thing to do. Org Mode is simply great and deserves to have more users.
RITUALS is the culmination of more than 6 years of artistic work for Flore. It is also the culmination of 6 years of hard work on POLAAR. It is impossible to dissociate these two endeavors. During all these years, Flore has relentlessly poured her energy into both – after all, the label’s inaugural release was her own RITUAL Part. 1. As a friend, I am so proud of what she has achieved, and of the consistency she has shown for 20 years.
Her album is a treat, and I am so glad we released it on our own imprint.
No introduction is the best kind of introduction, especially when I feel lazy.
Cheers,
Marc
Hey there. What’s up? Doing good? I’ve been on a hiatus but hope to get back to posting links regularly.
Enjoy,
Marc
Image Credit & Copyright: Data – Steve Milne & Barry Wilson, Processing – Steve Milne
Ingredients, for 220 ml of sauce:
Preparation:
In his Whiskey’n’Weed interview (which you should listen to, by the way), Elon Musk said something that struck me:
“A company is essentially a cybernetic collective made of people and machines.”
It’s a very particular way of looking at the world. Of course, it’s possible that we’re already merging with the machine. Of course, we might already be cyborgs; we might just not be conscious of it. One question matters: by merging with the machine and creating a cybernetic collective, what happens to our individuality?
Cheers,
Marc
Image Credit: The Blue Horsehead Nebula in Infrared by WISE, IRSA, NASA; Processing & Copyright: Francesco Antonucci
Bitcoin is a rabbit hole. It’s impossible not to fall into it once you start getting interested in this technology. The medium is the message, and trying to understand the impact of Bitcoin (or cryptocurrencies at large) on the world is daunting. What is money? What is privacy? What are states? Why should we organize this way when we have this kind of tech? Can we be sure that it’s a good thing for us? Well, we don’t know, and we can’t know for sure. It took the telephone 75 years to reach 50 million users. Radio took 38 years, TV 13, the Internet 4, Facebook 2, and Pokemon Go 19 days. Social experiments get distributed to the public quicker and quicker, especially now that we don’t rely on custom-made devices to access them. Will hyper-bitcoinization happen, as maximalists predict?
Happy reading,
Marc
Image Credit & Copyright: The Pencil Nebula in Red and Blue by José Joaquín Perez.
Peace,
Marc
Image Credit: Rings Around the Ring Nebula by Hubble, Large Binocular Telescope, Subaru Telescope; Composition & Copyright: Robert Gendler.