A backup solution with restic, systemd, and backblaze

November 23, 2021 · Blog

Linux’s beauty lies in coercing you to learn a shitload of stuff to administer your own system. I must admit I feel I’ve been dumbed down by all these years of blissful macOS usage. Last year I built myself a Linux desktop, and since then, I’ve been taking my digital sovereignty more seriously.

I still use cloud services for sure, but I don’t trust them anymore with my data. My Music library was absolutely crushed by Apple Music. Also, data is leaking everywhere, and what is supposed to be trusted shouldn’t be (I’m looking at you, Apple, and your dubious encryption claims.)

That said, let’s talk about backups. Due to some rave reviews here and there on the internet, I decided to use restic as my backup solution. I want to back up my important files to a USB-connected disk and regularly upload them to an object-storage service for an easy and cheap off-site solution. I just needed a way to automate its use.

First, a little bit of background:

  • All my computer’s disks are encrypted and decrypted at boot.
  • My USB disk is also encrypted and decrypted at boot.
  • The USB disk is not mounted at boot; instead, systemd auto-mounts it when the disk is first accessed, like this:

/dev/mapper/backup_crypt /backup ext4 defaults,noauto,user,x-systemd.automount,x-systemd.requires=systemd-cryptsetup@backup_crypt.service 0 2
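
For context, the backup_crypt mapping referenced in that fstab entry would come from an /etc/crypttab declaration along these lines (the UUID and key file path below are placeholders, not my actual values):

```
# /etc/crypttab
# <name>        <underlying device>                         <key file>       <options>
backup_crypt    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   /etc/backup.key  luks
```

With the key file readable only by root on the (already encrypted) root disk, the USB disk can be unlocked without a passphrase prompt at boot.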

To back up, I initially wrote a script run through a cron job, but it had the awful tendency to crash. I suspect the mix of systemd automount and restic didn’t play well together, but I did not investigate in that direction.

Instead, I searched for a way to have systemd manage this fun zoo. That way, the unit could depend on the disk being mounted, ensuring that the /backup mount is available before the backup process starts. Plus, having a learning opportunity always makes things more fun for me. Instead of going the analytical route, I went the learning one.

As always when it comes to system administration, the Arch Wiki delivers. Systemd offers a Timers primitive that can replace cron jobs. It works a bit differently, and it’s interesting.


#!/usr/bin/env bash
set -e -o pipefail

if [ -z "$BACKUP" ]; then
    >&2 echo "Please define backup name through the BACKUP environment variable."
    exit 1
fi

source "/etc/backup/${BACKUP}.env"

if [ ! -f "${BACKUP_ENV}" ]; then
    >&2 echo "[$BACKUP] ${BACKUP_ENV} does not exist. Exiting."
    exit 1
fi

if [ ! -f "${BACKUP_FILES}" ]; then
    >&2 echo "[$BACKUP] ${BACKUP_FILES} does not exist. Exiting."
    exit 1
fi

BACKUP_OPTS="--iexclude-file $BACKUP_IEXCLUDE ${OPTS}"

# Initialize the repository if it does not exist yet
restic snapshots > /dev/null 2>&1 || restic init

# Remove locks in case other stale processes kept them in
restic unlock

# Do the actual backup
restic backup \
    $BACKUP_OPTS \
    --exclude-caches \
    --files-from "$BACKUP_FILES"

# Remove old snapshots
restic forget \
    --verbose \
    --prune \
    --keep-daily "$BACKUP_RETENTION_DAYS" \
    --keep-weekly "$BACKUP_RETENTION_WEEKS" \
    --keep-monthly "$BACKUP_RETENTION_MONTHS" \
    --keep-yearly "$BACKUP_RETENTION_YEARS"

# Verify data integrity
restic check --read-data

>&2 echo "[$BACKUP] Backup done"

The above script is executed with the backup name passed through the BACKUP environment variable (for instance, BACKUP=nas-config). It looks for the different files that configure the restic run (environment variables, list of files to back up, etc.). Once it has found everything, it runs the backup and the data-verification process.
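
As an illustration, here is what the configuration file for a hypothetical nas-config backup could look like. The variable names are those the script expects; every value below is a placeholder, and the Backblaze-specific variables assume a B2 repository:

```shell
# /etc/backup/nas-config.env (hypothetical example)

# Where restic stores the backup, and how it decrypts it
export RESTIC_REPOSITORY="b2:my-backup-bucket:nas-config"
export RESTIC_PASSWORD_FILE="/etc/backup/nas-config.password"

# Backblaze B2 credentials (placeholders)
export B2_ACCOUNT_ID="000000000000"
export B2_ACCOUNT_KEY="K000placeholder"

# Files consumed by the backup script
export BACKUP_ENV="/etc/backup/nas-config.env"
export BACKUP_FILES="/etc/backup/nas-config.files"
export BACKUP_IEXCLUDE="/etc/backup/nas-config.iexclude"

# Snapshot retention policy
export BACKUP_RETENTION_DAYS=7
export BACKUP_RETENTION_WEEKS=5
export BACKUP_RETENTION_MONTHS=12
export BACKUP_RETENTION_YEARS=2
```

Since the script sources this file, everything restic needs (repository, credentials, retention) lives in one place per backup name.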

Now, we need to create a template unit file for systemd:

# /etc/systemd/system/backup@.service
[Unit]
Description=Backup with Restic (%i)
# Do not run in case another backup is already running
# This service requires the backup disk to be mounted
RequiresMountsFor=/backup

[Service]
Type=oneshot
# The script reads the backup name from the BACKUP environment variable
Environment=BACKUP=%i
# The script path and name are assumptions; point this at the script above
ExecStart=/etc/backup/backup.sh %i

# /etc/systemd/system/backup@nas-config.timer
[Unit]
Description=Backup Nas Configuration

[Timer]
# Run every day at 2:15am
OnCalendar=*-*-* 2:15:00
# My assumption: Persistent=true should allow the job to start later if it missed its scheduled start
Persistent=true

[Install]
WantedBy=timers.target


Now we run:

systemctl daemon-reload
systemctl enable backup@nas-config.timer
systemctl start backup@nas-config.timer

And voilà! It’s ready. Now we can repeat the same solution for sending backups to an off-site location. As a bonus, execution logs are available with journalctl -f -u backup@nas-config.service

auth-source issues with .authinfo.gpg and org2blog

November 23, 2021 · Blog

Of course, Emacs has its idiosyncrasies. In my latest episode of “let’s waste some time after a complete reinstall”, I enjoyed discovering that I needed to add this line to my Doom config:

(epa-file-enable)

Otherwise, the returned value of (auth-source-user-and-password) was (nil nil nil). Why? Because apparently, the epa package wasn’t enabled. Why? I don’t know…

And the more you know…

An excerpt from Cloud Atlas by David Mitchell

August 23, 2021Blog

“Only three or four times in my youth did I glimpse the Isles of Joy before fogs, depressions, cold fronts, ill winds, and contrary currents carried them away… Believing them to be the lands of adulthood, I thought I would see them again in the course of my voyage; so I did not take the trouble to record their latitude, their longitude, or their approach. Young, arrant fool. What would I not give today for a definitive map of an immutable ineffable? To possess, if such a thing existed, a cartography of clouds.”

Isolating trusted, guest, and IoT HomeKit devices with OpenWRT

April 6, 2021 · Blog

Lockdowns are a perfect excuse to spend a ridiculous amount of time pretending reality doesn’t exist by improving my home network security.

I switched to the OpenWRT firmware on my router last year (yes, it was during a lockdown) and never went back to anything else. The OpenWRT team has managed to produce quality software that is both very powerful and usable. Unlike MikroTik, which is hardly approachable by a beginner, OpenWRT found the right balance. Let me be clear, though: network engineering is something I dread, as I have a shallow understanding of the networking stack.

Though I had been using OpenWRT for a year, I still had the classic all-in-one network architecture where all my devices (computers, friends’ phones, IoT devices, …) shared the same network. As IoT devices are prone to hacking, and as we’re never quite sure of our friends’ computer hygiene, it is better to isolate as many devices as possible. I want to sleep at night knowing my lightbulb won’t hack my NAS because I gave my WiFi password to someone. Yes, that sounds ridiculous, but if the past decade has taught me anything, it’s that anything can happen, especially if it is stupid.

How long could it take me to improve my network? A few hours, tops? Of course not. Dunning and Kruger were right again, and I’m too proud to reveal the number of hours I spent doing it. If it had been a walk in the park, I wouldn’t be writing a cathartic article about it.

This article is for you, the beautiful stranger from the future who had the same idea as me and found this article by nervously scouring the web. I hope this will help you. I hope this will save you some time.

After a 1-minute design sprint while sipping my morning coffee, I ended up wanting to create 3 different networks. The first for trusted devices, the second for guest devices, and the third for IoT devices.

The rules are simple:

  1. Only trusted devices can access IoT devices.
  2. Guest and IoT devices can access the internet, but only through different VPN tunnels (because why not?!).
  3. Guest and IoT devices live in separate networks and cannot cross networks to talk to each other.

I wanted to experiment with several OpenVPN tunnels for different reasons. The first one is protecting my privacy from potentially spying IoT devices. Now that I can afford the luxury, I prefer to hide my real country. It’s not much, but it’s always a plus. I could have used the same VPN for guests, but having an IP in a different country brings some side effects. I didn’t want guests to have the Google homepage in a foreign language.

This setup is not perfect security-wise nor privacy-wise, and I might tighten it in the future once I get better at maintaining it.

By following this rich article by Sven Kiljan (hi Sven! Thanks!), I could set up VLANs and OpenVPN and have a working version of my network pretty quickly. Unfortunately, this setup quickly showed its limits, as the way the OpenVPN routing is handled breaks the possibility for my trusted network to communicate with my IoT network. I spent a ridiculous amount of time trying to understand how to route VLANs together. I tried a lot of things and couldn’t find information on the web. There’s something magical about somehow not googling the right words that would bring you the solution. When you don’t know what you don’t know, things can take a long time.

If you want multiple VPN tunnels, and if you’re going to route traffic from some subnets to different VPNs based on some arbitrary rules, don’t start looking at ip route add, nor at ip route get, because it is vertiginous.

Here is the grammar for the ip route command.

# ip route help
Usage: ip route { list | flush } SELECTOR
       ip route save SELECTOR
       ip route restore
       ip route showdump
       ip route get [ ROUTE_GET_FLAGS ] ADDRESS
                            [ from ADDRESS iif STRING ]
                            [ oif STRING ] [ tos TOS ]
                            [ mark NUMBER ] [ vrf NAME ]
                            [ uid NUMBER ] [ ipproto PROTOCOL ]
                            [ sport NUMBER ] [ dport NUMBER ]
       ip route { add | del | change | append | replace } ROUTE
SELECTOR := [ root PREFIX ] [ match PREFIX ] [ exact PREFIX ]
            [ table TABLE_ID ] [ vrf NAME ] [ proto RTPROTO ]
            [ type TYPE ] [ scope SCOPE ]
             [ table TABLE_ID ] [ proto RTPROTO ]
             [ scope SCOPE ] [ metric METRIC ]
             [ ttl-propagate { enabled | disabled } ]
INFO_SPEC := { NH | nhid ID } OPTIONS FLAGS [ nexthop NH ]...
	    [ dev STRING ] [ weight NUMBER ] NHFLAGS
FAMILY := [ inet | inet6 | mpls | bridge | link ]
OPTIONS := FLAGS [ mtu NUMBER ] [ advmss NUMBER ] [ as [ to ] ADDRESS ]
           [ rtt TIME ] [ rttvar TIME ] [ reordering NUMBER ]
           [ window NUMBER ] [ cwnd NUMBER ] [ initcwnd NUMBER ]
           [ ssthresh NUMBER ] [ realms REALM ] [ src ADDRESS ]
           [ rto_min TIME ] [ hoplimit NUMBER ] [ initrwnd NUMBER ]
           [ features FEATURES ] [ quickack BOOL ] [ congctl NAME ]
           [ pref PREF ] [ expires TIME ] [ fastopen_no_cookie BOOL ]
TYPE := { unicast | local | broadcast | multicast | throw |
          unreachable | prohibit | blackhole | nat }
TABLE_ID := [ local | main | default | all | NUMBER ]
SCOPE := [ host | link | global | NUMBER ]
NHFLAGS := [ onlink | pervasive ]
RTPROTO := [ kernel | boot | static | NUMBER ]
PREF := [ low | medium | high ]
TIME := NUMBER[s|ms]
BOOL := [1|0]
ENCAPTYPE := [ mpls | ip | ip6 | seg6 | seg6local | rpl ]
SEG6HDR := [ mode SEGMODE ] segs ADDR1,ADDRi,ADDRn [hmac HMACKEYID] [cleanup]
SEGMODE := [ encap | inline ]
ROUTE_GET_FLAGS := [ fibmatch ]

This help message is not helpful. It’s a giant fuck you to all beginners. I’m not saying I took it personally, but it is a reminder of how arid and hostile Linux can be. It’s fucking Arrakis.

I was trying to understand routes, tables, and the poor life choices that drove me into this situation when I did a U-turn and ultimately decided to ask for help online. A stranger (who apparently survived Arrakis’ desert) pointed me to VPN Policy Based Routing. It was the solution! The name sounds pompous and complicated, like AWS IAM Policies. But not this one. This one is fine. I promise. I was finally able to access the IoT subnet from the trusted subnet. I had to switch from Sven’s OpenVPN way of doing things to ProtonVPN’s way.
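
For the curious: what such a package does under the hood boils down to plain policy routing. Traffic from a given source subnet is sent to a dedicated routing table whose default route points at the tunnel. A minimal sketch, where the subnet, the table number, and the interface name are all made up for illustration:

```shell
# Send everything originating from the IoT subnet to routing table 101
ip rule add from 192.168.30.0/24 lookup 101
# In table 101, the default route goes through the IoT VPN tunnel interface
ip route add default dev tun_iot table 101
```

The package maintains these rules and tables for you, which is exactly the part I was failing to do by hand.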

My second most significant hurdle was HomeKit. Oh boy.

First, HomeKit uses mDNS, also called dns-sd, also called Bonjour, also called avahi, depending on your age, technical level, platform, and use. The principle is “simple”: UDP packets are broadcast to a multicast IP on port 5353, and devices answer on the same port. But beyond configuring the firewall so data can move freely from one zone to another, UDP multicast stops at the subnet boundary (to prevent DDoS amplification or something like that). Packets need to be relayed to other subnets; otherwise, the only things mDNS software will see are devices located in the same subnet. This is precisely what I didn’t want.

I spent a few hours trying to find the right way of relaying these packets. I tried building software from source on my router, then discovered that iptables can forward UDP packets. I was again fearful of trying it, but I finally found that forwarding packets is a core feature of avahi, the Linux mDNS daemon! It’s called the reflector. What a nice name for a feature! Did I waste my time? No. Each and every learning opportunity is a blessing.

# /etc/avahi/avahi-daemon.conf
# Enable relaying packets over all local interfaces
[reflector]
enable-reflector=yes

At this point, I only had to open ports 80, 443, 51827, and 52934 for HomeKit devices to be available from my trusted devices. Some of these ports might be superfluous, but I need to keep some tidying work for future lockdowns.
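
On OpenWRT, opening those ports translates to a firewall rule along these lines. The zone names below are assumptions matching my three-network setup, and the syntax is the UCI /etc/config/firewall format:

```
# /etc/config/firewall (excerpt; zone names 'trusted' and 'iot' are assumed)
config rule
	option name 'Allow-HomeKit-from-trusted'
	option src 'trusted'
	option dest 'iot'
	option proto 'tcp'
	option dest_port '80 443 51827 52934'
	option target 'ACCEPT'
```

A `/etc/init.d/firewall reload` applies the rule without dropping existing connections.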

I now have 3 isolated SSIDs and I can use the IoT devices that I enjoy.

Solving Doom void-function auth-sources-user-and-password error at launch

March 20, 2021 · Blog

I use Emacs both on my laptop (a Mac) and on my desktop (a Linux box). Recently, when launching Emacs on my Mac, I got the following error:

Error in private config: config.el, (void-function auth-source-user-and-password)

Being an Emacs noob, I was a bit puzzled: from what I understood, auth-source is a default package, and this function should have been available. I joined the Doom Discord server, where I found help in just a few minutes.

From what I understand, Doom optimizes Emacs a lot by lazy-loading packages. This is why it is so quick to launch (and a joy to use). In my config.el file, I was calling auth-source-user-and-password directly. This was the source of the problem. When configuring packages in Doom, make sure you use hooks such as after!, which run the code once a package has loaded. In my case, it looked like this in the end:

(after! auth-source
    (setq foobar-credentials (auth-source-user-and-password "foobar")))

This solved the error at launch and let me use the credentials in my configuration.

Listen: Silhouettes by Shadow Show

March 4, 2021 · Blog

Shadow Show - Silhouettes

Scrolling through the infinity of what has become Bandcamp Daily, I discovered a remarkable psychedelic rock album from a band called Shadow Show. This trio from Detroit delivers a solid 10-track album. Kate Derringer (also the trio’s bass player) did great work on the mix (as well as on the bass!). It sounds like it comes straight from the 60s, but with a cleaner touch. The vocal harmonies are especially great as well. I can’t wait to hear the vinyl.

Blogging from Emacs

February 28, 2021 · Blog

To my very own surprise, after a decade of being a vim user, I have become a Doom Emacs user. This is the best of both worlds. I keep my vim bindings (which I love in a Stockholm syndrome way), and I get the amazing Emacs operating system at hand. Look at me now, blogging to my WordPress instance from org2blog using my vim muscle memory.

Doom Emacs is insanely great. It takes a bit of time to get used to, but it’s a very rewarding thing to do. Org Mode is simply great and deserves to have more users.


March 29, 2020 · Blog


I am super happy to announce that my friend Flore’s album RITUALS is finally around the corner. It will be released April 7th on POLAAR in digital and on vinyl. Thanks to numerous supporters, the crowdfunding for its vinyl release met its funding goal in just a few hours. At the time of writing, it is funded at 175%. The album has received a large number of positive reviews from press and radio. A terrific remix EP is on its way as well, with features from artists we would only have dreamt of working with just a few years back.

RITUALS is the culmination of more than 6 years of artistic work for Flore. It is also the culmination of 6 years of hard work on POLAAR. It is impossible to dissociate these two endeavors. During all these years, Flore has relentlessly poured her energy into both – after all, the label’s inaugural release was her own RITUAL Part. 1. As a friend, I am so proud of what she has achieved, and of the consistency she has shown over the past 20 years.

Her album is a treat, and I am so glad we are releasing it on our own imprint.

Slides: Modular Architecture at Symfony Live 2018 Paris

March 29, 2018 · Blog

I had the pleasure of giving a talk at Symfony Live 2018 in Paris. Here are the slides.

Architecture modulaire grâce à Symfony et l’écosystème open-source

The advances in recent versions of Symfony and the maturity of the open-source ecosystem make it possible to build high-quality modular architectures with different development paradigms. Using AudienceHero as a case study, we will see how to take advantage of the best software building blocks (Autoconfiguration, Autowiring, ApiPlatform, React Admin, etc.) to create complete modular systems.

The PDF is available here.

Links Worth Sharing #9

March 23, 2018 · Blog > Nebula


What’s up? Doing good? Here are some links I think are worth sharing.

  1. Cultivated rather than rich: the «classe ambitieuse» is changing our relationship to consumption. Why eating ugly vegetables ended up mattering more than the brand of our car.
  2. Sally West paints beach landscapes. She works on texture to create stylish renditions of white wash.
  3. If you’re into architecture and unaffordable homes, check out Netflix’s The World’s Most Extraordinary Homes documentary series. My favorite is the one in the Arizona desert. What a view!
  4. A chilling reveal: 50 million Facebook profiles harvested for Cambridge Analytica in a major data breach. They were used to break society and change people’s psychology, in order to change politics.
  5. Edward Snowden: Facebook is a surveillance company rebranded as ‘social media’.
  6. #deletefacebook. “I would wager I use Facebook more to broadcast my ego than interact with real humans. And I suspect that most of us are in a similar situation”. You can use this nice browser extension to delete old Facebook data. Don’t forget to ask for a backup first so you get to keep everything.
  7. Autonomous vehicles, aka self-driving cars, will change where people choose to live. As a result, ~$1 trillion of real estate is on the move … here’s why. (Hint: it appears it has already started in the US.)
  8. Suprême NTM – Anthologie. The best tracks of the French hip-hop group NTM. Perfect.
  9. I don’t really know why I like No Age – Snares Like a Haircut, but I am not alone, as Drowned In Sound‘s Marc Burrows explains it much better than me: “After 40 minutes you’re still not totally sure what it is you’ve listened to. This could be great, messy pop music, or just as easily be something you dreamt, dozing in post-coital bliss with a detuned radio in the background.“.
  10. I had a good laugh watching Ricky Gervais’ latest show, “Humanity“.

Thanks for reading!


P.S.: You can receive this directly in your inbox. Drop me an email and I’ll send it to you every week.

Configuring Nginx to serve a Symfony project in a subdirectory of a WordPress website

January 23, 2018 · Blog

It is surprisingly non-trivial to configure Nginx to serve two different PHP applications on the same domain, one being in a logical subdirectory.

I spent a few hours scouring the web and trying different things. I ended up with this configuration, thanks to a source of which I lost track (sorry!).

The configuration file defines a server listening on port 80. The WordPress application is located in /var/www/wordpress and the Symfony application is located in /var/www/symfony.

When a browser requests a resource under /subdirectory, the request is passed to the Symfony app. Otherwise, the request goes to the WordPress app.

server {
	listen 80;
	listen [::]:80;

	root /var/www/wordpress;
	index index.php app.php index.html;

	set $symfonyRoot /var/www/symfony/web;
	set $symfonyScript app.php;

	location /subdirectory {
		root $symfonyRoot;
		rewrite ^/subdirectory/(.*)$ /$1 break;
		try_files $uri @symfonyFront;
	}

	location / {
		try_files $uri $uri/ /index.php?$args;
	}

	# This is for the Symfony application
	location @symfonyFront {
		fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
		include /etc/nginx/fastcgi_params;
		fastcgi_param SCRIPT_FILENAME $symfonyRoot/$symfonyScript;
		fastcgi_param SCRIPT_NAME /subdirectory/$symfonyScript;
		fastcgi_param REQUEST_URI /subdirectory$uri?$args;
	}

	# This is for the WordPress app
	location ~ \.php {
		fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
		fastcgi_index index.php;
		fastcgi_param PATH_INFO $fastcgi_path_info;
		fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
		fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
		fastcgi_param REQUEST_URI $uri?$args;
		include /etc/nginx/fastcgi_params;
	}
}
I also created a public gist with this configuration file.

A Lyon-Tokyo round trip in one hour

December 15, 2017 · Blog

Leaving the office, my nerves are in knots and my brain is grinding. I need to break out of my routine. I need the city to give me the best it has. I want a restaurant. A good one. Not far. Not stuffy. Fast.

She suggests Chez Terra, a Japanese bistro in the 6th arrondissement I’ve heard nothing but good things about. Apparently the Japanese consul is a regular there. For a year we’ve failed to find the right moment to go, and tonight doesn’t seem like the right one either. I picture some unremarkable tonkatsu and dismiss the idea, claiming I feel like eating something else. What an idiot. I call every other restaurant I can think of to book a table. No luck. I call Terra. We’re seated fifteen minutes later.

The room is simple. “Unrenovated”, as the real-estate cliché would have it. Simple, spacious, and convivial all at once. I feel good here. What matters tonight is what lands on my plate.

The service is friendly but above all efficient. No fuss. All the better: I don’t want to feel like a guest, I want to feel almost at home.

We order dishes and pints. They arrive. Fast.

Glazed spinach salad with dried bonito shavings. A surprising opener. My taste buds are already sulking less.
Sweet-and-savoury pork. The fat melts in the mouth. A treat for the eyes and for the palate.
The portions are small; I’m afraid I’ll still be hungry at the end of the meal.
Mackerel sushi. I have rarely seen an uglier sushi, and rarely eaten a better one.
My taste buds fall into harmony. The endorphins rise.
Tea-braised pork in a tangy sauce. Wow, I am blown away.
I order gyozas out of curiosity. And fear of running out. No regrets: flawless. We finish with a green-tea crème brûlée. A good ending for someone who isn’t a big fan of Japanese desserts.

In one hour, this quickie of a restaurant turned my mood around. The slightly steep bill brought me back down, but didn’t darken the gastronomic memory. Without turning it into a canteen, I will go back to Chez Terra with immense pleasure.

Chez Terra
81 rue Dusguesclin
69006 Lyon

Finally understanding the candidates’ platforms, thanks to Twitter bots

April 11, 2017 · Blog

You know how it is: this pre-2017-presidential-election period is a daily delight. I made the mistake of trying to follow the candidates on Twitter, which reminded me of a superb video by Franck Lepage that a friend had the good taste to share with me. Wanting to show good taste myself, I’m sharing it with you in turn:

Twitter bots apparently being a hobby I can’t shake off (I take near-full responsibility for the creation of these bots: @PassionSirene, @PassionTocsin, @PassionMouton, @pdesproges), I figured the world was missing the algorithmic counterparts of the presidential candidates. Except one, for certain reasons. Here they are, in no particular order:

And the meta account: @2017_ebooks, which retweets every tweet from every account.

A bit like a debate on TF1, I didn’t include all of them (because, I confess, laziness is like the Force: it’s powerful). A small selection:

So there you have it: we clone politicians, feed their public statements into robots, and an algorithm generates surreal sentences.

What do we learn from it?

I’m not quite sure, but at least I’m having a good laugh!

A musical palindrome, inspired by the cosmos

April 11, 2017 · Blog

Last weekend I stumbled upon this musical gem.

In this video, Daniel Starr-Tambor (proudly sporting a nice beard-and-cap combo) unveils a unique creation: the longest musical palindrome in the world. Yes, really. Through a music-theory sleight of hand (which I confess I don’t master at all), he assigns each planet a different musical note.

Mercury gets a B.

Pluto gets a C# a little over two octaves higher.

(For everything in between, look at the score in the middle of the video.)

Then comes further proof that this fellow is rather clever.

He mathematically computes each note’s repetition frequency with an equation of his own making: the planet’s orbital period × 15,779,059.2 = repetition frequency. (Yes, sir.)

Since the musical-planetary alignment eventually comes around (at least I’ll take his word for it), the piece turns into a palindrome.

Magnificent, I tell you!

Introducing htail, a new debugging companion

September 28, 2015 · Blog

Modern software development usually involves a lot of moving parts. If you work on a web application, you might interact with: a reverse proxy, a web server, an application server, a relational database, a key-value store, and so on.

When a problem arises, the debugging procedure starts. You follow your intuition and start searching where you think the problem lies. Usually, it’s not where you think it is, but somewhere else. What if you could have a view on your whole system while reproducing your issue? You might get the chance to spot, in one obscure log file, where the problem actually lies.

After finding myself in this situation so many times, I decided to build a simple tool to read all my log files at once. It’s called htail, and it’s available right now.

By firing up htail, you get your most common log files in your terminal or your browser window. It’s damn useful.

Installation and usage are described on the GitHub page.