From 890b34bcc1a6b4073d1e512b1386634f7bc5ea52 Mon Sep 17 00:00:00 2001 From: "Adam T. Carpenter" Date: Wed, 21 Apr 2021 22:57:39 -0400 Subject: unified posts dir, until I can figure out makefile sub-subdirs. makefile auto-generates index --- ...est-way-to-transfer-gopro-files-with-linux.html | 127 ------- ...9-28-my-preferred-method-for-data-recovery.html | 276 --------------- .../2020-07-26-now-this-is-a-minimal-install.html | 101 ------ ...n-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html | 375 --------------------- ...w-to-automate-certbot-renewal-with-haproxy.html | 256 -------------- 5 files changed, 1135 deletions(-) delete mode 100644 posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html delete mode 100644 posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html delete mode 100644 posts/unix/2020-07-26-now-this-is-a-minimal-install.html delete mode 100644 posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html delete mode 100644 posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html (limited to 'posts/unix') diff --git a/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html b/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html deleted file mode 100644 index bbe5b28..0000000 --- a/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html +++ /dev/null @@ -1,127 +0,0 @@ - - - - - - - - - - - - - 53hornet ➙ Offloading GoPro Footage the Easy Way! - - - - - -
-

Offloading GoPro Footage the Easy Way!

- -

- Transferring files off of most cameras to a Linux computer isn't all that difficult. The exception is my GoPro Hero 4 Black. For 4th of July week I took a bunch of video with the GoPro: approximately 20 MP4 files, about 3GB each. The annoying thing about the GoPro's USB interface is that you need additional software to download everything through the cable. The camera doesn't just show up as a USB filesystem that you can mount. The GoPro does have a micro-SD card, but I was away from home and didn't have any dongles or adapters. Both of these solutions also mean taking the camera out of its waterproof case and off of its mount. So here's what I did.

- -

- GoPro cameras, after the Hero 3, can open up an ad-hoc wireless network - that lets you browse the GoPro's onboard files through an HTTP server. - This means you can open your browser and scroll through the files on the - camera at an intranet address, 10.5.5.9, and download them - one by one by clicking every link on every page. If you have a lot of - footage on there it kinda sucks. So, I opened up the manual for - wget. I'm sure you could get really fancy with some of the - options but the only thing I cared about was downloading every single - MP4 video off of the camera, automatically. I did not want to download - any of the small video formats or actual HTML files. Here's what I used: -

- -
-        
-wget --recursive --accept "*.MP4" http://10.5.5.9:8080/
-		
-      
- -

- This tells wget to download all of the files at the GoPro's - address recursively and skips any that don't have the MP4 extension. Now - I've got a directory tree with all of my videos in it. And the best part - is I didn't have to install the dinky GoPro app on my laptop. Hopefully - this helps if you're looking for an easy way to migrate lots of footage - without manually clicking through the web interface or installing - additional software. The only downside is if you're moving a whole lot - of footage, it's not nearly as quick as just moving files off the SD - card. So I'd shoot for using the adapter to read off the card first and - only use this if that's not an option, such as when the camera is - mounted and you don't want to move it. -

- -

Some things I would like to change/add:

- - - -

- I could probably write a quick and dirty shell script to do all of this - for me but I use the camera so infrequently that it's probably not even - worth it. -
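If I ever do write that quick and dirty script, a minimal sketch might look like this. The `offload_gopro` helper, the dated destination folder, and the extra wget flags (`--no-parent` to stay inside the camera's media tree, `-nd` to flatten the directory structure) are my own additions, not anything GoPro documents:

```shell
#!/bin/sh
# Sketch of a quick-and-dirty offload script. offload_gopro and the
# dated destination directory are hypothetical conveniences.
offload_gopro() {
    url="${1:-http://10.5.5.9:8080/}"          # the GoPro's ad-hoc HTTP server
    dest="${2:-$HOME/gopro-$(date +%Y-%m-%d)}" # dated dump directory
    mkdir -p "$dest" || return 1
    # --no-parent stays below the starting URL; -nd flattens the tree
    ( cd "$dest" && wget --recursive --no-parent -nd --accept "*.MP4" "$url" )
}

# offload_gopro                 # use the defaults above
# offload_gopro http://10.5.5.9:8080/ /tmp/footage
```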

-
- - diff --git a/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html b/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html deleted file mode 100644 index 9751eda..0000000 --- a/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html +++ /dev/null @@ -1,276 +0,0 @@ - - - - - - - - - - - - - 53hornet ➙ How I Do Data Recovery - - - - - -
-

How I Do Data Recovery

- -

- This week Amy plugged in her flash drive to discover that there were no files on it. Weeks before there had been dozens of large cuts of footage that she needed to edit down for work. Hours of recordings were seemingly gone. And the most annoying part was that the drive had worked perfectly on several other occasions, just not now that the footage was actually needed, of course. Initially it looked like everything had been wiped clean; however, both Amy's Mac and her PC thought the drive was half full. Its overall capacity was 64GB but it showed only about 36GB free. So there still had to be data on there if we could find the right tool to salvage it.

- -

- Luckily this wasn't the first time I had to recover accidentally (or magically) deleted files. I had previously done so with some success at my tech support job, for some college friends, and for my in-laws' retired laptops. So I had a pretty clear idea of what to expect. The only trick was finding a tool that knew what files it was looking for. The camera that took the video clips was a Sony, and apparently they record into m2ts files, which are kind of a unique format in that they only show up on Blu-ray discs and Sony camcorders. Enter my two favorite tools for dealing with potentially-destroyed data: ddrescue and photorec.

- -

DDRescue

- -

- ddrescue is a godsend of a tool. If you've ever used dd before, forget about it. Use ddrescue. You might as well alias dd=ddrescue because it's that great. It has a plethora of additional options, displays progress as it works by default, recovers and retries in the event of I/O errors, and does everything that good old dd can do. It's particularly good at protecting partitions or disks that have been corrupted or damaged, because it rescues undamaged portions first. Oh, and have you ever had to cancel a dd operation? Did I mention that ddrescue can pause and resume operations? It's that good.

- -

PhotoRec

- -

- photorec is probably the best missing-file recovery tool I've ever used in my entire life. And I've used quite a few. I've never had results as good with other tools like Recuva et al. as I've had with photorec. And photorec isn't just for photos; it can recover documents (a la the Office suite), music, images, config files, and videos (including the very odd m2ts format!). The other nice thing is photorec will work on just about any source. It's also free software, which makes me wonder why there are $50 recovery tools for Windows that look super sketchy.

- -

In Practice

- -

- So here's what I did to get Amy's files back. Luckily she didn't write - anything out to the drive afterward so the chances (I thought) were - pretty good that I would get something back. The first thing I - always do is make a full image of whatever media I'm trying to recover - from. I do this for a couple of reasons. First of all it's a backup. If - something goes wrong during recovery I don't have to worry about the - original, fragile media being damaged or wiped. Furthermore, I can work - with multiple copies at a time. If it's a large image that means - multiple tools or even multiple PCs can work on it at once. It's also - just plain faster working off a disk image than a measly flash drive. So - I used ddrescue to make an image of Amy's drive. -

- -

-$ sudo ddrescue /dev/sdb1 amy-lexar.dd
-GNU ddrescue 1.24
-Press Ctrl-C to interrupt
-     ipos:   54198 kB, non-trimmed:        0 B,  current rate:   7864 kB/s
-     opos:   54198 kB, non-scraped:        0 B,  average rate:  18066 kB/s
-non-tried:   63967 MB,  bad-sector:        0 B,    error rate:       0 B/s
-  rescued:   54198 kB,   bad areas:        0,        run time:          2s
-pct rescued:    0.08%, read errors:        0,  remaining time:         59m
-                              time since last successful read:         n/a
-Copying non-tried blocks... Pass 1 (forwards)
-	  
- -

- The result was a very large partition image that I could fearlessly play - around with. -

- -
-		
-$ ll amy-lexar.dd
--rw-r--r-- 1 root root 60G Sep 24 02:45 amy-lexar.dd
-        
-	  
- -

- Then I could run photorec on the image. This brings up a - TUI with all of the listed media that I can try and recover from. -

- -

-$ sudo photorec amy-lexar.dd
-
-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
-  PhotoRec is free software, and
-comes with ABSOLUTELY NO WARRANTY.
-
-Select a media (use Arrow keys, then press Enter):
->Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
-
->[Proceed ]  [  Quit  ]
-
-Note:
-Disk capacity must be correctly detected for a successful recovery.
-If a disk listed above has incorrect size, check HD jumper settings, BIOS
-detection, and install the latest OS patches and disk drivers.
-	  
- -

- After hitting Proceed, photorec asks if you want to scan just a particular partition or the whole disk (if you made a whole-disk image). I can usually get away with just selecting the partition I know the files are on and starting a search.

- -

-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
-Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
-
-     Partition                  Start        End    Size in sectors
-      Unknown                  0   0  1  7783 139  4  125042656 [Whole disk]
->   P FAT32                    0   0  1  7783 139  4  125042656 [NO NAME]
-
->[ Search ]  [Options ]  [File Opt]  [  Quit  ]
-                              Start file recovery
-	  
- -

- Then photorec asks a couple of questions about the - formatting of the media. It can usually figure them out all by itself so - I just use the default options unless it's way out in left field. -

- -

-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
-   P FAT32                    0   0  1  7783 139  4  125042656 [NO NAME]
-
-To recover lost files, PhotoRec need to know the filesystem type where the
-file were stored:
- [ ext2/ext3 ] ext2/ext3/ext4 filesystem
->[ Other     ] FAT/NTFS/HFS+/ReiserFS/...
-	  
- -

- Now this menu is where I don't just go with the default path. - photorec will offer to search just unallocated space or the - entire partition. I always go for the whole partition here; sometimes - I'll get back files that I didn't really care about but more often than - not I end up rescuing more data this way. In this scenario searching - just unallocated space found no files at all. So I told - photorec to search everything. -

- -

-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
-   P FAT32                    0   0  1  7783 139  4  125042656 [NO NAME]
-
-
-Please choose if all space need to be analysed:
- [   Free    ] Scan for file from FAT32 unallocated space only
->[   Whole   ] Extract files from whole partition
-	  
- -

- Now it'll ask where you want to save any files it finds. I threw them - all into a directory under home that I could zip up and send to Amy's - Mac later. -

- -

-PhotoRec 7.0, Data Recovery Utility, April 2015
-
-Please select a destination to save the recovered files.
-Do not choose to write the files to the same partition they were stored on.
-Keys: Arrow keys to select another directory
-      C when the destination is correct
-      Q to quit
-Directory /home/adam
- drwx------  1000  1000      4096 28-Sep-2019 12:10 .
- drwxr-xr-x     0     0      4096 26-Jan-2019 15:32 ..
->drwxr-xr-x  1000  1000      4096 28-Sep-2019 12:10 amy-lexar-recovery
-	  
- -

- And then just press C. photorec will start copying all of the files it finds into that directory. It reports what kinds of files it found and how many it was able to locate. I was able to recover all of Amy's lost footage this way, along with some straggler files that had been on the drive at one point. This has worked for me many times in the past, both on newer devices like flash drives and on super old, sketchy IDE hard drives. I probably won't ever pay for data recovery unless a drive has been physically damaged in some way. In other words, this software works great for me and I don't foresee the need for anything else out there. It's simple to use and is typically pretty reliable.
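For next time, the whole workflow condenses into a small script. This is a sketch, not what I actually ran: the `recover_drive` name and paths are made up, and the `/d` and `/cmd` flags (photorec's scripted, non-TUI mode) are worth double-checking against your version's man page:

```shell
#!/bin/sh
# Sketch: image the fragile media first, then carve files out of the image.
# recover_drive and all paths below are hypothetical examples.
recover_drive() {
    src="$1"    # e.g. /dev/sdb1 (the fragile media)
    img="$2"    # e.g. $HOME/amy-lexar.dd
    out="$3"    # where recovered files land (NOT on the source drive)

    # The .map file lets an interrupted ddrescue run resume where it left off.
    ddrescue "$src" "$img" "$img.map" || return 1
    mkdir -p "$out" || return 1
    # photorec's scripted mode; "search" starts recovery without the TUI.
    photorec /log /d "$out" /cmd "$img" search
}

# recover_drive /dev/sdb1 "$HOME/amy-lexar.dd" "$HOME/amy-lexar-recovery"
```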

-
- - diff --git a/posts/unix/2020-07-26-now-this-is-a-minimal-install.html b/posts/unix/2020-07-26-now-this-is-a-minimal-install.html deleted file mode 100644 index 64652a7..0000000 --- a/posts/unix/2020-07-26-now-this-is-a-minimal-install.html +++ /dev/null @@ -1,101 +0,0 @@ - - - - - - - - - - - - - 53hornet ➙ Now This is a Minimal Install! - - - - - -
-

Now This is a Minimal Install!

- -

- I just got done configuring Poudriere on FreeBSD 12.1-RELEASE. The awesome thing about it is that it allows you to configure and maintain your own package repository. All of the ports and their dependencies are built from source with personalized options. That means I can maintain my own repo of just the packages I need, with just the compile-time options I need. For example, for the Nvidia driver set I disabled all Wayland-related flags. I use Xorg, so there was no need to have that functionality built in.

- -

- Compile times are pretty long, but I hope to change that by upgrading my home server to FreeBSD as well (from Ubuntu Server). Then I can configure poudriere to serve up a ports tree and my own pkg repo from there. The server is a lot more powerful than my laptop, so it will build packages much faster, and I'll be able to use those packages on the server, on my laptop, and in any jails I have running. Jails (and ZFS) also make poudriere really cool to use, as all of the building is done inside a jail. When the time comes I can just remove the jail and poudriere ports tree from my laptop and update pkg to point to my web server.

- -

- This is, as I understand it, the sane way to do package management in FreeBSD. The binary package repo is basically the ports tree pre-assembled with default options. Sometimes those packages are compiled without functionality that only some users need. In those situations, you're forced to use ports. The trouble is you're not really supposed to mix ports and binary packages. The reason, again as I understand it, is that ports are updated more frequently. So binary packages and ports can have different dependency versions, which can sometimes break compatibility on an upgrade. Most FreeBSD users recommend installing everything with ports (which is just a make install inside the local tree), but then you lose the package management features that come with pkg. Poudriere lets you kind of do both by creating your "own personal binary repo" out of a list of preconfigured, pre-built ports.
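The first-time setup boils down to a handful of poudriere subcommands. A sketch follows; the jail name (121amd64), ports tree name, and package list path are examples I made up, so adjust them to taste and check the flags against poudriere(8):

```shell
#!/bin/sh
# Sketch of a first-time poudriere setup. The jail name (121amd64),
# ports tree name (default), and pkglist path are hypothetical examples.
setup_poudriere() {
    poudriere jail -c -j 121amd64 -v 12.1-RELEASE || return 1  # create the build jail
    poudriere ports -c -p default || return 1                  # fetch a ports tree
    # set compile-time options for a port, e.g. the Nvidia driver
    poudriere options -j 121amd64 x11/nvidia-driver || return 1
    # build everything in the list; results land in a pkg(8)-compatible repo
    poudriere bulk -j 121amd64 -p default -f /usr/local/etc/poudriere.d/pkglist
}

# setup_poudriere
```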

- -

FreeBSD rocks.

-
- - diff --git a/posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html b/posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html deleted file mode 100644 index 6f515f3..0000000 --- a/posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html +++ /dev/null @@ -1,375 +0,0 @@ - - - - - - - - - - - - - 53hornet ➙ Root on ZFS: A ZPool of Mirror VDEVs The Easy Way - - - - - -
-

Root on ZFS: A ZPool of Mirror VDEVs

- -

- I wanted/needed to make a root on ZFS pool out of multiple mirror VDEVs, - and since I'm not a ZFS expert, I took a little shortcut. -

- -

- I recently got a new-to-me server (yay!) and I wanted to do a root-on-ZFS setup on it. I've really enjoyed using ZFS for my data storage pools for a long time. I've also enjoyed the extra functionality that comes with having a bootable system installed on ZFS on my laptop, and decided with this upgrade it's time to do the same on my server. Historically I've used RAIDZ for my storage pools. RAIDZ functions a lot like a traditional parity RAID (think RAID5), but at the ZFS level. It gives you parity so that a certain number of disks in your pool can die and you won't lose any data. It does have a few tradeoffs however*, and for personal preference I've decided that going forward I would like to have a single ZPool over top of multiple mirror VDEVs. In other words, my main root+storage pool will be made up of two-disk mirrors and can be expanded to include any number of new mirrors I can fit into the machine.

- -

- This did present some complications. First of all, - bsdinstall won't set this up for you automatically (and - sure enough, - in the handbook - it mentions the guided root on ZFS tool will only create a single, - top-level VDEV unless it's a stripe). It will happily let you use RAIDZ - for your ZROOT but not the more custom approach I'm taking. I did - however use - bsdinstall as a shortcut so I wouldn't have to do all of - the partitioning and pool setup manually, and that's what I'm going to - document below. Because I'm totally going to forget how this works the - next time I have to do it. -

- -

- In my scenario I have an eight-slot, hot-swappable PERC H310 controller - that's configured for AHCI passthrough. In other words, all FreeBSD sees - is as many disks as I have plugged into the backplane. I'm going to fill - it with 6x2TB hard disks which, as I said before, I want to act as three - mirrors (two disks each) in a single, bootable, growable ZPool. For - starters, I shoved the FreeBSD installer on a flash drive and booted - from it. I followed all of the regular steps (setting hostname, getting - online, etc.) until I got to the guided root on ZFS disk partitioning - setup. -

- -

- Now here's where I'm going to take the first step on my shortcut. Since - there is no option to create the pool of arbitrary mirrors I'm just - going to create a pool from a single mirror VDEV of two disks. Later I - will expand the pool to include the other two mirrors I had intended - for. My selections were as follows: -

- - - -

- Everything else was left as a default. Then I followed the installer to - completion. At the end, when it asked if I wanted to drop into a shell - to do more to the installation, I did. -

- -

- The installer created the following disk layout for the two disks that I - selected. -

- -
-
-atc@macon:~ % gpart show
-=>        40  3907029088  mfisyspd0  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-=>        40  3907029088  mfisyspd1  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-
- -

- The installer also created the following ZPool from my single mirror - VDEV. -

- -
-
-atc@macon:~ % zpool status
-  pool: zroot
- state: ONLINE
-  scan: none requested
-config:
-
-	NAME             STATE     READ WRITE CKSUM
-	zroot            ONLINE       0     0     0
-	  mirror-0       ONLINE       0     0     0
-	    mfisyspd0p3  ONLINE       0     0     0
-	    mfisyspd1p3  ONLINE       0     0     0
-
-errors: No known data errors
-
-
- -

- There are a couple of things to take note of here. First of all, - both disks in the bootable ZPool have an EFI boot partition. - That means they're both a part of (or capable of?) booting the pool. - Second, they both have some swap space. Finally, they both have a third - partition which is dedicated to ZFS data, and that partition is what got - added to my VDEV. -

- -

- So where do I go from here? I was tempted to just zpool add mirror ... ... to add my other disks to the pool (actually, I did do this, but it rendered the volume unbootable for a very important reason): with whole-disk mirror VDEVs I wouldn't have those all-important boot partitions. Instead, I need to go back and re-partition the four remaining disks exactly like the first two. Or, since all I want is two more of what's already been done, I can just clone the partitions using gpart backup and restore! Easy! Here's what I did for all four remaining disks:

- -
-
-root@macon:~ # gpart backup mfisyspd0 | gpart restore -F mfisyspd2
-
-
- -

- Full disclosure, I didn't even think of this as a possibility - until I read this Stack Exchange post. This gave me a disk layout like this: -

- -
-
-atc@macon:~ % gpart show
-=>        40  3907029088  mfisyspd0  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-=>        40  3907029088  mfisyspd1  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-=>        40  3907029088  mfisyspd2  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-=>        40  3907029088  mfisyspd3  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-=>        40  3907029088  mfisyspd4  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-=>        40  3907029088  mfisyspd5  GPT  (1.8T)
-          40      409600          1  efi  (200M)
-      409640        2008             - free -  (1.0M)
-      411648     8388608          2  freebsd-swap  (4.0G)
-     8800256  3898228736          3  freebsd-zfs  (1.8T)
-  3907028992         136             - free -  (68K)
-
-
- -

- And to be fair, this makes a lot of logical sense. You don't want a - six-disk pool to only be bootable by two of the disks or you're - defeating some of the purposes of redundancy. So now I can extend my - ZPool to include those last four disks. -

- -

- This next step may or may not be a requirement. I wanted to overwrite - where I assumed any old ZFS/ZPool metadata might be on my four new - disks. This could just be for nothing and I admit that, but I've run - into trouble in the past where a ZPool wasn't properly - exported/destroyed before the drives were removed for another purpose - and when you use those drives in future - zpool imports, you can see both the new and the old, failed - pools. And, in the previous step I cloned an old ZFS partition many - times! So I did a small dd on the remaining disks to help - me sleep at night: -

- -
-
-root@macon:~ # dd if=/dev/zero of=/dev/mfisyspd2 bs=1M count=100
-
-
- -

- One final, precautionary step is to write the EFI boot loader to the new disks. The zpool admin handbook mentions you should do this any time you replace a zroot device, so I'll do it just for good measure on all four additional disks:

- -
-
-root@macon:~ # gpart bootcode -p /boot/boot1.efifat -i 1 mfisyspd2
-
-
- -

- Don't forget that the command is different for UEFI and a traditional - BIOS. And finally, I can add my new VDEVs: -

- -
-
-root@macon:~ # zpool add zroot mirror mfisyspd2p3 mfisyspd3p3
-root@macon:~ # zpool add zroot mirror mfisyspd4p3 mfisyspd5p3
-
-
- -

And now my pool looks like this:

- -
-
-atc@macon:~ % zpool status
-  pool: zroot
- state: ONLINE
-  scan: none requested
-config:
-
-	NAME             STATE     READ WRITE CKSUM
-	zroot            ONLINE       0     0     0
-	  mirror-0       ONLINE       0     0     0
-	    mfisyspd0p3  ONLINE       0     0     0
-	    mfisyspd1p3  ONLINE       0     0     0
-	  mirror-1       ONLINE       0     0     0
-	    mfisyspd2p3  ONLINE       0     0     0
-	    mfisyspd3p3  ONLINE       0     0     0
-	  mirror-2       ONLINE       0     0     0
-	    mfisyspd4p3  ONLINE       0     0     0
-	    mfisyspd5p3  ONLINE       0     0     0
-
-errors: No known data errors
-
-
- -

- Boom. A growable, bootable zroot ZPool. Is it easier than just configuring the partitions and root on ZFS by hand? Probably not for a BSD veteran. But since I'm a BSD layman, this is something I can live with pretty easily. At least until this becomes an option in bsdinstall, maybe? At least now I can add as many more mirrors as I can fit into my system. And it's just as easy to replace them. This is better for me than my previous RAIDZ, where I would have had to destroy and re-create the pool in order to add more disks to the VDEV. Now I just create another little mirror and grow the pool, and all of my filesystems just see more storage. And of course, having ZFS for all of my data makes it super easy to create filesystems on the fly, compress or quota them, and take snapshots (including of the live ZROOT!) and send those snapshots over the network. Pretty awesome.
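That snapshot-and-send trick can be sketched in a few lines. The snapshot tag, the backuphost hostname, and the tank/macon target dataset below are all made-up examples, not part of my actual setup:

```shell
#!/bin/sh
# Sketch: recursively snapshot the live zroot and replicate it elsewhere.
# The tag, the "backuphost" machine, and "tank/macon" are hypothetical.
backup_zroot() {
    tag="backup-$(date +%Y-%m-%d)"
    zfs snapshot -r "zroot@$tag" || return 1
    # -R sends the whole dataset tree under zroot; on the far side,
    # -u keeps the received datasets unmounted and -d maps their names
    # under the target dataset.
    zfs send -R "zroot@$tag" | ssh backuphost zfs receive -u -d tank/macon
}

# backup_zroot
```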

- -

- * I'm not going to explain why here, but - this is a pretty well thought out article - that should give you an idea about the pros and cons of RAIDZ versus - mirror VDEVs so you can draw your own conclusions. -

-
- - diff --git a/posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html b/posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html deleted file mode 100644 index 634530b..0000000 --- a/posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html +++ /dev/null @@ -1,256 +0,0 @@ - - - - - - - - - - - - - 53hornet ➙ How to Automate Certbot Renewal with HAProxy - - - - - -
-

How to Automate Certbot Renewal with HAProxy

- -

- So this is specifically for HAProxy on FreeBSD, but it should apply to other *nix systems as well. Basically, I use HAProxy as a reverse proxy to a bunch of servers I administer. I use Let's Encrypt for a certificate, and I used certbot to generate that certificate. Generating the certificate for the first time is easy and has lots of documentation, but it wasn't initially clear how I could easily set up auto-renewal. Here's how I did it.

- -

- If you've already set up TLS termination with HAProxy and - certbot, you know you need to combine your Let's Encrypt - fullchain and private key to get a combined certificate that HAProxy can - use. You can do this by cat-ing the chain and key together - like so: -

- -
-
-cat /usr/local/etc/letsencrypt/live/$SITE/fullchain.pem /usr/local/etc/letsencrypt/live/$SITE/privkey.pem > /usr/local/etc/ssl/haproxy.pem
-
-	  
- -

- In this example, $SITE is your domain name that you fed - HAProxy when you created the certificate and haproxy.pem is - wherever you're storing HAProxy's combined certificate. Your HAProxy - config then points to that certificate like this: -

- -
-
-macon% grep crt /usr/local/etc/haproxy.conf
-        bind *:443 ssl crt /usr/local/etc/ssl/haproxy.pem
-
-	  
- -

- And that was the end of the first-time setup. Then a few months later you probably had to do it all again, because Let's Encrypt certs are only good for 90 days between renewals. To renew the certificate, you usually run certbot renew, which detects which certificates are present and uses either the webroot or standalone server renewal process. Then you have to cat the fullchain and privkey together and restart HAProxy so it starts using the new certificate.

- -

- To automate those steps, newer versions of - certbot will run any post renewal hooks (read: scripts) - that you want. You can also configure HAProxy and - certbot to perform the ACME challenge dance for renewal so - that you don't have to use it interactively. -

- -

- First, if you haven't already done it, change your HAProxy config so - there's a frontend+backend for responding to ACME challenges. In a - frontend listening for requests, create an access control list for any - request looking for /.well-known/acme-challenge/. Send - those requests to a backend server with an unused local port. -

- -
-
-frontend http-in
-        acl letsencrypt-acl path_beg /.well-known/acme-challenge/
-        use_backend letsencrypt-backend if letsencrypt-acl
-        ...
-backend letsencrypt-backend
-        server letsencrypt 127.0.0.1:54321
-
-	  
- -

- What this does is allow certbot and Let's Encrypt to renew your certificate in standalone mode via your reverse proxy. As an added bonus, it saves you from having to open up an additional port on your firewall.

- -

- Now you've gotta configure certbot to do just that. A config file was created in certbot's renewal directory for your site. All you need to do in that file is add a line to the [renewalparams] section specifying the port you're using in your HAProxy config.

- -
-
-macon% echo "http01_port = 54321" >> /usr/local/etc/letsencrypt/renewal/$SITE.conf
-
-	  
- -

- Now you need the post-renewal hooks. I dropped two separate scripts into - the renewal-hooks directory: one does the job of combining - the certificate chain and private key and the other just restarts - HAProxy. -

- -
-
-macon% cat /usr/local/etc/letsencrypt/renewal-hooks/post/001-catcerts.sh
-#!/bin/sh
-
-SITE=(your site of course)
-
-cd /usr/local/etc/letsencrypt/live/$SITE
-cat fullchain.pem privkey.pem > /usr/local/etc/ssl/haproxy.pem
-macon% cat /usr/local/etc/letsencrypt/renewal-hooks/post/002-haproxy.sh
-#!/bin/sh
-service haproxy restart
-
-	  
- -

- When certbot renew is run, certbot checks the renewal-hooks/post directory and runs any executables in it after it has renewed the certificate(s). As a side note, make sure you hit those scripts with chmod +x or they won't run.

- -

- Now all that's left is dropping a job into cron or - periodic to run certbot renew at least once or - twice within the renewal period. -

- -
-
-macon% doas crontab -l|grep certbot
-# certbot renewal
-@monthly certbot renew
-
-	  
- -

- You can always test that your scripts are working with - certbot renew --dry-run just to be safe. -

- -
-
-macon% doas certbot renew --dry-run
-Saving debug log to /var/log/letsencrypt/letsencrypt.log
-
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Processing /usr/local/etc/letsencrypt/renewal/53hor.net.conf
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Cert not due for renewal, but simulating renewal for dry run
-Plugins selected: Authenticator standalone, Installer None
-Simulating renewal of an existing certificate for 53hor.net and 7 more domains
-Performing the following challenges:
-http-01 challenge for 53hor.net
-http-01 challenge for carpentertutoring.com
-http-01 challenge for git.53hor.net
-http-01 challenge for nextcloud.53hor.net
-http-01 challenge for pkg.53hor.net
-http-01 challenge for plex.53hor.net
-http-01 challenge for theglassyladies.com
-http-01 challenge for www.53hor.net
-Waiting for verification...
-Cleaning up challenges
-
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-new certificate deployed without reload, fullchain is
-/usr/local/etc/letsencrypt/live/53hor.net/fullchain.pem
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Congratulations, all simulated renewals succeeded:
-  /usr/local/etc/letsencrypt/live/53hor.net/fullchain.pem (success)
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Running post-hook command: /usr/local/etc/letsencrypt/renewal-hooks/post/001-catcerts.sh
-Running post-hook command: /usr/local/etc/letsencrypt/renewal-hooks/post/002-haproxy.sh
-Output from post-hook command 002-haproxy.sh:
-Waiting for PIDS: 15191.
-Starting haproxy.
-
-
-		
- -

- And there it is. Automated Let's Encrypt certificate renewal on FreeBSD - with HAProxy. -

-
- -