author    Adam T. Carpenter <atc@53hor.net>  2021-04-21 22:57:39 -0400
committer Adam T. Carpenter <atc@53hor.net>  2021-04-21 22:57:39 -0400
commit    890b34bcc1a6b4073d1e512b1386634f7bc5ea52 (patch)
tree      17efbec82a5bc118c2ae0b3ec56acbf159e4edda /posts/unix
parent    e87bdb082057c4eddd1af159374b667c7fe234d4 (diff)
download  53hor-890b34bcc1a6b4073d1e512b1386634f7bc5ea52.tar.xz
          53hor-890b34bcc1a6b4073d1e512b1386634f7bc5ea52.zip
unified posts dir, until I can figure out makefile sub-subdirs. makefile auto-generates index
Diffstat (limited to 'posts/unix')
-rw-r--r--  posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html   127
-rw-r--r--  posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html             276
-rw-r--r--  posts/unix/2020-07-26-now-this-is-a-minimal-install.html                     101
-rw-r--r--  posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html  375
-rw-r--r--  posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html      256
5 files changed, 0 insertions, 1135 deletions
diff --git a/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html b/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html
deleted file mode 100644
index bbe5b28..0000000
--- a/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html
+++ /dev/null
@@ -1,127 +0,0 @@
-<!DOCTYPE html>
-<html>
- <head>
- <link rel="stylesheet" href="/includes/stylesheet.css" />
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1" />
- <meta
- property="og:description"
- content="The World Wide Web pages of Adam Carpenter"
- />
- <meta property="og:image" content="https://nextcloud.53hor.net/index.php/s/Nx9e7iHbw4t99wo/preview" />
- <meta property="og:site_name" content="53hor.net" />
- <meta
- property="og:title"
- content="Offloading GoPro Footage the Easy Way!"
- />
- <meta property="og:type" content="website" />
- <meta property="og:url" content="https://www.53hor.net" />
- <title>53hornet ➙ Offloading GoPro Footage the Easy Way!</title>
- </head>
-
- <body>
- <nav>
- <ul>
- <li>
- <a href="/">
- <img src="/includes/icons/home-roof.svg" />
- Home
- </a>
- </li>
- <li>
- <a href="/info.html">
- <img src="/includes/icons/information-variant.svg" />
- Info
- </a>
- </li>
- <li>
- <a href="https://git.53hor.net">
- <img src="/includes/icons/git.svg" />
- Repos
- </a>
- </li>
- <li>
- <a href="/hosted.html">
- <img src="/includes/icons/desktop-tower.svg" />
- Hosted
- </a>
- </li>
- <li>
- <a type="application/rss+xml" href="/rss.xml">
- <img src="/includes/icons/rss.svg" />
- RSS
- </a>
- </li>
- </ul>
- </nav>
-
- <article>
- <h1>Offloading GoPro Footage the Easy Way!</h1>
-
- <p>
- Transferring files off of most cameras to a Linux computer isn't all
- that difficult. The exception is my GoPro Hero 4 Black. For 4th of July
- week I took a bunch of video with the GoPro, approximately 20 MP4 files,
-        about 3GB each. The annoying thing about the GoPro's USB interface is
-        that you need additional software to download everything through the cable.
- The camera doesn't just show up as a USB filesystem that you can mount.
- The GoPro does have a micro-SD card but I was away from home and didn't
- have any dongles or adapters. Both of these solutions also mean taking
- the camera out of its waterproof case and off of its mount. So here's
- what I did.
- </p>
-
- <p>
- GoPro cameras, after the Hero 3, can open up an ad-hoc wireless network
- that lets you browse the GoPro's onboard files through an HTTP server.
- This means you can open your browser and scroll through the files on the
- camera at an intranet address, <code>10.5.5.9</code>, and download them
- one by one by clicking every link on every page. If you have a lot of
- footage on there it kinda sucks. So, I opened up the manual for
- <code>wget</code>. I'm sure you could get really fancy with some of the
- options but the only thing I cared about was downloading every single
- MP4 video off of the camera, automatically. I did not want to download
- any of the small video formats or actual HTML files. Here's what I used:
- </p>
-
-      <pre>
-        <code>
-wget --recursive --accept "*.MP4" http://10.5.5.9:8080/
-        </code>
-      </pre>
-
- <p>
- This tells <code>wget</code> to download all of the files at the GoPro's
- address recursively and skips any that don't have the MP4 extension. Now
- I've got a directory tree with all of my videos in it. And the best part
- is I didn't have to install the dinky GoPro app on my laptop. Hopefully
- this helps if you're looking for an easy way to migrate lots of footage
- without manually clicking through the web interface or installing
- additional software. The only downside is if you're moving a whole lot
- of footage, it's not nearly as quick as just moving files off the SD
- card. So I'd shoot for using the adapter to read off the card first and
- only use this if that's not an option, such as when the camera is
- mounted and you don't want to move it.
- </p>
-
- <p>Some things I would like to change/add:</p>
-
- <ul>
- <li>
- Download all image files as well; should be easy, just another
- <code>--accept</code>
- </li>
- <li>Initiate parallel downloads</li>
- <li>
- Clean up the directory afterwards so I just have one level of depth
- </li>
- </ul>
-
- <p>
- I could probably write a quick and dirty shell script to do all of this
- for me but I use the camera so infrequently that it's probably not even
- worth it.
- </p>
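-
-      <p>
-        If I ever do write it, it would probably look something like the
-        sketch below. This is untested and the target directory name is just
-        an example, but <code>--accept</code> takes a comma-separated list
-        (which picks up the photos) and <code>--no-directories</code> flattens
-        everything into one level, so it covers two of the three wishlist
-        items. Parallel downloads are the one thing plain
-        <code>wget</code> won't do.
-      </p>
-
-      <pre>
-        <code>
-#!/bin/sh
-# Untested sketch: pull videos and photos off the GoPro into one flat directory.
-wget --recursive --no-directories --directory-prefix=gopro-dump \
-     --accept "*.MP4,*.JPG" http://10.5.5.9:8080/
-        </code>
-      </pre>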
- </article>
- </body>
-</html>
diff --git a/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html b/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html
deleted file mode 100644
index 9751eda..0000000
--- a/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html
+++ /dev/null
@@ -1,276 +0,0 @@
-<!DOCTYPE html>
-<html>
- <head>
- <link rel="stylesheet" href="/includes/stylesheet.css" />
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1" />
- <meta
- property="og:description"
- content="The World Wide Web pages of Adam Carpenter"
- />
- <meta property="og:image" content="https://nextcloud.53hor.net/index.php/s/Nx9e7iHbw4t99wo/preview" />
- <meta property="og:site_name" content="53hor.net" />
- <meta property="og:title" content="How I Do Data Recovery" />
- <meta property="og:type" content="website" />
- <meta property="og:url" content="https://www.53hor.net" />
- <title>53hornet ➙ How I Do Data Recovery</title>
- </head>
-
- <body>
- <nav>
- <ul>
- <li>
- <a href="/">
- <img src="/includes/icons/home-roof.svg" />
- Home
- </a>
- </li>
- <li>
- <a href="/info.html">
- <img src="/includes/icons/information-variant.svg" />
- Info
- </a>
- </li>
- <li>
- <a href="https://git.53hor.net">
- <img src="/includes/icons/git.svg" />
- Repos
- </a>
- </li>
- <li>
- <a href="/hosted.html">
- <img src="/includes/icons/desktop-tower.svg" />
- Hosted
- </a>
- </li>
- <li>
- <a type="application/rss+xml" href="/rss.xml">
- <img src="/includes/icons/rss.svg" />
- RSS
- </a>
- </li>
- </ul>
- </nav>
-
- <article>
- <h1>How I Do Data Recovery</h1>
-
- <p>
- This week Amy plugged in her flash drive to discover that there were no
- files on it. Weeks before there had been dozens of large cuts of footage
- that she needed to edit down for work. Hours of recordings were
- seemingly gone. And the most annoying part was the drive had worked
- perfectly on several other occasions. Just not now that the footage was
- actually needed of course. Initially it looked like everything had been
- wiped clean, however both Amy's Mac and her PC thought the drive was
-        half full. Its overall capacity was 64GB but it showed only about 36GB
- free. So there still had to be data on there if we could find the right
- tool to salvage it.
- </p>
-
- <p>
- Luckily this wasn't the first time I had to recover accidentally (or
- magically) deleted files. I had previously done so with some success at
- my tech support job, for some college friends, and for my in-laws'
- retired laptops. So I had a pretty clear idea of what to expect. The
- only trick was finding a tool that knew what files it was looking for.
- The camera that took the video clips was a Sony and apparently they
- record into <code>m2ts</code> files, which are kind of a unique format
- in that they only show up on Blu-Ray discs and Sony camcorders. Enter my
- favorite two tools for dealing with potentially-destroyed data:
- <code>ddrescue</code> and <code>photorec</code>.
- </p>
-
- <h2>DDRescue</h2>
-
- <p>
- <code>ddrescue</code> is a godsend of a tool. If you've ever used
- <code>dd</code> before, forget about it. Use <code>ddrescue</code>. You
-        might as well <code>alias dd=ddrescue</code> because it's that great.
-        It has a plethora of additional options, displays its progress as it
-        works, recovers and retries in the event of I/O errors, and does
-        everything that good old <code>dd</code> can do. It's particularly good
- at protecting partitions or disks that have been corrupted or damaged by
- rescuing undamaged portions first. Oh, and have you ever had to cancel a
- <code>dd</code> operation? Did I mention that <code>ddrescue</code> can
- pause and resume operations? It's that good.
- </p>
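-
-      <p>
-        The pause-and-resume trick works through <code>ddrescue</code>'s
-        mapfile: hand it a third argument and it logs which blocks have
-        already been tried. A sketch (the device and file names here are just
-        examples):
-      </p>
-
-      <pre>
-        <code>
-# First run; Ctrl-C whenever you like.
-ddrescue /dev/sdb1 rescue.dd rescue.map
-# Re-running the exact same command later picks up where it left off.
-ddrescue /dev/sdb1 rescue.dd rescue.map
-        </code>
-      </pre>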
-
- <h2>PhotoRec</h2>
-
- <p>
- <code>photorec</code> is probably the best missing file recovery tool
-        I've ever used in my entire life. And I've used quite a few. Other
-        tools like Recuva et al. have never given me results as good as
-        <code>photorec</code>'s. And <code>photorec</code> isn't just for
-        photos: it can recover documents (à la the Office suite), music, images,
- config files, and videos (including the very odd
- <code>m2ts</code> format!). The other nice thing is
- <code>photorec</code> will work on just about any source. It's also free
- software which makes me wonder why there are like $50 recovery tools for
- Windows that look super sketchy.
- </p>
-
- <h2>In Practice</h2>
-
- <p>
- So here's what I did to get Amy's files back. Luckily she didn't write
- anything out to the drive afterward so the chances (I thought) were
- pretty good that I would get <em>something</em> back. The first thing I
- always do is make a full image of whatever media I'm trying to recover
- from. I do this for a couple of reasons. First of all it's a backup. If
- something goes wrong during recovery I don't have to worry about the
- original, fragile media being damaged or wiped. Furthermore, I can work
- with multiple copies at a time. If it's a large image that means
- multiple tools or even multiple PCs can work on it at once. It's also
- just plain faster working off a disk image than a measly flash drive. So
- I used <code>ddrescue</code> to make an image of Amy's drive.
- </p>
-
- <pre><code>
-$ sudo ddrescue /dev/sdb1 amy-lexar.dd
-GNU ddrescue 1.24
-Press Ctrl-C to interrupt
- ipos: 54198 kB, non-trimmed: 0 B, current rate: 7864 kB/s
- opos: 54198 kB, non-scraped: 0 B, average rate: 18066 kB/s
-non-tried: 63967 MB, bad-sector: 0 B, error rate: 0 B/s
- rescued: 54198 kB, bad areas: 0, run time: 2s
-pct rescued: 0.08%, read errors: 0, remaining time: 59m
- time since last successful read: n/a
-Copying non-tried blocks... Pass 1 (forwards)
- </code></pre>
-
- <p>
- The result was a very large partition image that I could fearlessly play
- around with.
- </p>
-
- <pre>
- <code>
-$ ll amy-lexar.dd
--rw-r--r-- 1 root root 60G Sep 24 02:45 amy-lexar.dd
- </code>
- </pre>
-
- <p>
- Then I could run <code>photorec</code> on the image. This brings up a
- TUI with all of the listed media that I can try and recover from.
- </p>
-
- <pre><code>
-$ sudo photorec amy-lexar.dd
-
-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
- PhotoRec is free software, and
-comes with ABSOLUTELY NO WARRANTY.
-
-Select a media (use Arrow keys, then press Enter):
->Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
-
->[Proceed ] [ Quit ]
-
-Note:
-Disk capacity must be correctly detected for a successful recovery.
-If a disk listed above has incorrect size, check HD jumper settings, BIOS
-detection, and install the latest OS patches and disk drivers.
- </code></pre>
-
- <p>
- After hitting proceed <code>photorec</code> asks if you want to scan
- just a particular partition or the whole disk (if you made a whole disk
- image). I can usually get away with just selecting the partition I know
- the files are on and starting a search.
- </p>
-
- <pre><code>
-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
-Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
-
- Partition Start End Size in sectors
- Unknown 0 0 1 7783 139 4 125042656 [Whole disk]
-> P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
-
->[ Search ] [Options ] [File Opt] [ Quit ]
- Start file recovery
- </code></pre>
-
- <p>
- Then <code>photorec</code> asks a couple of questions about the
- formatting of the media. It can usually figure them out all by itself so
- I just use the default options unless it's way out in left field.
- </p>
-
- <pre><code>
-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
- P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
-
-To recover lost files, PhotoRec need to know the filesystem type where the
-file were stored:
- [ ext2/ext3 ] ext2/ext3/ext4 filesystem
->[ Other ] FAT/NTFS/HFS+/ReiserFS/...
- </code></pre>
-
- <p>
- Now this menu is where I don't just go with the default path.
- <code>photorec</code> will offer to search just unallocated space or the
- entire partition. I always go for the whole partition here; sometimes
- I'll get back files that I didn't really care about but more often than
- not I end up rescuing more data this way. In this scenario searching
- just unallocated space found no files at all. So I told
- <code>photorec</code> to search everything.
- </p>
-
- <pre><code>
-PhotoRec 7.0, Data Recovery Utility, April 2015
-http://www.cgsecurity.org
-
- P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
-
-
-Please choose if all space need to be analysed:
- [ Free ] Scan for file from FAT32 unallocated space only
->[ Whole ] Extract files from whole partition
- </code></pre>
-
- <p>
- Now it'll ask where you want to save any files it finds. I threw them
- all into a directory under home that I could zip up and send to Amy's
- Mac later.
- </p>
-
- <pre><code>
-PhotoRec 7.0, Data Recovery Utility, April 2015
-
-Please select a destination to save the recovered files.
-Do not choose to write the files to the same partition they were stored on.
-Keys: Arrow keys to select another directory
- C when the destination is correct
- Q to quit
-Directory /home/adam
- drwx------ 1000 1000 4096 28-Sep-2019 12:10 .
- drwxr-xr-x 0 0 4096 26-Jan-2019 15:32 ..
->drwxr-xr-x 1000 1000 4096 28-Sep-2019 12:10 amy-lexar-recovery
- </code></pre>
-
- <p>
-        And then just press <code>C</code>. <code>photorec</code> will start
-        copying all of the files it finds into that directory. It reports what
-        kinds of files it found and how many it was able to locate. I was able
-        to recover all of Amy's lost footage this way, along with some
-        straggler files that had been on the drive at one point. This has worked
- for me many times in the past, both on newer devices like flash drives
- and on super old, sketchy IDE hard drives. I probably won't ever pay for
- data recovery unless a drive has been physically damaged in some way. In
- other words, this software works great for me and I don't foresee the
- need for anything else out there. It's simple to use and is typically
- pretty reliable.
- </p>
- </article>
- </body>
-</html>
diff --git a/posts/unix/2020-07-26-now-this-is-a-minimal-install.html b/posts/unix/2020-07-26-now-this-is-a-minimal-install.html
deleted file mode 100644
index 64652a7..0000000
--- a/posts/unix/2020-07-26-now-this-is-a-minimal-install.html
+++ /dev/null
@@ -1,101 +0,0 @@
-<!DOCTYPE html>
-<html>
- <head>
- <link rel="stylesheet" href="/includes/stylesheet.css" />
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1" />
- <meta
- property="og:description"
- content="The World Wide Web pages of Adam Carpenter"
- />
- <meta property="og:image" content="https://nextcloud.53hor.net/index.php/s/Nx9e7iHbw4t99wo/preview" />
- <meta property="og:site_name" content="53hor.net" />
- <meta property="og:title" content="Now This is a Minimal Install!" />
- <meta property="og:type" content="website" />
- <meta property="og:url" content="https://www.53hor.net" />
- <title>53hornet ➙ Now This is a Minimal Install!</title>
- </head>
-
- <body>
- <nav>
- <ul>
- <li>
- <a href="/">
- <img src="/includes/icons/home-roof.svg" />
- Home
- </a>
- </li>
- <li>
- <a href="/info.html">
- <img src="/includes/icons/information-variant.svg" />
- Info
- </a>
- </li>
- <li>
- <a href="https://git.53hor.net">
- <img src="/includes/icons/git.svg" />
- Repos
- </a>
- </li>
- <li>
- <a href="/hosted.html">
- <img src="/includes/icons/desktop-tower.svg" />
- Hosted
- </a>
- </li>
- <li>
- <a type="application/rss+xml" href="/rss.xml">
- <img src="/includes/icons/rss.svg" />
- RSS
- </a>
- </li>
- </ul>
- </nav>
-
- <article>
- <h1>Now This is a Minimal Install!</h1>
-
- <p>
-        I just got done configuring Poudriere on FreeBSD 12.1-RELEASE. The
-        awesome thing about it is that it lets you configure and maintain your
- own package repository. All of the ports and their dependencies are
- built from source with personalized options. That means that I can
- maintain my own repo of just the packages I need with just the
- compile-time options I need. For example, for the Nvidia driver set I
- disabled all Wayland related flags. I use Xorg so there was no need to
- have that functionality built in.
- </p>
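-
-      <p>
-        For reference, the broad strokes of that setup looked something like
-        the following. This is a from-memory sketch, and the jail and tree
-        names are my own examples; see <code>poudriere(8)</code> for the
-        details.
-      </p>
-
-      <pre>
-        <code>
-# Build jail matching the host release, plus a ports tree to build from.
-poudriere jail -c -j 121amd64 -v 12.1-RELEASE
-poudriere ports -c -p default
-# Set compile-time options (this is where I turned off the Wayland flags).
-poudriere options x11/nvidia-driver
-# Build the port and its dependencies into the local repo.
-poudriere bulk -j 121amd64 -p default x11/nvidia-driver
-        </code>
-      </pre>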
-
- <p>
- Compile times are pretty long but I hope to change that by upgrading my
- home server to FreeBSD as well (from Ubuntu Server). Then I can
- configure poudriere to serve up a ports tree and my own pkg repo from
- there. The server is a lot faster than my laptop and will build packages
- way faster, and I'll be able to use those packages on both the server
- and my laptop and any jails I have running. Jails (and ZFS) also make
- poudriere really cool to use as all of the building is done inside a
- jail. When the time comes I can just remove the jail and poudriere ports
- tree from my laptop and update pkg to point to my web server.
- </p>
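-
-      <p>
-        Pointing <code>pkg</code> at that repo should then just be a drop-in
-        config file. Something like the following, where the hostname and path
-        are hypothetical stand-ins for wherever the server ends up:
-      </p>
-
-      <pre>
-        <code>
-# /usr/local/etc/pkg/repos/custom.conf
-FreeBSD: { enabled: no }
-custom: {
-  url: "http://pkg.example.net/packages/121amd64-default",
-  enabled: yes
-}
-        </code>
-      </pre>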
-
- <p>
- This is, as I understand it, the sane way to do package management in
- FreeBSD. The binary package repo is basically the ports tree
-        pre-assembled with default options. Sometimes those packages are
-        compiled without functionality that most users don't need but you might. In those
- situations, you're forced to use ports. The trouble is you're not really
- supposed to mix ports and binary packages. The reason, again as I
- understand it, is because ports are updated more frequently. So binary
- packages and ports can have different dependency versions, which can
- sometimes break compatibility on an upgrade. Most FreeBSD users
- recommend installing everything with ports (which is just a make install
- inside the local tree) but then you lose the package management features
- that come with pkg. Poudriere lets you kind of do both by creating your
- "own personal binary repo" out of a list of preconfigured, pre-built
- ports.
- </p>
-
- <p>FreeBSD rocks.</p>
- </article>
- </body>
-</html>
diff --git a/posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html b/posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html
deleted file mode 100644
index 6f515f3..0000000
--- a/posts/unix/2021-01-15-root-on-zfs-a-zpool-of-mirror-vdevs-the-easy-way.html
+++ /dev/null
@@ -1,375 +0,0 @@
-<!DOCTYPE html>
-<html>
- <head>
- <link rel="stylesheet" href="/includes/stylesheet.css" />
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1" />
- <meta
- property="og:description"
- content="The World Wide Web pages of Adam Carpenter"
- />
- <meta
- property="og:image"
- content="https://nextcloud.53hor.net/index.php/s/Nx9e7iHbw4t99wo/preview"
- />
- <meta property="og:site_name" content="53hor.net" />
- <meta
- property="og:title"
- content="Root on ZFS: A ZPool of Mirror VDEVs The Easy Way"
- />
- <meta property="og:type" content="website" />
- <meta property="og:url" content="https://www.53hor.net" />
- <title>53hornet ➙ Root on ZFS: A ZPool of Mirror VDEVs The Easy Way</title>
- </head>
-
- <body>
- <nav>
- <ul>
- <li>
- <a href="/">
- <img src="/includes/icons/home-roof.svg" />
- Home
- </a>
- </li>
- <li>
- <a href="/info.html">
- <img src="/includes/icons/information-variant.svg" />
- Info
- </a>
- </li>
- <li>
- <a href="https://git.53hor.net">
- <img src="/includes/icons/git.svg" />
- Repos
- </a>
- </li>
- <li>
- <a href="/hosted.html">
- <img src="/includes/icons/desktop-tower.svg" />
- Hosted
- </a>
- </li>
- <li>
- <a type="application/rss+xml" href="/rss.xml">
- <img src="/includes/icons/rss.svg" />
- RSS
- </a>
- </li>
- </ul>
- </nav>
-
- <article>
- <h1>Root on ZFS: A ZPool of Mirror VDEVs</h1>
-
- <p class="description">
- I wanted/needed to make a root on ZFS pool out of multiple mirror VDEVs,
- and since I'm not a ZFS expert, I took a little shortcut.
- </p>
-
- <p>
- I recently got a new-to-me server (yay!) and I wanted to do a
- root-on-ZFS setup on it. I've really enjoyed using ZFS for my data
- storage pools for a long time. I've also enjoyed the extra functionality
- that comes with having a bootable system installed on ZFS on my laptop
- and decided with this upgrade it's time to do the same on my server.
-        Historically I've used RAIDZ for my storage pools. RAIDZ functions
-        almost like RAID5 but at the ZFS level. It gives you parity so that a
- certain number of disks can die from your pool and you won't lose any
- data. It does have a few tradeoffs however*, and for personal
- preferences I've decided that for the future I would like to have a
- single ZPool over top of multiple mirror VDEVs. In other words, my main
- root+storage pool will be made up of two-disk mirrors and can be
- expanded to include any number of new mirrors I can fit into the
- machine.
- </p>
-
- <p>
- This did present some complications. First of all,
- <code>bsdinstall</code> won't set this up for you automatically (and
- sure enough,
- <a
- href="https://www.freebsd.org/doc/handbook/bsdinstall-partitioning.html"
- >in the handbook</a
- >
- it mentions the guided root on ZFS tool will only create a single,
- top-level VDEV unless it's a stripe). It will happily let you use RAIDZ
- for your ZROOT but not the more custom approach I'm taking. I did
- however use
- <code>bsdinstall</code> as a shortcut so I wouldn't have to do all of
- the partitioning and pool setup manually, and that's what I'm going to
- document below. Because I'm totally going to forget how this works the
- next time I have to do it.
- </p>
-
- <p>
- In my scenario I have an eight-slot, hot-swappable PERC H310 controller
- that's configured for AHCI passthrough. In other words, all FreeBSD sees
- is as many disks as I have plugged into the backplane. I'm going to fill
- it with 6x2TB hard disks which, as I said before, I want to act as three
- mirrors (two disks each) in a single, bootable, growable ZPool. For
- starters, I shoved the FreeBSD installer on a flash drive and booted
- from it. I followed all of the regular steps (setting hostname, getting
- online, etc.) until I got to the guided root on ZFS disk partitioning
- setup.
- </p>
-
- <p>
- Now here's where I'm going to take the first step on my shortcut. Since
- there is no option to create the pool of arbitrary mirrors I'm just
- going to create a pool from a single mirror VDEV of two disks. Later I
- will expand the pool to include the other two mirrors I had intended
- for. My selections were as follows:
- </p>
-
- <ul>
- <li>Pool Type/Disks: mirror mfisyspd0 mfisyspd1</li>
- <li>Pool Name: zroot</li>
- <li>Partition Scheme: GPT (EFI)</li>
- <li>Swap Size: 4g</li>
- </ul>
-
- <p>
- Everything else was left as a default. Then I followed the installer to
- completion. At the end, when it asked if I wanted to drop into a shell
- to do more to the installation, I did.
- </p>
-
- <p>
- The installer created the following disk layout for the two disks that I
- selected.
- </p>
-
- <pre>
-<code>
-atc@macon:~ % gpart show
-=> 40 3907029088 mfisyspd0 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-
-=> 40 3907029088 mfisyspd1 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-</code>
-</pre>
-
- <p>
- The installer also created the following ZPool from my single mirror
- VDEV.
- </p>
-
- <pre>
-<code>
-atc@macon:~ % zpool status
- pool: zroot
- state: ONLINE
- scan: none requested
-config:
-
- NAME STATE READ WRITE CKSUM
- zroot ONLINE 0 0 0
- mirror-0 ONLINE 0 0 0
- mfisyspd0p3 ONLINE 0 0 0
- mfisyspd1p3 ONLINE 0 0 0
-
-errors: No known data errors
-</code>
-</pre>
-
- <p>
- There are a couple of things to take note of here. First of all,
- <em>both</em> disks in the bootable ZPool have an EFI boot partition.
- That means they're both a part of (or capable of?) booting the pool.
- Second, they both have some swap space. Finally, they both have a third
- partition which is dedicated to ZFS data, and that partition is what got
- added to my VDEV.
- </p>
-
- <p>
- So where do I go from here? I was tempted to just
- <code>zpool add mirror ... ...</code> and just add my other disks to the
- pool (actually, I <em>did</em> do this but it rendered the volume
- unbootable for a very important reason), but then I wouldn't have those
- all-important boot partitions (using whole-disk mirror VDEVS). Instead,
- I need to manually go back and re-partition four disks exactly like the
- first two. Or, since all I want is two more of what's already been done,
- I can just clone the partitions using <code>gpart backup</code> and
- <code>restore</code>! Easy! Here's what I did for all four remaining
- disks:
- </p>
-
- <pre>
-<code>
-root@macon:~ # gpart backup mfisyspd0 | gpart restore -F mfisyspd2
-</code>
-</pre>
-
- <p>
- Full disclosure, I didn't even think of this as a possibility
- <a
- href="ihttps://unix.stackexchange.com/questions/472147/replacing-disk-when-using-freebsd-zfs-zroot-zfs-on-partition#472175"
- >until I read this Stack Exchange post</a
- >. This gave me a disk layout like this:
- </p>
-
- <pre>
-<code>
-atc@macon:~ % gpart show
-=> 40 3907029088 mfisyspd0 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-
-=> 40 3907029088 mfisyspd1 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-
-=> 40 3907029088 mfisyspd2 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-
-=> 40 3907029088 mfisyspd3 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-
-=> 40 3907029088 mfisyspd4 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-
-=> 40 3907029088 mfisyspd5 GPT (1.8T)
- 40 409600 1 efi (200M)
- 409640 2008 - free - (1.0M)
- 411648 8388608 2 freebsd-swap (4.0G)
- 8800256 3898228736 3 freebsd-zfs (1.8T)
- 3907028992 136 - free - (68K)
-</code>
-</pre>
-
- <p>
- And to be fair, this makes a lot of logical sense. You don't want a
- six-disk pool to only be bootable by two of the disks or you're
- defeating some of the purposes of redundancy. So now I can extend my
- ZPool to include those last four disks.
- </p>
-
- <p>
- This next step may or may not be a requirement. I wanted to overwrite
- where I assumed any old ZFS/ZPool metadata might be on my four new
- disks. This could just be for nothing and I admit that, but I've run
- into trouble in the past where a ZPool wasn't properly
- exported/destroyed before the drives were removed for another purpose
- and when you use those drives in future
- <code>zpool import</code>s, you can see both the new and the old, failed
- pools. And, in the previous step I cloned an old ZFS partition many
-        times! So I did a small <code>dd</code> over the start of each newly
-        cloned ZFS partition (not the whole disk, which would clobber the
-        partition table I had just restored) to help me sleep at night:
- </p>
-
- <pre>
-<code>
-root@macon:~ # dd if=/dev/zero of=/dev/mfisyspd2p3 bs=1M count=100
-</code>
-</pre>
-
- <p>
- One final, precautionary step is to write the EFI boot loader to the new
-        disks. In the
-        <a href="https://www.freebsd.org/doc/handbook/zfs-zpool.html"
-          >zpool admin handbook</a
-        >
-        it mentions you should do this any time you <em>replace</em> a zroot
-        device, so I'll do it just for good measure on all four additional
- disks:
- </p>
-
- <pre>
-<code>
-root@macon:~ # gpart bootcode -p /boot/boot1.efifat -i 1 mfisyspd2
-</code>
-</pre>
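-
-      <p>
-        Don't forget that the command is different for UEFI and a traditional
-        BIOS. On a BIOS machine the layout would have a freebsd-boot partition
-        instead of the efi one, and (as a sketch of the usual incantation, not
-        something I ran here, with the partition index depending on your
-        layout) the equivalent would be:
-      </p>
-
-      <pre>
-<code>
-root@macon:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 mfisyspd2
-</code>
-</pre>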
-
-      <p>And finally, I can add my new VDEVs:</p>
-
- <pre>
-<code>
-root@macon:~ # zpool add zroot mirror mfisyspd2p3 mfisyspd3p3
-root@macon:~ # zpool add zroot mirror mfisyspd4p3 mfisyspd5p3
-</code>
-</pre>
-
- <p>And now my pool looks like this:</p>
-
- <pre>
-<code>
-atc@macon:~ % zpool status
- pool: zroot
- state: ONLINE
- scan: none requested
-config:
-
- NAME STATE READ WRITE CKSUM
- zroot ONLINE 0 0 0
- mirror-0 ONLINE 0 0 0
- mfisyspd0p3 ONLINE 0 0 0
- mfisyspd1p3 ONLINE 0 0 0
- mirror-1 ONLINE 0 0 0
- mfisyspd2p3 ONLINE 0 0 0
- mfisyspd3p3 ONLINE 0 0 0
- mirror-2 ONLINE 0 0 0
- mfisyspd4p3 ONLINE 0 0 0
- mfisyspd5p3 ONLINE 0 0 0
-
-errors: No known data errors
-</code>
-</pre>
-
- <p>
- Boom. A growable, bootable zroot ZPool. Is it easier than just
- configuring the partitions and root on ZFS by hand? Probably not for a
- BSD veteran. But since I'm a BSD layman, this is something I can live
- with pretty easily. At least until this becomes an option in
-        <code>bsdinstall</code> maybe? At least now I can add as many more
- mirrors as I can fit into my system. And it's just as easy to replace
- them. This is better for me than my previous RAIDZ, where I would have
- to destroy and re-create the pool in order to add more disks to the
- VDEV. Now I just create another little mirror and grow the pool and all
- of my filesystems just see more storage. And of course, having ZFS for
- all of my data makes it super easy to create filesystems on the fly,
- compress or quota them, and take snapshots (including the live ZROOT!)
- and send those snapshots over the network. Pretty awesome.
- </p>
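-
-      <p>
-        As a taste of that last bit, a recursive snapshot-and-send is only a
-        couple of commands. The target host and pool here are hypothetical:
-      </p>
-
-      <pre>
-<code>
-root@macon:~ # zfs snapshot -r zroot@backup
-root@macon:~ # zfs send -R zroot@backup | ssh backuphost zfs receive -d tank/macon
-</code>
-</pre>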
-
- <p>
- * I'm not going to explain why here, but
- <a
- href="http://www.openoid.net/zfs-you-should-use-mirror-vdevs-not-raidz/"
- >this is a pretty well thought out article</a
- >
- that should give you an idea about the pros and cons of RAIDZ versus
- mirror VDEVs so you can draw your own conclusions.
- </p>
- </article>
- </body>
-</html>
diff --git a/posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html b/posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html
deleted file mode 100644
index 634530b..0000000
--- a/posts/unix/2021-03-19-how-to-automate-certbot-renewal-with-haproxy.html
+++ /dev/null
@@ -1,256 +0,0 @@
-<!DOCTYPE html>
-<html>
- <head>
- <link rel="stylesheet" href="/includes/stylesheet.css" />
- <meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1" />
- <meta
- property="og:description"
- content="The World Wide Web pages of Adam Carpenter"
- />
- <meta
- property="og:image"
- content="https://nextcloud.53hor.net/index.php/s/Nx9e7iHbw4t99wo/preview"
- />
- <meta property="og:site_name" content="53hor.net" />
- <meta
- property="og:title"
- content="How to Automate Certbot Renewal with HAProxy"
- />
- <meta property="og:type" content="website" />
- <meta property="og:url" content="https://www.53hor.net" />
- <title>53hornet ➙ How to Automate Certbot Renewal with HAProxy</title>
- </head>
-
- <body>
- <nav>
- <ul>
- <li>
- <a href="/">
- <img src="/includes/icons/home-roof.svg" />
- Home
- </a>
- </li>
- <li>
- <a href="/info.html">
- <img src="/includes/icons/information-variant.svg" />
- Info
- </a>
- </li>
- <li>
- <a href="https://git.53hor.net">
- <img src="/includes/icons/git.svg" />
- Repos
- </a>
- </li>
- <li>
- <a href="/hosted.html">
- <img src="/includes/icons/desktop-tower.svg" />
- Hosted
- </a>
- </li>
- <li>
- <a type="application/rss+xml" href="/rss.xml">
- <img src="/includes/icons/rss.svg" />
- RSS
- </a>
- </li>
- </ul>
- </nav>
-
- <article>
- <h1>How to Automate Certbot Renewal with HAProxy</h1>
-
- <p>
- So this is specifically for HAProxy on FreeBSD, but it should apply to
- other *nix systems as well. Basically, I use HAProxy as a reverse proxy
- to a bunch of servers I administer. I use Let's Encrypt for a
- certificate and I used <code>certbot</code> to generate that
- certificate. Generating the certificate for the first time is easy and
-        has lots of documentation, but it wasn't initially clear how I could
- easily set up auto-renewal. Here's how I did it.
- </p>
-
- <p>
- If you've already set up TLS termination with HAProxy and
- <code>certbot</code>, you know you need to combine your Let's Encrypt
- fullchain and private key to get a combined certificate that HAProxy can
- use. You can do this by <code>cat</code>-ing the chain and key together
- like so:
- </p>
-
- <pre>
-<code>
-cat /usr/local/etc/letsencrypt/live/$SITE/fullchain.pem /usr/local/etc/letsencrypt/live/$SITE/privkey.pem > /usr/local/etc/ssl/haproxy.pem
-</code>
- </pre>
-
- <p>
- In this example, <code>$SITE</code> is your domain name that you fed
- HAProxy when you created the certificate and <code>haproxy.pem</code> is
- wherever you're storing HAProxy's combined certificate. Your HAProxy
- config then points to that certificate like this:
- </p>
-
- <pre>
-<code>
-macon% grep crt /usr/local/etc/haproxy.conf
- bind *:443 ssl crt /usr/local/etc/ssl/haproxy.pem
-</code>
- </pre>
-
- <p>
- And that was the end of the first-time setup. Then a few months later
- you probably had to do it again because Let's Encrypt certs are only
-        good for 90 days between renewals. To renew the certificate, you
-        usually run <code>certbot renew</code>; it detects which certificates
-        are present and uses either the webroot or standalone server renewal
- process. Then you have to <code>cat</code> the fullchain and privkey
- together and restart HAProxy so it starts using the new certificate.
- </p>
-
- <p>
- To automate those steps, newer versions of
- <code>certbot</code> will run any post renewal hooks (read: scripts)
- that you want. You can also configure HAProxy and
- <code>certbot</code> to perform the ACME challenge dance for renewal so
- that you don't have to use it interactively.
- </p>
-
- <p>
- First, if you haven't already done it, change your HAProxy config so
- there's a frontend+backend for responding to ACME challenges. In a
- frontend listening for requests, create an access control list for any
- request looking for <code>/.well-known/acme-challenge/</code>. Send
- those requests to a backend server with an unused local port.
- </p>
-
- <pre>
-<code>
-frontend http-in
- acl letsencrypt-acl path_beg /.well-known/acme-challenge/
- use_backend letsencrypt-backend if letsencrypt-acl
- ...
-backend letsencrypt-backend
- server letsencrypt 127.0.0.1:54321
-</code>
- </pre>
-
- <p>
- What this will do is allow <code>certbot</code> and Let's Encrypt to
-        renew your certificate in standalone mode via your reverse proxy. As an added
- bonus it prevents you from having to open up an additional port on your
- firewall.
- </p>
-
- <p>
- Now you've gotta configure <code>certbot</code> to do just that. A
- config file was created in <code>certbot</code>'s
-        <code>renewal</code> directory for your site. All you need to do in that
- file is add a line to the <code>[renewalparams]</code> section
- specifying the port you're using in your HAProxy config.
- </p>
-
- <pre>
-<code>
-macon% echo "http01_port = 54321" >> /usr/local/etc/letsencrypt/renewal/$SITE.conf
-</code>
- </pre>
-
- <p>
- Now you need the post-renewal hooks. I dropped two separate scripts into
- the <code>renewal-hooks</code> directory: one does the job of combining
- the certificate chain and private key and the other just restarts
- HAProxy.
- </p>
-
- <pre>
-<code>
-macon% cat /usr/local/etc/letsencrypt/renewal-hooks/post/001-catcerts.sh
-#!/bin/sh
-
-SITE=yourdomain.example  # your site, of course
-
-cd /usr/local/etc/letsencrypt/live/$SITE
-cat fullchain.pem privkey.pem > /usr/local/etc/ssl/haproxy.pem
-macon% cat /usr/local/etc/letsencrypt/renewal-hooks/post/002-haproxy.sh
-#!/bin/sh
-service haproxy restart
-</code>
- </pre>
-
- <p>
- When <code>certbot renew</code> is run, <code>certbot</code> checks the
-      <code>renewal-hooks/post</code> directory and runs any executable scripts
- in it after it's renewed the certificate(s). As a side note,
- <em>make sure you hit those scripts with <code>chmod +x</code></em> or
- they probably won't run.
- </p>
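-
-      <p>
-        In my case that was a one-liner over the whole hooks directory:
-      </p>
-
-      <pre>
-<code>
-macon% doas chmod +x /usr/local/etc/letsencrypt/renewal-hooks/post/*.sh
-</code>
-      </pre>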
-
- <p>
- Now all that's left is dropping a job into <code>cron</code> or
- <code>periodic</code> to run <code>certbot renew</code> at least once or
- twice within the renewal period.
- </p>
-
- <pre>
-<code>
-macon% doas crontab -l|grep certbot
-# certbot renewal
-@monthly certbot renew
-</code>
- </pre>
-
- <p>
- You can always test that your scripts are working with
- <code>certbot renew --dry-run</code> just to be safe.
- </p>
-
- <pre>
-<code>
-macon% doas certbot renew --dry-run
-Saving debug log to /var/log/letsencrypt/letsencrypt.log
-
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Processing /usr/local/etc/letsencrypt/renewal/53hor.net.conf
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Cert not due for renewal, but simulating renewal for dry run
-Plugins selected: Authenticator standalone, Installer None
-Simulating renewal of an existing certificate for 53hor.net and 7 more domains
-Performing the following challenges:
-http-01 challenge for 53hor.net
-http-01 challenge for carpentertutoring.com
-http-01 challenge for git.53hor.net
-http-01 challenge for nextcloud.53hor.net
-http-01 challenge for pkg.53hor.net
-http-01 challenge for plex.53hor.net
-http-01 challenge for theglassyladies.com
-http-01 challenge for www.53hor.net
-Waiting for verification...
-Cleaning up challenges
-
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-new certificate deployed without reload, fullchain is
-/usr/local/etc/letsencrypt/live/53hor.net/fullchain.pem
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Congratulations, all simulated renewals succeeded:
- /usr/local/etc/letsencrypt/live/53hor.net/fullchain.pem (success)
-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Running post-hook command: /usr/local/etc/letsencrypt/renewal-hooks/post/001-catcerts.sh
-Running post-hook command: /usr/local/etc/letsencrypt/renewal-hooks/post/002-haproxy.sh
-Output from post-hook command 002-haproxy.sh:
-Waiting for PIDS: 15191.
-Starting haproxy.
-
-</code>
- </pre>
-
- <p>
- And there it is. Automated Let's Encrypt certificate renewal on FreeBSD
- with HAProxy.
- </p>
- </article>
- </body>
-</html>