author     Adam T. Carpenter <atc@53hor.net>    2020-11-27 10:34:19 -0500
committer  Adam T. Carpenter <atc@53hor.net>    2020-11-27 10:34:19 -0500
commit     0d26219384c908999fbfa942c30e10d44c487899 (patch)
tree       93193ffd91f21d6e22ace0a8ad3378bf129377ae /unix
parent     db88cf6a17bf89759bf555647b14233b99be673c (diff)
added posts as html, fixed nav, updated styles and images
Diffstat (limited to 'unix')
-rw-r--r--  unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html   128
-rw-r--r--  unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.md      64
-rw-r--r--  unix/2019-09-28-my-preferred-method-for-data-recovery.html             286
-rw-r--r--  unix/2019-09-28-my-preferred-method-for-data-recovery.md               203
-rw-r--r--  unix/2020-07-26-now-this-is-a-minimal-install.html                     107
-rw-r--r--  unix/2020-07-26-now-this-is-a-minimal-install.md                        54
-rw-r--r--  unix/dear-god-why-are-pdf-editors-such-an-ordeal.html                   79
-rw-r--r--  unix/dear-god-why-are-pdf-editors-such-an-ordeal.md                     18
-rw-r--r--  unix/the-quest-for-automated-bluray-ripping.md                          10
9 files changed, 600 insertions, 349 deletions
diff --git a/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html b/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html
new file mode 100644
index 0000000..47fb0b3
--- /dev/null
+++ b/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html
@@ -0,0 +1,128 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="Offloading GoPro Video the Easy Way!" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ Offloading GoPro Video the Easy Way!</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>Offloading GoPro Video the Easy Way!</h1>
+
+ <p>
+ Transferring files off of most cameras to a Linux computer isn't all
+ that difficult. The exception is my GoPro Hero 4 Black. For 4th of July
+ week I took a bunch of video with the GoPro, approximately 20 MP4 files,
+ about 3GB each. The annoying thing about the GoPro's USB interface is
+ you need additional software to download everything through the cable.
+ The camera doesn't just show up as a USB filesystem that you can mount.
+ The GoPro does have a micro-SD card but I was away from home and didn't
+ have any dongles or adapters. Both of these solutions also mean taking
+ the camera out of its waterproof case and off of its mount. So here's
+ what I did.
+ </p>
+
+ <p>
+ GoPro cameras, after the Hero 3, can open up an ad-hoc wireless network
+ that lets you browse the GoPro's onboard files through an HTTP server.
+ This means you can open your browser and scroll through the files on the
+ camera at an intranet address, <code>10.5.5.9</code>, and download them
+ one by one by clicking every link on every page. If you have a lot of
+ footage on there it kinda sucks. So, I opened up the manual for
+ <code>wget</code>. I'm sure you could get really fancy with some of the
+ options but the only thing I cared about was downloading every single
+ MP4 video off of the camera, automatically. I did not want to download
+ any of the small video formats or actual HTML files. Here's what I used:
+ </p>
+
+ <p>
+      <code>wget --recursive --accept "*.MP4" http://10.5.5.9:8080/</code>
+ </p>
+
+ <p>
+        This tells <code>wget</code> to download all of the files at the GoPro's
+        address recursively, skipping any that don't have the MP4 extension. Now
+ I've got a directory tree with all of my videos in it. And the best part
+ is I didn't have to install the dinky GoPro app on my laptop. Hopefully
+ this helps if you're looking for an easy way to migrate lots of footage
+ without manually clicking through the web interface or installing
+ additional software. The only downside is if you're moving a whole lot
+ of footage, it's not nearly as quick as just moving files off the SD
+ card. So I'd shoot for using the adapter to read off the card first and
+ only use this if that's not an option, such as when the camera is
+ mounted and you don't want to move it.
+ </p>
+
+ <p>Some things I would like to change/add:</p>
+
+ <ul>
+ <li>
+ Download all image files as well; should be easy, just another
+ <code>--accept</code>
+ </li>
+ <li>Initiate parallel downloads</li>
+ <li>
+ Clean up the directory afterwards so I just have one level of depth
+ </li>
+ </ul>
+
+ <p>
+ I could probably write a quick and dirty shell script to do all of this
+ for me but I use the camera so infrequently that it's probably not even
+ worth it.
+ </p>
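+
+      <p>
+        If I ever did write it, it would probably look something like this
+        rough, untested sketch (the download directory and the cleanup steps
+        are just guesses at how I'd want it to behave):
+      </p>
+
+      <pre><code>
+#!/bin/sh
+# untested sketch of the quick-and-dirty offload script described above
+mkdir -p gopro-dump
+cd gopro-dump || exit 1
+# grab every video and photo; the HTML index pages get fetched but not kept
+wget --recursive --no-parent --accept "*.MP4,*.JPG" http://10.5.5.9:8080/
+# flatten the directory tree wget creates so everything sits at one level
+find . -type f \( -name "*.MP4" -o -name "*.JPG" \) -exec mv {} . \;
+# drop the now-empty directories left behind
+find . -type d -empty -delete
+      </code></pre>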
+ </article>
+ </body>
+</html>
diff --git a/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.md b/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.md
deleted file mode 100644
index 89ebe97..0000000
--- a/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.md
+++ /dev/null
@@ -1,64 +0,0 @@
----
-permalink: "/posts/{{categories}}/{{slug}}"
-title: The Best Way to Transfer GoPro Files with Linux
-categories:
- - technology
-tags:
- - gopro
- - camera
- - video
- - download
- - linux
- - wireless
-published_date: "2019-07-04 21:54:49 +0000"
-layout: post.liquid
-is_draft: false
-excerpt_separator: "\n\n\n"
----
-
-Transferring files off of most cameras to a Linux computer isn't all that
-difficult. The exception is my GoPro Hero 4 Black. For 4th of July week I took
-a bunch of video with the GoPro, approximately 20 MP4 files, about 3GB each.
-The annoying thing about the GoPro's USB interface is you need additional
-software to download everything through the cable. The camera doesn't just show
-up as a USB filesystem that you can mount. The GoPro does have a micro-SD card
-but I was away from home and didn't have any dongles or adapters. Both of these
-solutions also mean taking the camera out of its waterproof case and off of its
-mount. So here's what I did.
-
-GoPro cameras, after the Hero 3, can open up an ad-hoc wireless network that
-lets you browse the GoPro's onboard files through an HTTP server. This means
-you can open your browser and scroll through the files on the camera at an
-intranet address, `10.5.5.9`, and download them one by one by clicking every
-link on every page. If you have a lot of footage on there it kinda sucks. So, I
-opened up the manual for `wget`. I'm sure you could get really fancy with some
-of the options but the only thing I cared about was downloading every single
-MP4 video off of the camera, automatically. I did not want to download any of
-the small video formats or actual HTML files. Here's what I used:
-
-```sh
-wget --recursive --accept "*.MP4" http://10.5.5.9:8080/
-```
-
-This tells `wget` to download all of the files at the GoPro's address
-recursively and skips any that don't have the MP4 extension. Now I've got a
-directory tree with all of my videos in it. And the best part is I didn't have
-to install the dinky GoPro app on my laptop. Hopefully this helps if you're
-looking for an easy way to migrate lots of footage without manually clicking
-through the web interface or installing additional software. The only downside
-is if you're moving a whole lot of footage, it's not nearly as quick as just
-moving files off the SD card. So I'd shoot for using the adapter to read off
-the card first and only use this if that's not an option, such as when the
-camera is mounted and you don't want to move it.
-
-Some things I would like to change/add:
-
-- Download all image files as well; should be easy, just another `--accept`
-- Initiate parallel downloads
-- Clean up the directory afterwards so I just have one level of depth
-
-I could probably write a quick and dirty shell script to do all of this for me
-but I use the camera so infrequently that it's probably not even worth it.
-
-
-
diff --git a/unix/2019-09-28-my-preferred-method-for-data-recovery.html b/unix/2019-09-28-my-preferred-method-for-data-recovery.html
new file mode 100644
index 0000000..07d9bff
--- /dev/null
+++ b/unix/2019-09-28-my-preferred-method-for-data-recovery.html
@@ -0,0 +1,286 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="How I Do Data Recovery" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ How I Do Data Recovery</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>How I Do Data Recovery</h1>
+
+ <p>
+ This week Amy plugged in her flash drive to discover that there were no
+ files on it. Weeks before there had been dozens of large cuts of footage
+ that she needed to edit down for work. Hours of recordings were
+ seemingly gone. And the most annoying part was the drive had worked
+ perfectly on several other occasions. Just not now that the footage was
+        actually needed, of course. Initially it looked like everything had been
+        wiped clean; however, both Amy's Mac and her PC thought the drive was
+        half full. Its overall capacity was 64GB but it showed only about 36GB
+ free. So there still had to be data on there if we could find the right
+ tool to salvage it.
+ </p>
+
+ <p>
+ Luckily this wasn't the first time I had to recover accidentally (or
+ magically) deleted files. I had previously done so with some success at
+ my tech support job, for some college friends, and for my in-laws'
+ retired laptops. So I had a pretty clear idea of what to expect. The
+ only trick was finding a tool that knew what files it was looking for.
+ The camera that took the video clips was a Sony and apparently they
+ record into <code>m2ts</code> files, which are kind of a unique format
+ in that they only show up on Blu-Ray discs and Sony camcorders. Enter my
+ favorite two tools for dealing with potentially-destroyed data:
+ <code>ddrescue</code> and <code>photorec</code>.
+ </p>
+
+ <h2>DDRescue</h2>
+
+ <p>
+ <code>ddrescue</code> is a godsend of a tool. If you've ever used
+ <code>dd</code> before, forget about it. Use <code>ddrescue</code>. You
+ might as well <code>alias dd=ddrescue</code> because it's that great. By
+ default it has a plethora of additional options, displays the progress
+ as it works, recovers and retries in the event of I/O errors, and does
+ everything that good old <code>dd</code> can do. It's particularly good
+ at protecting partitions or disks that have been corrupted or damaged by
+ rescuing undamaged portions first. Oh, and have you ever had to cancel a
+ <code>dd</code> operation? Did I mention that <code>ddrescue</code> can
+ pause and resume operations? It's that good.
+ </p>
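+
+      <p>
+        As a quick illustration of the pause-and-resume trick: give
+        <code>ddrescue</code> a mapfile as its third argument and you can
+        interrupt the copy whenever you like, then pick it back up just by
+        re-running the same command. A rough sketch (device and file names are
+        placeholders):
+      </p>
+
+      <pre><code>
+# first pass: copy the easy blocks, recording progress in the mapfile
+ddrescue -d /dev/sdX1 rescue.img rescue.map
+# later (or after a Ctrl-C): the same command resumes where it left off,
+# then retries the bad areas up to three times
+ddrescue -d -r3 /dev/sdX1 rescue.img rescue.map
+      </code></pre>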
+
+ <h2>PhotoRec</h2>
+
+ <p>
+ <code>photorec</code> is probably the best missing file recovery tool
+ I've ever used in my entire life. And I've used quite a few. I've never
+        gotten results as good from other tools like Recuva et al. as I have
+        from <code>photorec</code>. And <code>photorec</code> isn't just for
+ photos, it can recover documents (a la Office suite), music, images,
+ config files, and videos (including the very odd
+ <code>m2ts</code> format!). The other nice thing is
+ <code>photorec</code> will work on just about any source. It's also free
+ software which makes me wonder why there are like $50 recovery tools for
+ Windows that look super sketchy.
+ </p>
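+
+      <p>
+        By "just about any source" I mean you can point it at a whole device, a
+        single partition, or a disk image you made earlier, for example (the
+        device names here are just examples):
+      </p>
+
+      <pre><code>
+sudo photorec /dev/sdb      # an entire flash drive
+sudo photorec /dev/sdb1     # just one partition
+sudo photorec backup.dd     # a dd/ddrescue image file
+      </code></pre>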
+
+ <h2>In Practice</h2>
+
+ <p>
+ So here's what I did to get Amy's files back. Luckily she didn't write
+ anything out to the drive afterward so the chances (I thought) were
+ pretty good that I would get <em>something</em> back. The first thing I
+ always do is make a full image of whatever media I'm trying to recover
+ from. I do this for a couple of reasons. First of all it's a backup. If
+ something goes wrong during recovery I don't have to worry about the
+ original, fragile media being damaged or wiped. Furthermore, I can work
+ with multiple copies at a time. If it's a large image that means
+ multiple tools or even multiple PCs can work on it at once. It's also
+ just plain faster working off a disk image than a measly flash drive. So
+ I used <code>ddrescue</code> to make an image of Amy's drive.
+ </p>
+
+ <pre><code>
+$ sudo ddrescue /dev/sdb1 amy-lexar.dd
+GNU ddrescue 1.24
+Press Ctrl-C to interrupt
+ ipos: 54198 kB, non-trimmed: 0 B, current rate: 7864 kB/s
+ opos: 54198 kB, non-scraped: 0 B, average rate: 18066 kB/s
+non-tried: 63967 MB, bad-sector: 0 B, error rate: 0 B/s
+ rescued: 54198 kB, bad areas: 0, run time: 2s
+pct rescued: 0.08%, read errors: 0, remaining time: 59m
+ time since last successful read: n/a
+Copying non-tried blocks... Pass 1 (forwards)
+ </code></pre>
+
+ <p>
+ The result was a very large partition image that I could fearlessly play
+ around with.
+ </p>
+
+      <pre><code>
+$ ll amy-lexar.dd
+-rw-r--r-- 1 root root 60G Sep 24 02:45 amy-lexar.dd
+      </code></pre>
+
+ <p>
+ Then I could run <code>photorec</code> on the image. This brings up a
+ TUI with all of the listed media that I can try and recover from.
+ </p>
+
+ <pre><code>
+$ sudo photorec amy-lexar.dd
+
+PhotoRec 7.0, Data Recovery Utility, April 2015
+Christophe GRENIER <grenier@cgsecurity.org>
+http://www.cgsecurity.org
+
+ PhotoRec is free software, and
+comes with ABSOLUTELY NO WARRANTY.
+
+Select a media (use Arrow keys, then press Enter):
+>Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
+
+>[Proceed ] [ Quit ]
+
+Note:
+Disk capacity must be correctly detected for a successful recovery.
+If a disk listed above has incorrect size, check HD jumper settings, BIOS
+detection, and install the latest OS patches and disk drivers.
+ </code></pre>
+
+ <p>
+ After hitting proceed <code>photorec</code> asks if you want to scan
+ just a particular partition or the whole disk (if you made a whole disk
+ image). I can usually get away with just selecting the partition I know
+ the files are on and starting a search.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+Christophe GRENIER <grenier@cgsecurity.org>
+http://www.cgsecurity.org
+
+Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
+
+ Partition Start End Size in sectors
+ Unknown 0 0 1 7783 139 4 125042656 [Whole disk]
+> P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
+
+>[ Search ] [Options ] [File Opt] [ Quit ]
+ Start file recovery
+ </code></pre>
+
+ <p>
+ Then <code>photorec</code> asks a couple of questions about the
+ formatting of the media. It can usually figure them out all by itself so
+ I just use the default options unless it's way out in left field.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+Christophe GRENIER <grenier@cgsecurity.org>
+http://www.cgsecurity.org
+
+ P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
+
+To recover lost files, PhotoRec need to know the filesystem type where the
+file were stored:
+ [ ext2/ext3 ] ext2/ext3/ext4 filesystem
+>[ Other ] FAT/NTFS/HFS+/ReiserFS/...
+ </code></pre>
+
+ <p>
+ Now this menu is where I don't just go with the default path.
+ <code>photorec</code> will offer to search just unallocated space or the
+ entire partition. I always go for the whole partition here; sometimes
+ I'll get back files that I didn't really care about but more often than
+ not I end up rescuing more data this way. In this scenario searching
+ just unallocated space found no files at all. So I told
+ <code>photorec</code> to search everything.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+Christophe GRENIER <grenier@cgsecurity.org>
+http://www.cgsecurity.org
+
+ P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
+
+
+Please choose if all space need to be analysed:
+ [ Free ] Scan for file from FAT32 unallocated space only
+>[ Whole ] Extract files from whole partition
+ </code></pre>
+
+ <p>
+ Now it'll ask where you want to save any files it finds. I threw them
+ all into a directory under home that I could zip up and send to Amy's
+ Mac later.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+
+Please select a destination to save the recovered files.
+Do not choose to write the files to the same partition they were stored on.
+Keys: Arrow keys to select another directory
+ C when the destination is correct
+ Q to quit
+Directory /home/adam
+ drwx------ 1000 1000 4096 28-Sep-2019 12:10 .
+ drwxr-xr-x 0 0 4096 26-Jan-2019 15:32 ..
+>drwxr-xr-x 1000 1000 4096 28-Sep-2019 12:10 amy-lexar-recovery
+ </code></pre>
+
+ <p>
+        And then just press <code>C</code>. <code>photorec</code> will start
+ copying all of the files it finds into that directory. It reports what
+ kinds of files it found and how many it was able to locate. I was able
+        to recover all of Amy's lost footage this way, along with some
+ straggler files that had been on the drive at one point. This has worked
+ for me many times in the past, both on newer devices like flash drives
+ and on super old, sketchy IDE hard drives. I probably won't ever pay for
+ data recovery unless a drive has been physically damaged in some way. In
+ other words, this software works great for me and I don't foresee the
+ need for anything else out there. It's simple to use and is typically
+ pretty reliable.
+ </p>
+ </article>
+ </body>
+</html>
diff --git a/unix/2019-09-28-my-preferred-method-for-data-recovery.md b/unix/2019-09-28-my-preferred-method-for-data-recovery.md
deleted file mode 100644
index 14aaab4..0000000
--- a/unix/2019-09-28-my-preferred-method-for-data-recovery.md
+++ /dev/null
@@ -1,203 +0,0 @@
----
-permalink: "/posts/{{categories}}/{{slug}}"
-title: My Preferred Method for Data Recovery
-categories:
- - life
-tags:
- - data
- - file
- - photo
- - recovery
- - linux
- - photorec
-excerpt_separator: "\n\n\n"
-published_date: "2019-09-28 20:20:05 +0000"
-layout: post.liquid
-is_draft: false
----
-This week Amy plugged in her flash drive to discover that there were no files
-on it. Weeks before there had been dozens of large cuts of footage that she
-needed to edit down for work. Hours of recordings were seemingly gone. And the
-most annoying part was the drive had worked perfectly on several other
-occasions. Just not now that the footage was actually needed of course.
-Initially it looked like everything had been wiped clean, however both Amy's
-Mac and her PC thought the drive was half full. It's overall capacity was 64GB
-but it showed only about 36GB free. So there still had to be data on there if
-we could find the right tool to salvage it.
-
-Luckily this wasn't the first time I had to recover accidentally (or magically)
-deleted files. I had previously done so with some success at my tech support
-job, for some college friends, and for my in-laws' retired laptops. So I had a
-pretty clear idea of what to expect. The only trick was finding a tool that
-knew what files it was looking for. The camera that took the video clips was a
-Sony and apparently they record into `m2ts` files, which are kind of a unique
-format in that they only show up on Blu-Ray discs and Sony camcorders. Enter my
-favorite two tools for dealing with potentially-destroyed data: `ddrescue` and
-`photorec`.
-
-## DDRescue
-
-`ddrescue` is a godsend of a tool. If you've ever used `dd` before, forget
-about it. Use `ddrescue`. You might as well `alias dd=ddrescue` because it's
-that great. By default it has a plethora of additional options, displays the
-progress as it works, recovers and retries in the event of I/O errors, and does
-everything that good old `dd` can do. It's particularly good at protecting
-partitions or disks that have been corrupted or damaged by rescuing undamaged
-portions first. Oh, and have you ever had to cancel a `dd` operation? Did I
-mention that `ddrescue` can pause and resume operations? It's that good.
-
-## PhotoRec
-
-`photorec` is probably the best missing file recovery tool I've ever used in my
-entire life. And I've used quite a few. I've never had as good results as I've
-had with `photorec` with other tools like Recuva et. al. And `photorec` isn't
-just for photos, it can recover documents (a la Office suite), music, images,
-config files, and videos (including the very odd `m2ts` format!). The other
-nice thing is `photorec` will work on just about any source. It's also free
-software which makes me wonder why there are like $50 recovery tools for
-Windows that look super sketchy.
-
-## In Practice
-
-So here's what I did to get Amy's files back. Luckily she didn't write anything
-out to the drive afterward so the chances (I thought) were pretty good that I
-would get *something* back. The first thing I always do is make a full image of
-whatever media I'm trying to recover from. I do this for a couple of reasons.
-First of all it's a backup. If something goes wrong during recovery I don't
-have to worry about the original, fragile media being damaged or wiped.
-Furthermore, I can work with multiple copies at a time. If it's a large image
-that means multiple tools or even multiple PCs can work on it at once. It's
-also just plain faster working off a disk image than a measly flash drive. So I
-used `ddrescue` to make an image of Amy's drive.
-
-```shell
-$ sudo ddrescue /dev/sdb1 amy-lexar.dd
-GNU ddrescue 1.24
-Press Ctrl-C to interrupt
- ipos: 54198 kB, non-trimmed: 0 B, current rate: 7864 kB/s
- opos: 54198 kB, non-scraped: 0 B, average rate: 18066 kB/s
-non-tried: 63967 MB, bad-sector: 0 B, error rate: 0 B/s
- rescued: 54198 kB, bad areas: 0, run time: 2s
-pct rescued: 0.08%, read errors: 0, remaining time: 59m
- time since last successful read: n/a
-Copying non-tried blocks... Pass 1 (forwards)
-```
-
-The result was a very large partition image that I could fearlessly play around
-with.
-
-```shell
-$ ll amy-lexar.dd
--rw-r--r-- 1 root root 60G Sep 24 02:45 amy-lexar.dd
-```
-
-Then I could run `photorec` on the image. This brings up a TUI with all of the
-listed media that I can try and recover from.
-
-```shell
-$ sudo photorec amy-lexar.dd
-
-PhotoRec 7.0, Data Recovery Utility, April 2015
-Christophe GRENIER <grenier@cgsecurity.org>
-http://www.cgsecurity.org
-
- PhotoRec is free software, and
-comes with ABSOLUTELY NO WARRANTY.
-
-Select a media (use Arrow keys, then press Enter):
->Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
-
->[Proceed ] [ Quit ]
-
-Note:
-Disk capacity must be correctly detected for a successful recovery.
-If a disk listed above has incorrect size, check HD jumper settings, BIOS
-detection, and install the latest OS patches and disk drivers.
-```
-
-After hitting proceed `photorec` asks if you want to scan just a particular
-partition or the whole disk (if you made a whole disk image). I can usually get
-away with just selecting the partition I know the files are on and starting a
-search.
-
-```shell
-PhotoRec 7.0, Data Recovery Utility, April 2015
-Christophe GRENIER <grenier@cgsecurity.org>
-http://www.cgsecurity.org
-
-Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
-
- Partition Start End Size in sectors
- Unknown 0 0 1 7783 139 4 125042656 [Whole disk]
-> P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
-
->[ Search ] [Options ] [File Opt] [ Quit ]
- Start file recovery
-```
-
-Then `photorec` asks a couple of questions about the formatting of the media.
-It can usually figure them out all by itself so I just use the default options
-unless it's way out in left field.
-
-```shell
-PhotoRec 7.0, Data Recovery Utility, April 2015
-Christophe GRENIER <grenier@cgsecurity.org>
-http://www.cgsecurity.org
-
- P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
-
-To recover lost files, PhotoRec need to know the filesystem type where the
-file were stored:
- [ ext2/ext3 ] ext2/ext3/ext4 filesystem
->[ Other ] FAT/NTFS/HFS+/ReiserFS/...
-```
-
-Now this menu is where I don't just go with the default path. `photorec` will
-offer to search just unallocated space or the entire partition. I always go for
-the whole partition here; sometimes I'll get back files that I didn't really
-care about but more often than not I end up rescuing more data this way. In
-this scenario searching just unallocated space found no files at all. So I told
-`photorec` to search everything.
-
-```shell
-PhotoRec 7.0, Data Recovery Utility, April 2015
-Christophe GRENIER <grenier@cgsecurity.org>
-http://www.cgsecurity.org
-
- P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
-
-
-Please choose if all space need to be analysed:
- [ Free ] Scan for file from FAT32 unallocated space only
->[ Whole ] Extract files from whole partition
-```
-
-Now it'll ask where you want to save any files it finds. I threw them all into
-a directory under home that I could zip up and send to Amy's Mac later.
-
-```shell
-PhotoRec 7.0, Data Recovery Utility, April 2015
-
-Please select a destination to save the recovered files.
-Do not choose to write the files to the same partition they were stored on.
-Keys: Arrow keys to select another directory
- C when the destination is correct
- Q to quit
-Directory /home/adam
- drwx------ 1000 1000 4096 28-Sep-2019 12:10 .
- drwxr-xr-x 0 0 4096 26-Jan-2019 15:32 ..
->drwxr-xr-x 1000 1000 4096 28-Sep-2019 12:10 amy-lexar-recovery
-```
-
-And then just press `C`. `photrec` will start copying all of the files it finds
-into that directory. It reports what kinds of files it found and how many it
-was able to locate. I was able to recover all of Amy's lost footage this way,
-past, along with some straggler files that had been on the drive at one point.
-This has worked for me many times in the past, both on newer devices like flash
-drives and on super old, sketchy IDE hard drives. I probably won't ever pay for
-data recovery unless a drive has been physically damaged in some way. In other
-words, this software works great for me and I don't foresee the need for
-anything else out there. It's simple to use and is typically pretty reliable.
-
-
-
diff --git a/unix/2020-07-26-now-this-is-a-minimal-install.html b/unix/2020-07-26-now-this-is-a-minimal-install.html
new file mode 100644
index 0000000..07a398a
--- /dev/null
+++ b/unix/2020-07-26-now-this-is-a-minimal-install.html
@@ -0,0 +1,107 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="Now This is a Minimal Install!" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ Now This is a Minimal Install!</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>Now This is a Minimal Install!</h1>
+
+ <p>
+        I just got done configuring Poudriere on FreeBSD 12.1-RELEASE. The
+        awesome thing about it is that it allows you to configure and maintain your
+ own package repository. All of the ports and their dependencies are
+ built from source with personalized options. That means that I can
+ maintain my own repo of just the packages I need with just the
+ compile-time options I need. For example, for the Nvidia driver set I
+        disabled all Wayland-related flags. I use Xorg, so there was no need to
+ have that functionality built in.
+ </p>
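+
+      <p>
+        The rough shape of the setup, from memory (the jail name, ports tree
+        name, and package list path are whatever you choose; mine differ):
+      </p>
+
+      <pre><code>
+# create a build jail and a ports tree
+poudriere jail -c -j 12_1amd64 -v 12.1-RELEASE
+poudriere ports -c -p default
+# list the origins I actually want, e.g. the Nvidia driver
+echo "x11/nvidia-driver" >> /usr/local/etc/poudriere.d/pkglist
+# set compile-time options (this is where Wayland got switched off), then build
+poudriere options -j 12_1amd64 -p default -f /usr/local/etc/poudriere.d/pkglist
+poudriere bulk -j 12_1amd64 -p default -f /usr/local/etc/poudriere.d/pkglist
+      </code></pre>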
+
+ <p>
+ Compile times are pretty long but I hope to change that by upgrading my
+ home server to FreeBSD as well (from Ubuntu Server). Then I can
+ configure poudriere to serve up a ports tree and my own pkg repo from
+        there. The server is a lot faster than my laptop, so it will build
+        packages much more quickly, and I'll be able to use those packages on
+        both the server and my laptop and any jails I have running. Jails (and ZFS) also make
+ poudriere really cool to use as all of the building is done inside a
+ jail. When the time comes I can just remove the jail and poudriere ports
+ tree from my laptop and update pkg to point to my web server.
+ </p>
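+
+      <p>
+        Pointing <code>pkg</code> at the new repo should then just be a small
+        config file, something like this sketch; the hostname and path depend
+        on how the web server exposes poudriere's package directory:
+      </p>
+
+      <pre><code>
+# /usr/local/etc/pkg/repos/poudriere.conf (hostname and path are made up)
+poudriere: {
+  url: "http://builder.home.lan/packages/12_1amd64-default",
+  enabled: yes
+}
+# disable the official repo so packages don't get mixed
+FreeBSD: { enabled: no }
+      </code></pre>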
+
+ <p>
+ This is, as I understand it, the sane way to do package management in
+ FreeBSD. The binary package repo is basically the ports tree
+        pre-assembled with default options. Sometimes those packages are
+        compiled without functionality that most users don't need but that you
+        might want. In those situations, you're forced to use ports. The
+        trouble is you're not really supposed to mix ports and binary packages.
+        The reason, again as I understand it, is that ports are updated more
+        frequently. So binary packages and ports can have different dependency
+        versions, which can sometimes break compatibility on an upgrade. Most
+        FreeBSD users recommend installing everything with ports (which is just
+        a <code>make install</code> inside the local ports tree), but then you
+        lose the package management features
+ that come with pkg. Poudriere lets you kind of do both by creating your
+ "own personal binary repo" out of a list of preconfigured, pre-built
+ ports.
+ </p>
+
+ <p>FreeBSD rocks.</p>
+ </article>
+ </body>
+</html>
diff --git a/unix/2020-07-26-now-this-is-a-minimal-install.md b/unix/2020-07-26-now-this-is-a-minimal-install.md
deleted file mode 100644
index 9936ad4..0000000
--- a/unix/2020-07-26-now-this-is-a-minimal-install.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-permalink: "/posts/{{categories}}/{{slug}}"
-title: Now This is a Minimal Install!
-categories:
- - technology
- - unix
-tags:
- - FreeBSD
- - packages
- - poudriere
- - saneness
-excerpt_separator: "\n\n\n"
-published_date: "2020-07-26 15:21:13 +0000"
-layout: post.liquid
-is_draft: false
----
-Now this is a minimal install!
-
-I just got done configuring Poudriere on Freebsd 12.1-RELEASE. The awesome
-thing about it is it allows you to configure and maintain your own package
-repository. All of the ports and their dependencies are built from source with
-personalized options. That means that I can maintain my own repo of just the
-packages I need with just the compile-time options I need. For example, for the
-Nvidia driver set I disabled all Wayland related flags. I use Xorg so there was
-no need to have that functionality built in.
-
-Compile times are pretty long but I hope to change that by upgrading my home
-server to FreeBSD as well (from Ubuntu Server). Then I can configure poudriere
-to serve up a ports tree and my own pkg repo from there. The server is a lot
-faster than my laptop and will build packages way faster, and I'll be able to
-use those packages on both the server and my laptop and any jails I have
-running. Jails (and ZFS) also make poudriere really cool to use as all of the
-building is done inside a jail. When the time comes I can just remove the jail
-and poudriere ports tree from my laptop and update pkg to point to my web
-server.
-
-This is, as I understand it, the sane way to do package management in FreeBSD.
-The binary package repo is basically the ports tree pre-assembled with default
-options. Sometimes those packages are compiled without functionality that most
-users don't need. In those situations, you're forced to use ports. The trouble
-is you're not really supposed to mix ports and binary packages. The reason,
-again as I understand it, is because ports are updated more frequently. So
-binary packages and ports can have different dependency versions, which can
-sometimes break compatibility on an upgrade. Most FreeBSD users recommend
-installing everything with ports (which is just a make install inside the local
-tree) but then you lose the package management features that come with pkg.
-Poudriere lets you kind of do both by creating your "own personal binary repo"
-out of a list of preconfigured, pre-built ports.
-
-FreeBSD rocks.
-
-
-
-
diff --git a/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html b/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html
new file mode 100644
index 0000000..9adc833
--- /dev/null
+++ b/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html
@@ -0,0 +1,79 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="All PDF Readers/Editors Suck" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ All PDF Readers/Editors Suck</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>All PDF Readers/Editors Suck</h1>
+
+ <p>All PDF editors/mergers/tools either:</p>
+
+ <ol>
+ <li>Cost hundreds of dollars</li>
+ <li>Require uploading private documents to a server for processing</li>
+ <li>Leave watermarks or charge you for "pro" features</li>
+ <li>Are blatant malware</li>
+ </ol>
+
+ <p>
+        Except mupdf and mutool, which are absolutely amazing and which I
+        can't live without.
+ </p>
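+
+      <p>
+        For example, merging a couple of scanned documents is a one-liner (from
+        memory; check <code>mutool merge</code>'s help for the exact options in
+        your version):
+      </p>
+
+      <pre><code>
+mutool merge -o combined.pdf scan1.pdf scan2.pdf
+      </code></pre>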
+ </article>
+ </body>
+</html>
diff --git a/unix/dear-god-why-are-pdf-editors-such-an-ordeal.md b/unix/dear-god-why-are-pdf-editors-such-an-ordeal.md
deleted file mode 100644
index 5d7e5f4..0000000
--- a/unix/dear-god-why-are-pdf-editors-such-an-ordeal.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-permalink: "/posts/{{categories}}/{{slug}}"
-title: Dear God Why Are PDF Editors Such an Ordeal?
-categories: []
-tags: []
-excerpt_separator: "\n\n\n"
-layout: post.liquid
-is_draft: true
----
-
-All PDF editors/mergers/tools either:
-
-1. Cost hundreds of dollars
-1. Require uploading private documents to a server for processing
-1. Leave watermarks or charge you for "pro" features
-1. Are blatant malware
-
-Except mupdf and mutool, which are absolutely amazing.
diff --git a/unix/the-quest-for-automated-bluray-ripping.md b/unix/the-quest-for-automated-bluray-ripping.md
deleted file mode 100644
index 6e20c14..0000000
--- a/unix/the-quest-for-automated-bluray-ripping.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-excerpt_separator: "\n\n\n"
-permalink: "/posts/{{categories}}/{{slug}}"
-title: The Quest for Automated BluRay Ripping
-categories: []
-tags: []
-layout: post.liquid
-is_draft: true
----
--> Start here <-