authorAdam T. Carpenter <atc@53hor.net>2020-11-29 08:53:22 -0500
committerAdam T. Carpenter <atc@53hor.net>2020-11-29 08:53:22 -0500
commitaa6ade8c1bc51bc8f379442bb00710438d1385fd (patch)
treed0a99de1f2ceec24c6fe15d61661f96a33a05d3b /posts/unix
parentdaa21252743400c83f9d46c7fdefc00058553d7f (diff)
download53hor-aa6ade8c1bc51bc8f379442bb00710438d1385fd.tar.xz
53hor-aa6ade8c1bc51bc8f379442bb00710438d1385fd.zip
organized posts, added profile, started makefile
Diffstat (limited to 'posts/unix')
-rw-r--r--posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html133
-rw-r--r--posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html282
-rw-r--r--posts/unix/2020-07-26-now-this-is-a-minimal-install.html107
-rw-r--r--posts/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html79
4 files changed, 601 insertions, 0 deletions
diff --git a/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html b/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html
new file mode 100644
index 0000000..15c776f
--- /dev/null
+++ b/posts/unix/2019-07-04-the-best-way-to-transfer-gopro-files-with-linux.html
@@ -0,0 +1,133 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta
+ property="og:title"
+ content="Offloading GoPro Footage the Easy Way!"
+ />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ Offloading GoPro Footage the Easy Way!</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>Offloading GoPro Footage the Easy Way!</h1>
+
+ <p>
+ Transferring files off of most cameras to a Linux computer isn't all
+ that difficult. The exception is my GoPro Hero 4 Black. For 4th of July
+ week I took a bunch of video with the GoPro, approximately 20 MP4 files,
+        about 3GB each. The annoying thing about the GoPro's USB interface is
+        that you need additional software to download everything through the cable.
+ The camera doesn't just show up as a USB filesystem that you can mount.
+ The GoPro does have a micro-SD card but I was away from home and didn't
+ have any dongles or adapters. Both of these solutions also mean taking
+ the camera out of its waterproof case and off of its mount. So here's
+ what I did.
+ </p>
+
+ <p>
+ GoPro cameras, after the Hero 3, can open up an ad-hoc wireless network
+ that lets you browse the GoPro's onboard files through an HTTP server.
+ This means you can open your browser and scroll through the files on the
+ camera at an intranet address, <code>10.5.5.9</code>, and download them
+ one by one by clicking every link on every page. If you have a lot of
+ footage on there it kinda sucks. So, I opened up the manual for
+ <code>wget</code>. I'm sure you could get really fancy with some of the
+ options but the only thing I cared about was downloading every single
+ MP4 video off of the camera, automatically. I did not want to download
+ any of the small video formats or actual HTML files. Here's what I used:
+ </p>
+
+ <pre>
+ <code>
+$ wget --recursive --accept "*.MP4" http://10.5.5.9:8080/
+ </code>
+ </pre>
+
+ <p>
+ This tells <code>wget</code> to download all of the files at the GoPro's
+ address recursively and skips any that don't have the MP4 extension. Now
+ I've got a directory tree with all of my videos in it. And the best part
+ is I didn't have to install the dinky GoPro app on my laptop. Hopefully
+ this helps if you're looking for an easy way to migrate lots of footage
+ without manually clicking through the web interface or installing
+ additional software. The only downside is if you're moving a whole lot
+ of footage, it's not nearly as quick as just moving files off the SD
+ card. So I'd shoot for using the adapter to read off the card first and
+ only use this if that's not an option, such as when the camera is
+ mounted and you don't want to move it.
+ </p>
+
+ <p>Some things I would like to change/add:</p>
+
+ <ul>
+ <li>
+ Download all image files as well; should be easy, just another
+ <code>--accept</code>
+ </li>
+ <li>Initiate parallel downloads</li>
+ <li>
+ Clean up the directory afterwards so I just have one level of depth
+ </li>
+ </ul>
+
+ <p>
+ I could probably write a quick and dirty shell script to do all of this
+ for me but I use the camera so infrequently that it's probably not even
+ worth it.
+ </p>
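That quick-and-dirty script might look something like this. A hedged sketch only: the camera address is the one from the post, but `DEST` is an arbitrary name I've made up for the example, and the download line is commented out so the sketch is safe to run without the camera's wifi up. `--accept` takes a comma-separated list (which covers the image files) and `--no-directories` flattens everything to a single level.

```shell
#!/bin/sh
set -eu
# Sketch of the offload script described above. GOPRO_URL comes from the
# post; DEST is a made-up example name.
GOPRO_URL="http://10.5.5.9:8080/"
DEST="gopro-footage"

mkdir -p "$DEST"

# --accept takes a comma-separated list, so photos come along for free;
# --no-directories skips the recursive tree and leaves one flat directory.
# (Uncomment once the camera's ad-hoc network is actually up.)
# wget --recursive --no-directories --directory-prefix="$DEST" \
#     --accept "*.MP4,*.JPG" "$GOPRO_URL"

echo "files will land in $DEST"
```

wget itself downloads sequentially, so true parallel downloads would need something else on top, e.g. one wget per subdirectory or a downloader like aria2.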
+ </article>
+ </body>
+</html>
diff --git a/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html b/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html
new file mode 100644
index 0000000..b63ba5a
--- /dev/null
+++ b/posts/unix/2019-09-28-my-preferred-method-for-data-recovery.html
@@ -0,0 +1,282 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="How I Do Data Recovery" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ How I Do Data Recovery</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>How I Do Data Recovery</h1>
+
+ <p>
+ This week Amy plugged in her flash drive to discover that there were no
+ files on it. Weeks before there had been dozens of large cuts of footage
+ that she needed to edit down for work. Hours of recordings were
+ seemingly gone. And the most annoying part was the drive had worked
+ perfectly on several other occasions. Just not now that the footage was
+ actually needed of course. Initially it looked like everything had been
+ wiped clean, however both Amy's Mac and her PC thought the drive was
+ half full. It's overall capacity was 64GB but it showed only about 36GB
+ free. So there still had to be data on there if we could find the right
+ tool to salvage it.
+ </p>
+
+ <p>
+ Luckily this wasn't the first time I had to recover accidentally (or
+ magically) deleted files. I had previously done so with some success at
+ my tech support job, for some college friends, and for my in-laws'
+ retired laptops. So I had a pretty clear idea of what to expect. The
+ only trick was finding a tool that knew what files it was looking for.
+ The camera that took the video clips was a Sony and apparently they
+ record into <code>m2ts</code> files, which are kind of a unique format
+ in that they only show up on Blu-Ray discs and Sony camcorders. Enter my
+ favorite two tools for dealing with potentially-destroyed data:
+ <code>ddrescue</code> and <code>photorec</code>.
+ </p>
+
+ <h2>DDRescue</h2>
+
+ <p>
+ <code>ddrescue</code> is a godsend of a tool. If you've ever used
+ <code>dd</code> before, forget about it. Use <code>ddrescue</code>. You
+ might as well <code>alias dd=ddrescue</code> because it's that great. By
+        default it displays progress as it works, recovers and retries in the
+        event of I/O errors, ships a plethora of extra options, and does
+        everything that good old <code>dd</code> can do. It's particularly good
+ at protecting partitions or disks that have been corrupted or damaged by
+ rescuing undamaged portions first. Oh, and have you ever had to cancel a
+ <code>dd</code> operation? Did I mention that <code>ddrescue</code> can
+ pause and resume operations? It's that good.
+ </p>
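The pause-and-resume trick comes from the mapfile, an optional third argument that records which blocks have already been rescued. A hedged example with made-up device and file names (not the command I ran below):

```shell
# The mapfile tracks rescued blocks, so interrupting with Ctrl-C and
# rerunning the identical command resumes where the last run stopped.
ddrescue /dev/sdb1 rescue.img rescue.map
ddrescue /dev/sdb1 rescue.img rescue.map   # same command picks up the copy
```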
+
+ <h2>PhotoRec</h2>
+
+ <p>
+ <code>photorec</code> is probably the best missing file recovery tool
+ I've ever used in my entire life. And I've used quite a few. I've never
+        gotten results as good from other tools like Recuva et al. as I have
+        from <code>photorec</code>. And <code>photorec</code> isn't just for
+        photos: it can recover documents (a la the Office suite), music, images,
+        config files, and videos (including the very odd
+        <code>m2ts</code> format!). The other nice thing is that
+        <code>photorec</code> will work on just about any source. It's also free
+        software, which makes me wonder why there are sketchy-looking $50
+        recovery tools for Windows at all.
+ </p>
+
+ <h2>In Practice</h2>
+
+ <p>
+ So here's what I did to get Amy's files back. Luckily she didn't write
+ anything out to the drive afterward so the chances (I thought) were
+ pretty good that I would get <em>something</em> back. The first thing I
+ always do is make a full image of whatever media I'm trying to recover
+ from. I do this for a couple of reasons. First of all it's a backup. If
+ something goes wrong during recovery I don't have to worry about the
+ original, fragile media being damaged or wiped. Furthermore, I can work
+ with multiple copies at a time. If it's a large image that means
+ multiple tools or even multiple PCs can work on it at once. It's also
+ just plain faster working off a disk image than a measly flash drive. So
+ I used <code>ddrescue</code> to make an image of Amy's drive.
+ </p>
+
+ <pre><code>
+$ sudo ddrescue /dev/sdb1 amy-lexar.dd
+GNU ddrescue 1.24
+Press Ctrl-C to interrupt
+ ipos: 54198 kB, non-trimmed: 0 B, current rate: 7864 kB/s
+ opos: 54198 kB, non-scraped: 0 B, average rate: 18066 kB/s
+non-tried: 63967 MB, bad-sector: 0 B, error rate: 0 B/s
+ rescued: 54198 kB, bad areas: 0, run time: 2s
+pct rescued: 0.08%, read errors: 0, remaining time: 59m
+ time since last successful read: n/a
+Copying non-tried blocks... Pass 1 (forwards)
+ </code></pre>
+
+ <p>
+ The result was a very large partition image that I could fearlessly play
+ around with.
+ </p>
+
+ <pre>
+ <code>
+$ ll amy-lexar.dd
+-rw-r--r-- 1 root root 60G Sep 24 02:45 amy-lexar.dd
+ </code>
+ </pre>
+
+ <p>
+ Then I could run <code>photorec</code> on the image. This brings up a
+ TUI with all of the listed media that I can try and recover from.
+ </p>
+
+ <pre><code>
+$ sudo photorec amy-lexar.dd
+
+PhotoRec 7.0, Data Recovery Utility, April 2015
+http://www.cgsecurity.org
+
+ PhotoRec is free software, and
+comes with ABSOLUTELY NO WARRANTY.
+
+Select a media (use Arrow keys, then press Enter):
+>Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
+
+>[Proceed ] [ Quit ]
+
+Note:
+Disk capacity must be correctly detected for a successful recovery.
+If a disk listed above has incorrect size, check HD jumper settings, BIOS
+detection, and install the latest OS patches and disk drivers.
+ </code></pre>
+
+ <p>
+ After hitting proceed <code>photorec</code> asks if you want to scan
+ just a particular partition or the whole disk (if you made a whole disk
+ image). I can usually get away with just selecting the partition I know
+ the files are on and starting a search.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+http://www.cgsecurity.org
+
+Disk amy-lexar.dd - 64 GB / 59 GiB (RO)
+
+ Partition Start End Size in sectors
+ Unknown 0 0 1 7783 139 4 125042656 [Whole disk]
+> P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
+
+>[ Search ] [Options ] [File Opt] [ Quit ]
+ Start file recovery
+ </code></pre>
+
+ <p>
+ Then <code>photorec</code> asks a couple of questions about the
+ formatting of the media. It can usually figure them out all by itself so
+ I just use the default options unless it's way out in left field.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+http://www.cgsecurity.org
+
+ P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
+
+To recover lost files, PhotoRec need to know the filesystem type where the
+file were stored:
+ [ ext2/ext3 ] ext2/ext3/ext4 filesystem
+>[ Other ] FAT/NTFS/HFS+/ReiserFS/...
+ </code></pre>
+
+ <p>
+ Now this menu is where I don't just go with the default path.
+ <code>photorec</code> will offer to search just unallocated space or the
+ entire partition. I always go for the whole partition here; sometimes
+ I'll get back files that I didn't really care about but more often than
+ not I end up rescuing more data this way. In this scenario searching
+ just unallocated space found no files at all. So I told
+ <code>photorec</code> to search everything.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+http://www.cgsecurity.org
+
+ P FAT32 0 0 1 7783 139 4 125042656 [NO NAME]
+
+
+Please choose if all space need to be analysed:
+ [ Free ] Scan for file from FAT32 unallocated space only
+>[ Whole ] Extract files from whole partition
+ </code></pre>
+
+ <p>
+ Now it'll ask where you want to save any files it finds. I threw them
+ all into a directory under home that I could zip up and send to Amy's
+ Mac later.
+ </p>
+
+ <pre><code>
+PhotoRec 7.0, Data Recovery Utility, April 2015
+
+Please select a destination to save the recovered files.
+Do not choose to write the files to the same partition they were stored on.
+Keys: Arrow keys to select another directory
+ C when the destination is correct
+ Q to quit
+Directory /home/adam
+ drwx------ 1000 1000 4096 28-Sep-2019 12:10 .
+ drwxr-xr-x 0 0 4096 26-Jan-2019 15:32 ..
+>drwxr-xr-x 1000 1000 4096 28-Sep-2019 12:10 amy-lexar-recovery
+ </code></pre>
+
+ <p>
+        And then just press <code>C</code>. <code>photorec</code> will start
+        copying all of the files it finds into that directory. It reports what
+        kinds of files it found and how many it was able to locate. I was able
+        to recover all of Amy's lost footage this way, along with some
+        straggler files that had been on the drive at one point. This has worked
+ for me many times in the past, both on newer devices like flash drives
+ and on super old, sketchy IDE hard drives. I probably won't ever pay for
+ data recovery unless a drive has been physically damaged in some way. In
+ other words, this software works great for me and I don't foresee the
+ need for anything else out there. It's simple to use and is typically
+ pretty reliable.
+ </p>
+ </article>
+ </body>
+</html>
diff --git a/posts/unix/2020-07-26-now-this-is-a-minimal-install.html b/posts/unix/2020-07-26-now-this-is-a-minimal-install.html
new file mode 100644
index 0000000..07a398a
--- /dev/null
+++ b/posts/unix/2020-07-26-now-this-is-a-minimal-install.html
@@ -0,0 +1,107 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="Now This is a Minimal Install!" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ Now This is a Minimal Install!</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>Now This is a Minimal Install!</h1>
+
+ <p>
+        I just got done configuring Poudriere on FreeBSD 12.1-RELEASE. The
+ awesome thing about it is it allows you to configure and maintain your
+ own package repository. All of the ports and their dependencies are
+ built from source with personalized options. That means that I can
+ maintain my own repo of just the packages I need with just the
+ compile-time options I need. For example, for the Nvidia driver set I
+ disabled all Wayland related flags. I use Xorg so there was no need to
+ have that functionality built in.
+ </p>
+
+ <p>
+ Compile times are pretty long but I hope to change that by upgrading my
+ home server to FreeBSD as well (from Ubuntu Server). Then I can
+ configure poudriere to serve up a ports tree and my own pkg repo from
+ there. The server is a lot faster than my laptop and will build packages
+ way faster, and I'll be able to use those packages on both the server
+ and my laptop and any jails I have running. Jails (and ZFS) also make
+ poudriere really cool to use as all of the building is done inside a
+ jail. When the time comes I can just remove the jail and poudriere ports
+ tree from my laptop and update pkg to point to my web server.
+ </p>
+
+ <p>
+ This is, as I understand it, the sane way to do package management in
+ FreeBSD. The binary package repo is basically the ports tree
+ pre-assembled with default options. Sometimes those packages are
+ compiled without functionality that most users don't need. In those
+ situations, you're forced to use ports. The trouble is you're not really
+ supposed to mix ports and binary packages. The reason, again as I
+ understand it, is because ports are updated more frequently. So binary
+ packages and ports can have different dependency versions, which can
+ sometimes break compatibility on an upgrade. Most FreeBSD users
+        recommend installing everything with ports (which is just a
+        <code>make install</code> inside the local tree) but then you lose the
+        package management features
+ that come with pkg. Poudriere lets you kind of do both by creating your
+ "own personal binary repo" out of a list of preconfigured, pre-built
+ ports.
+ </p>
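The moving parts, roughly as I set them up. Treat this as an outline rather than a recipe; the jail name, tree name, and package list file are arbitrary examples:

```shell
# Rough poudriere workflow (names are made-up examples)
poudriere jail -c -j rel121 -v 12.1-RELEASE   # create a build jail from a release
poudriere ports -c -p local                   # check out a ports tree
poudriere options -p local x11/nvidia-driver  # pick compile-time options per port
poudriere bulk -j rel121 -p local -f pkglist  # build every port listed in pkglist
```

After a bulk run the resulting repo can be served over HTTP and pointed at from <code>pkg</code>'s repo config, which is the "own personal binary repo" part.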
+
+ <p>FreeBSD rocks.</p>
+ </article>
+ </body>
+</html>
diff --git a/posts/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html b/posts/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html
new file mode 100644
index 0000000..9adc833
--- /dev/null
+++ b/posts/unix/dear-god-why-are-pdf-editors-such-an-ordeal.html
@@ -0,0 +1,79 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <link rel="stylesheet" href="/includes/stylesheet.css" />
+ <meta charset="utf-8" />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <meta
+ property="og:description"
+ content="The World Wide Web pages of Adam Carpenter"
+ />
+ <meta property="og:image" content="/includes/images/logo_diag.png" />
+ <meta property="og:site_name" content="53hor.net" />
+ <meta property="og:title" content="All PDF Readers/Editors Suck" />
+ <meta property="og:type" content="website" />
+ <meta property="og:url" content="https://www.53hor.net" />
+ <title>53hornet ➙ All PDF Readers/Editors Suck</title>
+ </head>
+
+ <body>
+ <nav>
+ <ul>
+ <li>
+ <a href="/">
+ <img src="/includes/icons/home-roof.svg" />
+ Home
+ </a>
+ </li>
+ <li>
+ <a href="/about.html">
+ <img src="/includes/icons/information-variant.svg" />
+ About
+ </a>
+ </li>
+ <li>
+ <a href="/software.html">
+ <img src="/includes/icons/git.svg" />
+ Software
+ </a>
+ </li>
+ <li>
+ <a href="/hosted.html">
+ <img src="/includes/icons/desktop-tower.svg" />
+ Hosted
+ </a>
+ </li>
+ <li>
+ <a type="application/rss+xml" href="/rss.xml">
+ <img src="/includes/icons/rss.svg" />
+ RSS
+ </a>
+ </li>
+ <li>
+ <a href="/contact.html">
+ <img src="/includes/icons/at.svg" />
+ Contact
+ </a>
+ </li>
+ </ul>
+ </nav>
+
+ <article>
+ <h1>All PDF Readers/Editors Suck</h1>
+
+ <p>All PDF editors/mergers/tools either:</p>
+
+ <ol>
+ <li>Cost hundreds of dollars</li>
+ <li>Require uploading private documents to a server for processing</li>
+ <li>Leave watermarks or charge you for "pro" features</li>
+ <li>Are blatant malware</li>
+ </ol>
+
+ <p>
+        Except <code>mupdf</code> and <code>mutool</code>, which are absolutely
+        amazing and which I can't live without.
+ </p>
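As a taste of why: merging and splitting, the things the watermark-happy tools charge for, are one-liners with <code>mutool</code>. File names here are placeholders:

```shell
# Merge two PDFs into one (mutool ships with mupdf)
mutool merge -o combined.pdf chapter1.pdf chapter2.pdf
# Each input can be followed by a page range, so extraction is the same command
mutool merge -o excerpt.pdf combined.pdf 1-3
```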
+ </article>
+ </body>
+</html>