author    53hornet <atc@53hor.net>  2021-10-12 21:06:30 -0400
committer 53hornet <atc@53hor.net>  2021-10-12 21:06:30 -0400
commit    7ecb8930235e6e7ab35cae08d257a4dbf406fa7b (patch)
tree      166efa2da5b0dc88b13ac9bb39f7449d2f266947
parent    57a447473b78cb3134988f301490612087d14570 (diff)
download  53hor-7ecb8930235e6e7ab35cae08d257a4dbf406fa7b.tar.xz
          53hor-7ecb8930235e6e7ab35cae08d257a4dbf406fa7b.zip
add angelshark post, add tarpit draft
-rw-r--r--  cv.html  16
-rw-r--r--  drafts/2021-10-06-write-your-own-ssh-tarpit-in-rust-with-async-std.php  192
-rw-r--r--  drafts/altrustic-angelshark.php  119
-rw-r--r--  index.php  1
-rw-r--r--  posts/2021-10-12-altruistic-angelshark.php  127
-rwxr-xr-x  serve.sh  4
6 files changed, 332 insertions, 127 deletions
diff --git a/cv.html b/cv.html
index c9ecd92..f4f743d 100644
--- a/cv.html
+++ b/cv.html
@@ -96,13 +96,15 @@
<em>Automatic Data Processing (June 2018-August 2021)</em>
<ul>
<li>
- Authored <em>Altruistic Angelshark</em>, an Avaya Communication Manager
- automation daemon, to ease friction caused by existing, interactive, and
- fragile tools and processes. This tool was used to save the company
- rougly half a million dollars per year by enabling unused license
- cleanup with little operator input. It was deemed appropriately useful
- to release as free and open source software. It was written in Rust and
- operates over the SSH2 library using an undocumented Avaya protocol.
+ Authored
+ <a href="https://github.com/adpllc/altruistic-angelshark"><em>Altruistic Angelshark</em></a>,
+ an Avaya Communication Manager automation
+ daemon, to ease friction caused by existing, interactive, and fragile
+ tools and processes. This tool was used to save the company roughly half a
+ million dollars per year by enabling unused license cleanup with little
+ operator input. It was deemed appropriately useful to release as free and
+ open source software. It was written in Rust and operates over the SSH2
+ library using an undocumented Avaya protocol.
</li>
<li>
Co-authored an authentication/authorization API to specifically serve
diff --git a/drafts/2021-10-06-write-your-own-ssh-tarpit-in-rust-with-async-std.php b/drafts/2021-10-06-write-your-own-ssh-tarpit-in-rust-with-async-std.php
new file mode 100644
index 0000000..26824da
--- /dev/null
+++ b/drafts/2021-10-06-write-your-own-ssh-tarpit-in-rust-with-async-std.php
@@ -0,0 +1,192 @@
+<?php
+$title = "Write Your Own SSH Tarpit in Rust with async-std";
+if (isset($early) && $early) {
+ return;
+}
+include($_SERVER['DOCUMENT_ROOT'] . '/includes/head.php');
+?>
+
+<p class="description">
+ A software tarpit is simple and fun. Long story short, it's sort of a reverse denial-of-service attack. It usually works by inserting an intentional, arbitrary delay in responding to malicious clients, thus wasting their time and resources. It's kind of like those YouTubers who purposely joke around with phone scammers as long as possible to waste their time and have fun. I recently learned about <a href="https://github.com/skeeto/endlessh"><code>endlessh</code></a>, an SSH tarpit. I decided it would be a fun exercise to use Rust's <code>async-std</code> library to write an SSH tarpit of my own, with my own personal <em>flair</em>. If you want to learn more about <code>endlessh</code> or SSH tarpits I highly recommend reading <a href="https://nullprogram.com/blog/2019/03/22/">this blog post</a> by the <code>endlessh</code> author.
+</p>
+
+<h2>Goals</h2>
+
+<p>
+ So what does an SSH tarpit need to do? Basically, an SSH tarpit is a TCP listener that very slowly writes a never-ending SSH banner back to the client. This all happens pre-login, so no fancy SSH libraries are needed. Really, the program just needs to write bytes [slowly] back to the incoming connection and never offer up the chance to authenticate (see the blog post above to learn more). So now I'm getting a to-do list together. My program needs to...
+</p>
+
+<ol>
+ <li>Listen for incoming TCP connections on a given port</li>
+ <li>Upon receiving an incoming connection, write some data to the TCP stream</li>
+ <li>Wait a little bit (say 1-10 seconds) and then repeat step 2</li>
+ <li>Upon client disconnect, continue working on other connections and listening for new ones</li>
+ <li>Handle many of these connections at the same time with few resources</li>
+</ol>
+
+<p>
+ Additionally, to spruce things up and have more fun, I'm adding the following requirements:
+</p>
+
+<ul>
+ <li>The listening port should be user-configurable (to make debugging easier)</li>
+ <li>Client connection and disconnection events are logged, so I can see who is stuck in the pit and when</li>
+ <li><em>The data written back to the client should be user-configurable</em></li>
+ <li>The data is written one word at a time</li>
+</ul>
+
+<p>
+ That's right. It's probably a waste of resources, but I want to be able to feed the attacker whatever information I want. For example, I want to be able to pipe a Unix fortune across the network to the attacker very slowly. I want to relish the knowledge that if the attacker manually inspects the data coming down the pipe, they'll see their fortune.
+</p>
+
+<h2>Implementation</h2>
+
+<p>
+ I've chosen Rust and the recently-stabilized <a href="https://async.rs/"><code>async-std</code></a> library for a variety of reasons. First, I like Rust. It's a good language. Second, <code>async-std</code> offers an asynchronous task-switching runtime much like Python's <code>asyncio</code>, or even JavaScript's <code>Promise</code> (even though things work a little differently under the hood). Long story short, it allows for easy concurrent programming with few resources and high performance. Everything I could want, really. It also comes with a variety of familiar standard library APIs reimplemented in asynchronous form.
+</p>
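+
+<p>
+  To get a feel for that model before diving in, here's a tiny standalone example (nothing to do with the tarpit itself) that spawns two tasks which sleep concurrently on the same runtime. It uses the <code>attributes</code> feature for the <code>#[async_std::main]</code> macro, which the Cargo manifest below turns on.
+
+<pre>
+<code>
+use async_std::task;
+use std::time::Duration;
+
+#[async_std::main]
+async fn main() {
+    // Both tasks sleep at the same time on the async runtime,
+    // so the whole program finishes in about one second.
+    let a = task::spawn(async {
+        task::sleep(Duration::from_secs(1)).await;
+        println!("task a done");
+    });
+    let b = task::spawn(async {
+        task::sleep(Duration::from_secs(1)).await;
+        println!("task b done");
+    });
+    a.await;
+    b.await;
+}
+</code>
+</pre>
+</p>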
+
+<p>
+ For starters, I need to include <code>async-std</code> as a dependency. I'm also going to pull in <a href="https://docs.rs/anyhow/1.0.44/anyhow/"><code>anyhow</code></a> for friendlier error handling.
+</p>
+
+<p>
+ <a href="https://git.53hor.net/53hornet/fortune-pit/src/commit/4fa2c2a100dba6b264cc232963f6d797ffb671cf/Cargo.toml">Cargo.toml:</a>
+<pre>
+<code>
+[package]
+name = "fortune-pit"
+version = "0.1.0"
+edition = "2018"
+
+[dependencies.async-std]
+version = "1"
+features = ["attributes"]
+
+[dependencies.anyhow]
+version = "1"
+</code>
+</pre>
+</p>
+
+<p>
+ Now I've gotta actually write some code. I already said I wanted to listen for incoming TCP connections, so I'll start with a <code>TcpListener</code>.
+
+<pre>
+<code>
+use anyhow::Result;
+use async_std::net::TcpListener;
+
+#[async_std::main]
+async fn main() -> Result<()> {
+ let listener = TcpListener::bind("0.0.0.0:2222").await?;
+ Ok(())
+}
+</code>
+</pre>
+
+Better yet, I'll make the bind port configurable at runtime. In a live environment, it would be best to run this on port 22. But for testing purposes, as a non-root user, I'll run it on 2222. These changes will let me do that at will. I'll also print a nicer error message if anything goes wrong here.
+</p>
+
+<p>
+<pre>
+<code>
+use anyhow::{Context, Result};
+use async_std::net::{Ipv4Addr, SocketAddrV4, TcpListener};
+use std::env;
+
+#[async_std::main]
+async fn main() -> Result<()> {
+ let listener = TcpListener::bind(read_addr()?)
+ .await
+ .with_context(|| "tarpit: failed to bind TCP listener")?;
+ Ok(())
+}
+
+fn read_addr() -> Result<SocketAddrV4> {
+ let port = env::args()
+ .nth(1)
+ .map(|arg| arg.parse())
+ .unwrap_or(Ok(22))
+ .with_context(|| "tarpit: failed to parse bind port")?;
+
+ Ok(SocketAddrV4::new(Ipv4Addr::new(0, 0, 0, 0), port))
+}
+</code>
+</pre>
+
+There. Slightly more complicated, but much more convenient.
+</p>
+
+<p>
+ Now I need to actually do something with my <code>TcpListener</code>. I'll loop over incoming connections and open streams for all of them. Then I'll write something to those streams and close them.
+
+<pre>
+<code>
+use anyhow::{Context, Result};
+use async_std::{
+ io::prelude::*,
+ net::{Ipv4Addr, SocketAddrV4, TcpListener},
+ prelude::*,
+};
+use std::env;
+
+#[async_std::main]
+async fn main() -> Result<()> {
+ let listener = TcpListener::bind(read_addr()?)
+ .await
+ .with_context(|| "tarpit: failed to bind TCP listener")?;
+
+ let mut incoming = listener.incoming();
+ while let Some(stream) = incoming.next().await {
+ let mut stream = stream?;
+ writeln!(stream, "Here's your fortune!").await?;
+ }
+
+ Ok(())
+}
+
+fn read_addr() -> Result<SocketAddrV4> {
+ let port = env::args()
+ .nth(1)
+ .map(|arg| arg.parse())
+ .unwrap_or(Ok(22))
+ .with_context(|| "tarpit: failed to parse bind port")?;
+
+ Ok(SocketAddrV4::new(Ipv4Addr::new(0, 0, 0, 0), port))
+}
+</code>
+</pre>
+
+And it works! If I <code>cargo run -- 2222</code>, passing my port as the first argument, I get a very basic TCP server. I can test it with <code>nc(1)</code>.
+
+<pre>
+<code>
+$ nc localhost 2222
+Here's your fortune!
+^C
+</code>
+</pre>
+</p>
+
+<p>
+ Great, but it's not there yet. First of all, my client immediately receives a response. I want to keep feeding information to the client over and over until it gives up. I don't want it to time out waiting for nothing. I also want the user to pick what gets written.
+</p>
+
+<p>
+  So let's read in whatever the user wants to send along on STDIN. I thought this would be better than reading a file because it's really easy to send files to STDIN with pipes or redirection. It also lets you pipe in the output of commands like <code>fortune(6)</code>.
+</p>
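+
+<p>
+  Here's a rough sketch of how that could fit together. The five-second delay, the word-by-word splitting, and the <code>tarpit</code> helper are arbitrary choices of mine for now; the exact numbers and logging are easy to tweak later.
+
+<pre>
+<code>
+use anyhow::{bail, Context, Result};
+use async_std::{
+    io::prelude::*,
+    net::{Ipv4Addr, SocketAddrV4, TcpListener, TcpStream},
+    prelude::*,
+    task,
+};
+use std::{env, time::Duration};
+
+#[async_std::main]
+async fn main() -> Result<()> {
+    // Slurp the banner text from STDIN once; every connection shares it.
+    let mut banner = String::new();
+    let mut stdin = async_std::io::stdin();
+    stdin
+        .read_to_string(&mut banner)
+        .await
+        .with_context(|| "tarpit: failed to read banner from STDIN")?;
+
+    if banner.trim().is_empty() {
+        bail!("tarpit: nothing to send; pipe some text in on STDIN");
+    }
+
+    let listener = TcpListener::bind(read_addr()?)
+        .await
+        .with_context(|| "tarpit: failed to bind TCP listener")?;
+
+    let mut incoming = listener.incoming();
+    while let Some(stream) = incoming.next().await {
+        // One task per connection, so a stuck client never blocks new ones.
+        task::spawn(tarpit(stream?, banner.clone()));
+    }
+
+    Ok(())
+}
+
+// Drip the banner back one word at a time until the client hangs up.
+async fn tarpit(mut stream: TcpStream, banner: String) {
+    let peer = stream.peer_addr();
+    eprintln!("tarpit: connect: {:?}", peer);
+
+    'connected: loop {
+        for word in banner.split_whitespace() {
+            // Placeholder delay: five seconds between words.
+            task::sleep(Duration::from_secs(5)).await;
+            let chunk = format!("{} ", word);
+            if stream.write_all(chunk.as_bytes()).await.is_err() {
+                break 'connected; // client gave up
+            }
+        }
+    }
+
+    eprintln!("tarpit: disconnect: {:?}", peer);
+}
+
+fn read_addr() -> Result<SocketAddrV4> {
+    let port = env::args()
+        .nth(1)
+        .map(|arg| arg.parse())
+        .unwrap_or(Ok(22))
+        .with_context(|| "tarpit: failed to parse bind port")?;
+
+    Ok(SocketAddrV4::new(Ipv4Addr::new(0, 0, 0, 0), port))
+}
+</code>
+</pre>
+
+Each connection now gets its own task, so one slow client never blocks the listener, and every visitor gets drip-fed whatever came in on STDIN until they hang up.
+</p>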
+
+<h2>Improvements</h2>
+
+<p>
+ The SSH RFC specifies that lines written for the banner cannot exceed 255 characters including carriage return and line feed. This program imposes no such restriction, although it would probably be fairly easy to break the incoming text up into 255-character lines and write those out one word at a time.
+</p>
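+
+<p>
+  One way to respect that limit might be to wrap the incoming text into capped lines up front and drip those out instead. Here's a quick sketch of such a helper (not something the program does today; 253 characters of text leaves room for the trailing CRLF):
+
+<pre>
+<code>
+/// Wrap text into banner lines no longer than 255 bytes,
+/// including the trailing CRLF (so 253 bytes of text per line).
+fn wrap_banner_lines(text: &str) -> Vec<String> {
+    const MAX_TEXT: usize = 253;
+    let mut lines = Vec::new();
+    let mut current = String::new();
+
+    for word in text.split_whitespace() {
+        // The +1 accounts for the space joining the word onto the line.
+        if !current.is_empty() && current.len() + 1 + word.len() > MAX_TEXT {
+            current.push_str("\r\n");
+            lines.push(std::mem::take(&mut current));
+        }
+        if !current.is_empty() {
+            current.push(' ');
+        }
+        // A single word longer than the limit would still overflow here;
+        // a real version would have to split it mid-word.
+        current.push_str(word);
+    }
+
+    if !current.is_empty() {
+        current.push_str("\r\n");
+        lines.push(current);
+    }
+
+    lines
+}
+</code>
+</pre>
+</p>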
+
+<p>
+ It's probably also worth noting that if any line begins with <code>SSH-</code>, it's interpreted as an SSH protocol version identifier, causing the client and server to begin their authentication dance. Normally, unless the data that follows is valid, this results in a disconnect. In a worst-case scenario, if you pipe in a completely legitimate protocol version, the client will actually go on to attempt authentication.
+</p>
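+
+<p>
+  If that ever became a problem, one cheap guard might be to make sure no outgoing line starts in the first column with <code>SSH-</code>, for example by indenting it. Something along these lines (again, not something the program currently does):
+
+<pre>
+<code>
+/// Keep a banner line from being mistaken for an SSH version string.
+/// Clients only treat lines that begin with "SSH-" as version
+/// identifiers, so a leading space is enough to defang them.
+fn defang(line: &str) -> String {
+    if line.starts_with("SSH-") {
+        format!(" {}", line)
+    } else {
+        line.to_string()
+    }
+}
+</code>
+</pre>
+</p>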
diff --git a/drafts/altrustic-angelshark.php b/drafts/altrustic-angelshark.php
deleted file mode 100644
index a5e6ce6..0000000
--- a/drafts/altrustic-angelshark.php
+++ /dev/null
@@ -1,119 +0,0 @@
-<p class="description">
- I finally got the opportunity to release a long-term project from work online
- as free and open-source software. Woohoo! It's called Altruistic Angelshark
- and here's what it's about.
-</p>
-
-<h2>Background</h2>
-
-<p>
- Altruistic Angelshark is an automation library, command-line application, and
- RESTful web service for more easily performing CRUD operations on Avaya
- Communication Managers. If you're not from the world of voice/telephony IT,
- you should probably know the ACMs use a precambrian mainframe interactive
- terminal interface to create, modify, and remove stations, extensions,
- hunt-groups, etc. Your only other choice is a graphical, also interactive,
- user interface that can perform bulk operations and generate reports in the
- form of Excel spreadsheets.
-</p>
-
-<h2>Impetus</h2>
-
-<p>
- Neither the interactive, VT220-style terminal nor the GUI application (Avaya
- Site Administration) are very easy to work with. When I say that I mean
- they're not easy to automate over. At our company, it's important for us to be
- able to automatically clean up old stations in bulk, as an example. Or
- sometimes we want to automatically run audits on possible malformed data and
- even fix those entries when they're found. The terminal requires a user's
- input to constantly paginate through data, or tab through form fields to
- insert a new entity. The GUI is worse. While it does let you automatically run
- certain reports to extract useful data, it has to be running to do it. That
- means a dedicated Windows server, and not a headless one. It's also pretty
- crash-prone.
-</p>
-
-<p>
- Another issue with the tools available to us is we run more than one ACM at
- our company (think > 10). The interactive terminal and GUI are only good for
- running one operation or "command" on one ACM at a time. This makes it
- annoying to, for example, search for a particular user's extension on all of
- the ACMs if you don't know which one it's on. In a worst-case scenario, that
- means logging into 11 different servers and running the same command.
-</p>
-
-<h2>OSSI: The Dark Magic Enabler</h2>
-
-<p>
- Long story short, there's a proprietary protocol called OSSI. This protocol is
- the backbone of ASA, the GUI app. It's a terminal interface, but it's for
- machine reading and writing instead of interactive use. If you packet sniff
- ASA you can learn a lot about how it's getting its data and the different
- things you can use the OSSI terminal for. However, no documentation was made
- available to us on OSSI because Avaya guards it pretty closely. So, I had to
- improvise. We already had some knowledgable architects who knew a trick or
- two. There were also a couple of useful forums available online that gave us
- more information. Eventually I figured out enough to replicate 99% of what we
- were doing in ASA. Maybe more on that another time.
-</p>
-
-<h2>Architecting Angelshark, Altrusitically</h2>
-
-<p>
- Angelshark can do anything ASA can do by reading and writing to an OSSI
- terminal over an SSH connection. It works on top of the SSH2 library, so you
- don't need an SSH client installed. It can also run commands on one or more
- ACMs at a time. All of your logins are stored in a config file.
-</p>
-
-<p>
- Angelshark's functionality is exposed in a couple of different methods. First,
- there's a command-line interface, which lets you write commands on STDIN, runs
- them on the ACMs they're intended for, and then writes their output on STDOUT.
- It can also automatically parse the output into JSON, CSV, or TSV. This is
- nice for quickly building Excel reports like ASA.
-</p>
-
-<p>
- Even better though (I think) is the Angelshark Daemon. This runs Angelshark as
- an HTTP service, listening for incoming requests. You can submit the same
- kinds of commands and which ACMs you want them to run on as JSON POSTs. It
- feeds those to a runner, which executes commands just like the CLI app. It
- then feeds the results back to you over JSON. You can use this functionality
- from the browser, in a script with <code>cURL</code>, or from pretty much
- anything that can make HTTP requests. The logins are all in a config file
- local to Angelshark and commands are queued. That way multiple users don't
- have to share passwords and won't overload the ACMs. To speed things up,
- commands on separate ACMs are run in parallel. That way your output only takes
- as long as the longest running ACM.
-</p>
-
-<p>
- There are a couple of relevant projects that I found online which do something
- similar but don't take it quite as far. They either send OSSI commands from a
- file over an SSH client with <code>expect</code>-like functionality or
- automate over an interactive terminal.
-</p>
-
-<p>
- This second method was something that I was also interested in implementing.
- In ASA you can dump terminal screenshots for an entire command's output. Some
- of my team members had tools in place that relied on this. A third sub-project
- of Altruistic Angelshark is <code>asa-cli</code>, and it does exactly that.
- For any <code>list</code> or <code>display</code> command, it emulates a VT220
- terminal and dumps all pages of output to STDOUT.
-</p>
-
-<h2>Free and Open Source</h2>
-
-<p>
- I got to thinking that this would be a great project to let other developers
- worldwide use. If it's helpful to us it's got to be helpful to someone else
- out there. I pitched the idea of open-sourcing Angelshark to management and
- they were a mix of enthusiastic and indifferent. "Sure, sounds fine," they
- said as long as nothing internal to the company be divulged with the project.
-</p>
-
-<h2>Tooling and Development</h2>
-
-<p>Rust, libssh, HTTP, etc.</p>
diff --git a/index.php b/index.php
index c507320..b272302 100644
--- a/index.php
+++ b/index.php
@@ -26,5 +26,6 @@ include('./includes/head.php');
}
}
?>
+ <li><a href="https://www.53hor.net/posts/?C=M&O=A">All posts...</a></li>
</ul>
</article>
diff --git a/posts/2021-10-12-altruistic-angelshark.php b/posts/2021-10-12-altruistic-angelshark.php
new file mode 100644
index 0000000..b0602ef
--- /dev/null
+++ b/posts/2021-10-12-altruistic-angelshark.php
@@ -0,0 +1,127 @@
+<?php
+$title = "Altruistic Angelshark";
+if (isset($early) && $early) {
+ return;
+}
+include($_SERVER['DOCUMENT_ROOT'] . '/includes/head.php');
+?>
+
+<p class="description">
+ I finally got the opportunity to release a long-term project from work online
+ as free and open-source software. Woohoo! It's called Altruistic Angelshark
+ and here's what it's about.
+
+ TL;DR: <a href="https://github.com/adpllc/altruistic-angelshark">Here's the GitHub repo for a Communication Manager automation suite</a>.
+</p>
+
+<h2>Background</h2>
+
+<p>
+ Altruistic Angelshark is an automation library, command-line application, and
+ HTTP web service for more easily performing CRUD operations on Avaya
+ Communication Managers (ACMs). If you're not from the world of voice/telephony
+ IT, you should probably know that ACMs use a Precambrian mainframe interactive
+ terminal interface to create, modify, and remove stations, extensions,
+ hunt-groups, etc. Your only other choice is a graphical, also interactive,
+ user interface that can perform bulk operations and generate reports in the
+ form of Excel spreadsheets.
+</p>
+
+<h2>Impetus</h2>
+
+<p>
+ Neither the interactive, VT220-style terminal nor the GUI application (Avaya
+ Site Administration) are very easy to work with. When I say that I mean
+ they're not easy to automate over. At our company, it's important for us to be
+ able to automatically clean up old stations in bulk, as an example. Or
+ sometimes we want to automatically run audits on possible malformed data and
+ even fix those entries when they're found. The terminal requires a user's
+ input to constantly paginate through data, or tab through form fields to
+ insert a new entity. The GUI is worse. While it does let you automatically run
+ certain reports to extract useful data, it has to be running to do it. That
+ means a dedicated Windows server, and not a headless one. It's also prone to crashing.
+</p>
+
+<p>
+ Another issue with the tools available to us is we run more than one ACM at
+ our company (about a dozen). The interactive terminal and GUI are only good for
+ running one operation or "command" on one ACM at a time. This makes it
+ annoying to, for example, search for a particular user's extension on all of
+ the ACMs if you don't know which one it's on. In a worst-case scenario, that
+ means logging into 12 different servers and running the same command.
+</p>
+
+<h2>OSSI: The Dark Magic Enabler</h2>
+
+<p>
+ Long story short, there's a proprietary protocol called OSSI. This protocol is
+ the backbone of ASA, the GUI app. It's a terminal interface, but it's for
+ machine reading and writing instead of interactive use. If you packet sniff
+ ASA you can learn a lot about how it's getting its data and the different
+ things you can use the OSSI terminal for. However, no documentation was made
+ available to us on OSSI because Avaya guards it pretty closely. So, I had to
+ improvise. We already had some knowledgeable architects who knew a trick or
+ two. There were also a couple of useful forums available online that gave us
+ more information. Eventually I figured out enough to replicate 99% of what we
+ were doing in ASA. Maybe more on that another time.
+</p>
+
+<h2>Architecting Angelshark, Altruistically</h2>
+
+<p>
+ Angelshark can do anything ASA can do by reading and writing to an OSSI
+ terminal over an SSH connection. It works on top of the SSH2 library, so you
+ don't need an SSH client installed. It can also run commands on one or more
+ ACMs at a time. All of your logins are stored in a config file.
+</p>
+
+<p>
+ Angelshark's functionality is exposed in a couple of different forms. First,
+ there's a command-line interface: you write commands on STDIN, it runs them on
+ the ACMs they're intended for, and it writes their output on STDOUT. It can
+ also automatically parse the output into JSON, CSV, or TSV, which is nice for
+ quickly building Excel reports like ASA does.
+</p>
+
+<p>
+ Even better though (I think) is the Angelshark Daemon. This runs Angelshark
+ as an HTTP service, listening for incoming requests. You can submit the same
+ kinds of commands, along with the ACMs you want them to run on, as JSON POSTs.
+ It feeds those to a runner, which executes commands just like the CLI app, and
+ then feeds the results back to you as JSON. You can use this functionality from
+ the browser, in a script with <code>curl(1)</code>, or from pretty much
+ anything that can make HTTP requests. The logins are all in a config file with
+ the same format as the CLI. To speed things up, commands on separate ACMs are
+ run in parallel. That way your output only takes as long as the longest running
+ ACM.
+</p>
+
+<p>
+ There are a couple of relevant projects that I found online which do something
+ similar but don't take it quite as far. They either send OSSI commands from a
+ file over an SSH client with <code>expect</code>-like functionality or
+ automate over an interactive terminal.
+</p>
+
+<h2>Free and Open Source</h2>
+
+<p>
+ I got to thinking that this would be a great project to let other developers
+ worldwide use. If it's helpful to us it's got to be helpful to someone else
+ out there. I pitched the idea of open-sourcing Angelshark to management and
+ they were a mix of enthusiastic and indifferent. "Sure, sounds fine," they
+ said, as long as nothing internal to the company was divulged with the project.
+</p>
+
+<p>
+ <a href="https://github.com/adpllc/altruistic-angelshark">Here's the GitHub repo where Angelshark lives.</a>
+</p>
+
+<p>
+ I'm pretty proud of this project. It's not a very large project, but that's
+ one of the things I'm proud of. It went through a couple of iterations, and
+ with each I actually ended up removing code that wasn't being used to solve the
+ primary problem. It's the first time I've gotten the chance to release an
+ internal project as FOSS and I'm super stoked. Hopefully someone else will
+ benefit from its release. Maybe I'll delve into its inner workings in another post sometime.
+</p>
diff --git a/serve.sh b/serve.sh
index e7f2b4c..007e036 100755
--- a/serve.sh
+++ b/serve.sh
@@ -1,2 +1,4 @@
#!/bin/sh
-php -S localhost:8000
+php -S localhost:8000 &
+[ -n "$1" ] && firefox "localhost:8000/$1" &
+wait