<h1>Use .NET 5.0 To Be Rid of Windows Server!</h1>
<p>
I work at a three-letter company. The majority of our development teams are
heavily invested in Java and .NET Framework. Our team has traditionally shied
away from .NET Framework for what I'm guessing is a pretty common reason: it
only runs on Windows. For many shops this probably isn't a big deal but for
our team it was a bit of an annoyance. Most of our software gets wrapped up in
containers and deployed to an on-site Kubernetes cluster. However, we have one
small application that needs to talk to a proprietary vendor library written
in .NET Framework. That means 99% of our stuff is deployed using the same
methods and in the same location; the other 1% was this one little app, and it
caused us quite a few headaches. Luckily, .NET 5.0 came out recently. It marks
the convergence of .NET Core and .NET Framework into a single, cross-platform
runtime, and this is one of the few times in my career that a single upgrade
has magically solved everything.
</p>
<p>
The library in question is an older collection of .NET Framework .dlls (not
.NET Standard, which can be referenced from both Framework and Core projects)
that lets us communicate with the vendor's systems. It's basically a
platform-dependent SDK. Our only requirement was to expose a table of
information from the vendor's system via the SDK. At the time, the simplest
thing to do seemed to
be an ASP.NET Core app that rendered a very basic HTML table constructed from
vendor data. And it worked just fine, tested great. The whole project wasn't
more than a few thousand SLOC.
</p>
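<p>
For a sense of how thin the app is, here's a rough sketch of its shape. The
<code>VendorSdk</code> names below are hypothetical stand-ins for the
proprietary library, not its real API:
</p>
<pre><code>using Microsoft.AspNetCore.Mvc;

// Rough sketch only: "VendorSdk", "Client", "GetTable", and "SomeTable" are
// placeholders for the proprietary .NET Framework SDK, not its real API.
public class TableController : Controller
{
    private readonly VendorSdk.Client _client = new VendorSdk.Client();

    public IActionResult Index()
    {
        // Pull the rows we need from the vendor's system via the SDK...
        var rows = _client.GetTable("SomeTable");

        // ...and hand them to a Razor view that renders a plain HTML table.
        return View(rows);
    }
}
</code></pre>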
<p>
Everything was going swimmingly all the way through deployment. We knew that
because we had to target .NET Framework for our project to build, we would
need to stand up a Windows Server instance. So we did that and set up a
deployment pipeline with Visual Studio + IIS. This was alright: click publish
and the new version builds, deploys, and goes live. Then things started to get
weird.
</p>
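<p>
For context before the weirdness: the targeting constraint is the whole reason
Windows entered the picture. In project-file terms, the before-and-after of
the eventual .NET 5.0 move boils down to something like this (the exact target
framework monikers are illustrative):
</p>
<pre><code>&lt;!-- Before: pinned to .NET Framework so the vendor .dlls would load, which
     in turn pinned the app to Windows + IIS. The net48 moniker here is
     illustrative. --&gt;
&lt;TargetFramework&gt;net48&lt;/TargetFramework&gt;

&lt;!-- After: targeting the cross-platform runtime, so the app can ship in the
     same Linux containers as everything else we run. --&gt;
&lt;TargetFramework&gt;net5.0&lt;/TargetFramework&gt;
</code></pre>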
<p>
Our application had some config telling it which servers to log into. This was
stored in <code>appsettings.Env.json</code>, where <code>Env</code> changed
between Development and Production. For whatever reason, regardless of the
environment we specified (typically
<code>ASPNETCORE_ENVIRONMENT=Production</code>), the config would default to
Development after an application restart. There were a few problems
interacting at once here. First, why was the app restarting? It seemed like if
we started it up, it would run great. If we left it alone for a while, it
would die. Then, accessing its URL again would slowly bring it back to life.
When it came back to life it had the Development config. Only a manual app
restart would bring the Production config back, and then only until it went to
sleep again.
</p>
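<p>
For reference, this is the standard config layering we were relying on.
ASP.NET Core's default host builder wires it up automatically; written out by
hand it looks roughly like this (the <code>VendorServer</code> key is a
made-up example):
</p>
<pre><code>using System;
using Microsoft.Extensions.Configuration;

// What the default builder does for us: load appsettings.json first, then
// layer appsettings.{Environment}.json on top, with the environment name
// taken from ASPNETCORE_ENVIRONMENT. If the variable isn't set at all,
// ASP.NET Core's own fallback is "Production".
var env = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile($"appsettings.{env}.json", optional: true, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();

// "VendorServer" is a made-up key; ours named the vendor host to log into.
Console.WriteLine($"Environment: {env}, vendor server: {config["VendorServer"]}");
</code></pre>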
<p>
Keep in mind this was a purely default deployment environment. Vanilla Windows
Server 2016, base install of IIS with no tweaking. Even the ASP.NET Core
project was scaffolded by the <code>dotnet</code> CLI tool. We just added some
source code that changed the homepage by running a few library functions.
</p>
<p>
Eventually we tracked the problems down to a combination of things: server
resources, patching and AMI upgrades, and general IIS weirdness.
</p>
<p>
We had thrown a quick ASP.NET Core app on top of their .NET Framework library, fired up
Windows Server, and shoved it there. It wouldn't let us forget about it. For a
myriad of reasons, maintaining IIS as a reverse proxy to the application (for
certificates, etc.) produced strange and unusual behavior. For example, IIS
comes with a collection of application pool defaults, like idle timeouts and
scheduled recycles, that will quietly stop or restart your app when you least
expect it.
</p>