<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Adventures in Technology]]></title><description><![CDATA[Just an Aussie .NET junkie and InfoSec architect proving technology doesn't work any better upside down.]]></description><link>https://blog.agchapman.com/</link><image><url>https://blog.agchapman.com/favicon.png</url><title>Adventures in Technology</title><link>https://blog.agchapman.com/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Tue, 24 Jun 2025 14:19:10 GMT</lastBuildDate><atom:link href="https://blog.agchapman.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Installing .NET 5 on Fedora: some installation gotchas]]></title><description><![CDATA[Some tips on upgrading from .NET Core 3.1 to .NET 5 on Fedora, including fixing "framework version not found" errors after upgrades.]]></description><link>https://blog.agchapman.com/errors-installing-and-upgrading-net-5-on-fedora/</link><guid isPermaLink="false">5ff7ff7f3c953c00018b4add</guid><category><![CDATA[c#]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[tip]]></category><category><![CDATA[tools]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Fri, 08 Jan 2021 07:40:05 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2021/01/sheng-li-Kt8SFsX9TYk-sm.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.agchapman.com/content/images/2021/01/sheng-li-Kt8SFsX9TYk-sm.jpg" alt="Installing .NET 5 on Fedora: some installation gotchas"><p>If you're a regular .NET Core user and follow <a href="https://docs.microsoft.com/en-us/dotnet/core/install/linux-fedora">Microsoft's official docs</a> on installing .NET 5 in Fedora, you might end up running into some problems 
with Targeting Packs for earlier .NET Core versions.</p><p>In particular, you might get the infamous:</p><pre><code>The framework 'Microsoft.NETCore.App', version '3.1.0' was not found.
  - The following frameworks were found:
      5.0.1 at [/usr/share/dotnet/shared/Microsoft.NETCore.App]

You can resolve the problem by installing the specified framework and/or SDK.</code></pre><p>even if you already have all of <code>dotnet-sdk-3.1</code>, <code>dotnet-runtime-3.1</code> and <code>dotnet-targeting-pack-3.1</code> installed. This (usually) happens when your installation ends up as a mix of Fedora and Microsoft-sourced packages.</p><h3 id="what-s-happening">What's happening</h3><p>What's causing this is essentially just different installation paths: the packages in the Microsoft repo install the runtime and all its moving parts into <code>/usr/share/dotnet</code>, which is also where the <code>dotnet</code> CLI will look <em>if it comes from a Microsoft-installed package.</em></p><p>On the other hand, Fedora installs its <code>dotnet-*</code> packages to <code>/usr/lib64/dotnet</code> and will look for installed SDKs and runtimes there instead. As such, the Fedora packages will only "see" runtimes/SDKs from the Fedora packages, not the Microsoft ones.</p><h3 id="how-to-fix-it">How to fix it</h3><p>There are two options here: wait, or reinstall from one source. At the time of writing, .NET 5 packages aren't available in the official repos for Fedora 32, so if you only want to use 3.1 you can just hang on until the .NET 5 packages land there.</p><p>If you want to play with the fun stuff in .NET 5 right away though, you can install both 3.1 and 5 from the Microsoft packages and they should see each other. 
First, remove your existing packages:</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">sudo dnf remove dotnet-*</code></pre><figcaption>check that you're not removing things you want to keep</figcaption></figure><p>Then, install all the packages you need, only from the Microsoft repo, using <code>dnf</code>'s <code>disablerepo</code> option:</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">sudo dnf install --enablerepo=packages-microsoft-com-prod --disablerepo=updates --disablerepo=fedora dotnet-sdk-3.1 dotnet-sdk-5.0</code></pre><figcaption>This will enable the Microsoft repo and disable the default Fedora repos, just for this command</figcaption></figure><p>You should then be able to run <code>dotnet --info</code> and see both 3.1 and 5.0 SDKs/runtimes installed and available.</p><p>This should also resolve problems with global tools not running after upgrading to .NET 5</p>]]></content:encoded></item><item><title><![CDATA[Bypassing bad CORS configurations with YARP]]></title><description><![CDATA[How to use a YARP reverse proxy to bypass or change CORS restrictions in HTTP APIs, without any code.]]></description><link>https://blog.agchapman.com/bypassing-cors-with-yarp-proxy/</link><guid isPermaLink="false">5f9e2fec3c953c00018b4a84</guid><category><![CDATA[asp.net]]></category><category><![CDATA[how-to]]></category><category><![CDATA[proxy]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Mon, 09 Nov 2020 05:06:00 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2020/11/matt-duncan-IUY_3DvM__w-unsplash-min.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.agchapman.com/content/images/2020/11/matt-duncan-IUY_3DvM__w-unsplash-min.jpg" alt="Bypassing bad CORS configurations with YARP"><p>I was recently trying to work with a (questionably designed) REST API that had its CORS configuration set up to only allow client-side requests from their own 
domain, effectively limiting any API integrations to being server-side.</p><p>Fortunately, CORS is basically just HTTP response headers so if you're willing to run a proxy server (and the API owners/maintainers don't prevent it), you can always set up a reverse proxy to alter the response headers from the API and allow your own domain.</p><hr><p>While you can use pretty much any proxy-capable web server here, I'm going to demonstrate how you can do this with <a href="https://github.com/microsoft/reverse-proxy">YARP</a>, Microsoft's new ASP.NET Core-based reverse proxy server. You could do this with your own server, but I'm going to use my <a href="https://quay.io/repository/agc93/yarp">own pre-built YARP server image</a>. </p><p>Rather than having to build this in code, we're just going to use YARP's configuration system. In short, you just need to add the following configuration to your server's configuration file:<br></p><pre><code class="language-json">{
    "ReverseProxy": {
        "Routes": [{
            "RouteId": "proxy-route",
            "ClusterId": "upstream-api",
            "Match": {
                "Path": "{**catch-all}"
            },
            "Transforms": [{
                "ResponseHeader": "Access-Control-Allow-Origin",
                "Set": "https://yourchosendomain.com"
            }]
        }],
        "Clusters": {
            "upstream-api": {
                "Destinations": {
                    "upstream-api/public": {
                        "Address": "https://api.upstream.com/"
                    }
                }
            }
        }
    }
}</code></pre><p>Obviously, replace your own domain (line 11) and your upstream API address (line 18).</p><blockquote>You can also just set the <code>ResponseHeader</code> value to <code>*</code> to allow all, but I don't recommend that!</blockquote><p>You can then just use whatever path you were using with the upstream API (<code>/v1/users/info</code> or whatever) with your own domain, and YARP will automatically replace the <code>Access-Control-Allow-Origin</code> header in the response, allowing your client.</p><blockquote>If you're using my Docker image, just bind-mount that JSON into the container's <code>/app/reverseProxy.json</code></blockquote>]]></content:encoded></item><item><title><![CDATA[Emulating a Raspberry Pi with QEMU: the extras]]></title><description><![CDATA[Some extra details for anyone who set up a QEMU-emulated Raspberry Pi and might be hitting some pain points.]]></description><link>https://blog.agchapman.com/emulating-a-raspberry-pi-with-qemu-the-extras/</link><guid isPermaLink="false">5efb05982228b500014be03e</guid><category><![CDATA[raspberry-pi]]></category><category><![CDATA[virtualisation]]></category><category><![CDATA[how-to]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Sun, 01 Nov 2020 14:37:00 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2020/10/patrick-schneider-346887-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2020/10/patrick-schneider-346887-sm.jpg" alt="Emulating a Raspberry Pi with QEMU: the extras"><p><em>Author's Note:</em> This post has been sitting in draft state since my original post was published, so some details might be a little out of date. I'm publishing it anyway and hoping it's helpful for someone!</p>
<hr>
<p>I recently published a post outlining how to use QEMU to emulate a Raspberry Pi. In that post, we got things working, but not much more than that. Today, I'm covering a few post-install tweaks you might want to work on.</p>
<h2 id="display">Display</h2>
<p>When you run the commands from the <a href="https://blog.agchapman.com/using-qemu-to-emulate-a-raspberry-pi/">previous post</a> you'll see QEMU pop up a UI window with the Pi's display output. While this can be handy if you're constantly interacting with the UI, I find it quite irritating and would prefer an option to disconnect and reconnect to the UI without resetting the Pi. To that end, we can tell QEMU to serve the Pi's raw display output over a VNC connection instead!</p>
<p>By adding a simple <code>-vnc :5</code> to the command, we tell QEMU to serve the vPi's display over a local-only VNC server on port <code>5905</code> (port <code>5900</code> + <code>:5</code> = <code>5905</code>). You can connect to the session with any VNC-compatible client (such as GNOME Remote Desktop Viewer). Note that since QEMU is the server here (not the vPi), the server is active during boot as well!</p>
<pre><code class="language-bash">qemu-system-arm \
## trimmed for brevity
-net tap,ifname=vnet0,script=no,downscript=no \
-vnc :5 # &lt;- this is the important part!
</code></pre>
<h2 id="scripting">Scripting</h2>
<p>The main pain point is that every time you want to start the virtual Pi (vPi), you need to run the same lengthy <code>qemu-system-arm</code> command, and it blocks the prompt while doing so. Fortunately, there's a better way!</p>
<p>In my case, I went with a combination of a simple bash script and <strong>GNU Screen</strong>, a handy utility for running detachable terminal sessions in the background.</p>
<p>So, I created a small bash script (called <code>vpi</code>):</p>
<pre><code class="language-bash">#!/usr/bin/env bash
cd /path/to/work-folder # where your kernel and disk image are!
qemu-system-arm \
## TRIMMED FOR BREVITY
</code></pre>
<p>and added it to my PATH. Combined with the VNC trick above, running this script (you may need <code>chmod +x</code> first) spins up the vPi without you having to remember the full command.</p>
<p>While this is effective, it still blocks the current console and will kill the vPi if we close it. Instead, I'll preface the whole command with <code>screen -dmS vpi</code>. Thus, the full script will be:</p>
<pre><code class="language-bash">#!/usr/bin/env bash
cd /path/to/work-folder
screen -dmS vpi qemu-system-arm \
-kernel ./kernel-qemu-4.4.34-jessie \
-cpu arm1176 -m 256 -M versatilepb \
-no-reboot -serial stdio \
-append &quot;root=/dev/sda2 panic=1 rootfstype=ext4 rw&quot; \
-hda raspbian.qcow \
-net nic -net user \
-net tap,ifname=vnet0,script=no,downscript=no \
-vnc :5
</code></pre>
<p>Now, run <code>chmod +x vpi</code> and run <code>vpi</code>. You'll get a screen session called <code>vpi</code> started in the background that spins up the machine. You can then attach (with <code>screen -r vpi</code>) and detach (with Ctrl-A D) at will to get the serial console, and connect via VNC (on port 5905) at any time.</p>
<p>If you followed the instructions in the last post, you can also run <code>ssh pi@192.168.122.200</code> to quickly SSH into your new Pi.</p>
<h2 id="permissions">Permissions</h2>
<p>Note that you might run into some wonky permissions around the <code>/dev/net/tun</code> device if you try to run these scripts as a non-root user. The details of fixing this are a bit complex to get into here, but essentially come down to:</p>
<ul>
<li>Give your user access to or ownership of <code>/dev/net/tun</code> (I'd recommend a <code>udev</code> rules file)</li>
<li>Add your user/group to the device with <code>ip</code> using <code>sudo ip tuntap add dev vnet0 mode tap group &lt;group-here&gt;</code></li>
<li>Run the machine, making sure to include <code>-net user</code></li>
</ul>
<p>This process can be a little glitchy, but it does remove the need to run as root.</p>
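<p>As a rough sketch of the first two bullets (the rule file path and the <code>kvm</code> group are assumptions; pick a group your user actually belongs to):</p>
<pre><code class="language-bash"># /etc/udev/rules.d/99-tun.rules -- give members of the 'kvm' group access to the TUN device
# KERNEL=="tun", GROUP="kvm", MODE="0660"

# then pre-create the tap device, owned by that group (once per boot, as root):
sudo ip tuntap add dev vnet0 mode tap group kvm
sudo ip link set vnet0 up
</code></pre>
<p>With the device owned by your group, the <code>-net tap,ifname=vnet0,...</code> option from the earlier script should work without root.</p>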
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Getting started with Git Profile Manager]]></title><description><![CDATA[Getting started with Git Profile Manager (GPM), a super-simple CLI for quickly configuring Git repos.]]></description><link>https://blog.agchapman.com/getting-started-git-profile-manager/</link><guid isPermaLink="false">5efb05982228b500014be032</guid><category><![CDATA[dotnet]]></category><category><![CDATA[git]]></category><category><![CDATA[tools]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Sat, 31 Oct 2020 14:23:17 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2020/10/josiah-ingels-X79v5N3O4yA-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2020/10/josiah-ingels-X79v5N3O4yA-unsplash.jpg" alt="Getting started with Git Profile Manager"><p><em><strong>TL;DR:</strong></em> I published a simple CLI tool to manage switching between complex Git configs. You can find all the docs and details <a href="https://agc93.github.io/git-profile-manager">here</a>, the source <a href="https://github.com/agc93/git-profile-manager">here</a> and download it <a href="https://github.com/agc93/git-profile-manager/releases">here</a></p>
<hr>
<p>I've recently published a new command line utility: <a href="https://agc93.github.io/git-profile-manager/">Git Profile Manager</a>.</p>
<p>Git Profile Manager (GPM) was originally intended purely as a demo of .NET Core native publishing for a conference presentation. As it happened, the tool worked a little better than I expected, so I kept hacking on it and published it earlier this year for anyone who finds it useful!</p>
<h2 id="whatdoesitdo">What does it do?</h2>
<p>Git Profile Manager, as the name suggests, manages Git profiles: named collections of Git configuration options. As a practical example, I work on a lot of different Git projects, and many of them have different configuration requirements:</p>
<ul>
<li><a href="https://github.com/cake-build/cake">Cake</a>: Sign all commits with Keybase key, commit with <code>cakebuild.net</code> address</li>
<li>Other GitHub projects: sign with Keybase key, commit with personal email, whole suite of aliases</li>
<li>Work projects: sign with internal key and internal address, no auto CRLF detection, custom commit message template</li>
</ul>
<p>Likewise, I use different diff tools for different projects, I only use aliases for some complex projects, and projects using submodules need some extra config.</p>
<p>I could sit there and run <code>git config</code> commands for ages (I used to do that), or write up project-specific shell scripts (I recently did that), but instead I can simply create &quot;profiles&quot; with GPM that contain a set of configuration options for each project or type of project. Now, when I clone a new work project, I just run:</p>
<pre><code class="language-bash">gpm activate work-profile
</code></pre>
<p>and the repo is immediately configured with my work keys and our team's customised Git configs.</p>
<p>On the same workstation, I can start a new Cake project, run <code>gpm activate cake</code> and now I'm using my Cake configuration (including my <a href="https://keybase.io/agc93">Keybase</a> key for signing commits).</p>
<p>Plus, since you can activate multiple profiles at the same time, I can quickly run <code>gpm activate submodules</code> and add submodule-specific config without affecting the rest of the repo. I can also take my existing shell scripts and use <code>gpm profile import /path/to/script.sh</code> and convert them easily into profiles that are stored in my home folder.</p>
<p>GPM supports a whole host of commands for easily activating, deactivating and manipulating profiles, and they're all detailed in <a href="https://agc93.github.io/git-profile-manager/docs/">the documentation</a>.</p>
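<p>Putting the commands above together, a typical session might look something like this (the profile names and script path are just examples):</p>
<pre><code class="language-bash"># convert an existing per-project shell script into a profile
gpm profile import ~/scripts/work-git-setup.sh
# apply a profile to the current repo
gpm activate work-profile
# stack a second profile on top without touching the rest
gpm activate submodules
</code></pre>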
<h2 id="howdoigetit">How do I get it?</h2>
<p>The easiest way (in my opinion) is as a .NET Core global tool: just run <code>dotnet tool install -g git-profile-manager</code> and <code>gpm</code> should be ready to go.</p>
<p>Native packages are also available for Windows as well as a range of Linux distributions (Ubuntu, Debian, RHEL, CentOS).</p>
<p>You can also download the latest release and manually extract it to your PC, if you don't want to install it!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Running a sandboxed browser session with SELinux]]></title><description><![CDATA[A short tutorial on using SELinux sandboxes to run a clean browser session in an SELinux-enforced container/sandbox.]]></description><link>https://blog.agchapman.com/selinux-browser-sandbox/</link><guid isPermaLink="false">5efb05982228b500014be045</guid><category><![CDATA[linux]]></category><category><![CDATA[how-to]]></category><category><![CDATA[tip]]></category><category><![CDATA[security]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Tue, 25 Jun 2019 06:58:00 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2019/06/rhand-mccoy-345125-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://blog.agchapman.com/content/images/2019/06/rhand-mccoy-345125-sm.jpg" alt="Running a sandboxed browser session with SELinux"><p>This post was written on, and tested with, Fedora 30. Other distributions or releases might be a little different.</p>
</blockquote>
<p>Plenty of users probably haven't worked with SELinux much, and most will only know it as a security framework that reins in misbehaving apps and enforces access control in Linux. If you get into the meat of it though, you can use its sweet security controls to do all sorts of Fun™ things.</p>
<p>Today I'm going to show you how to use SELinux to run a clean Firefox session in a sandbox.</p>
<h2 id="theboringbits">The boring bits</h2>
<p>SELinux is a maddeningly complex bit of technology. It's really powerful, boasts some awesome design, and is something you'll want to learn if you want to up your security (or Linux) skills. That being said, the finer details of users, roles, types, sensitivities and categories are way beyond this post, and you don't need to understand everything to get started!</p>
<h2 id="playingwithyoursandbox">Playing with your <code>sandbox</code></h2>
<p>The crucial command behind all of this is the <code>sandbox</code> command. On most SELinux-enabled systems (and certainly on Fedora), <code>sandbox</code> will be installed by default.</p>
<p>As the name implies, <code>sandbox</code> is used to run any command in a fully user-controllable, SELinux-enforced sandbox. This allows you to isolate commands/applications from the rest of your system and grant only specific permissions and capabilities. Even better, SELinux includes a handful of prebuilt sandbox types that allow access to certain critical resources.</p>
<p>After you install one dependency (<code>policycoreutils-sandbox</code>), you can even run X apps in an SELinux sandbox!</p>
<h2 id="thegoodbit">The good bit</h2>
<p>With that context out of the way, here's the magic command for running Firefox in an SELinux sandbox on Fedora:</p>
<pre><code class="language-bash">sandbox -X -t sandbox_net_t -t sandbox_web_t -w 1280x1024 firefox
</code></pre>
<p>This runs a <code>sandbox</code> with its own X server (<code>-X</code>), allows ports required for web browsing and general network access (<code>-t sandbox_web_t</code> and <code>-t sandbox_net_t</code>) and launches <code>firefox</code> in a 1280x1024 window (<code>-w 1280x1024</code>).</p>
<p>This will open up a new window with a completely clean instance of Firefox that is isolated from the rest of your processes by SELinux. Note that this also means you won't be able to access any of your files (including your Firefox profile) so you will get a completely fresh session every time.</p>
<h2 id="nextsteps">Next steps</h2>
<p>If you find this handy, you might want to try extending this to other apps that you use where you might want to test things in clean environments or are handling files you don't <em>entirely</em> trust. For example, here's how to open a <code>report.pdf</code> from my home directory in a sandboxed PDF viewer:</p>
<pre><code class="language-bash">sandbox -X -w 1280x1024 -i ~/report.pdf evince report.pdf
</code></pre>
<blockquote>
<p>Since these commands are unwieldy and the options are app-specific, I recommend setting up some aliases for anything you use often.</p>
</blockquote>
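<p>For example, something like this in your <code>~/.bashrc</code> (the names and type options here are just suggestions to adapt):</p>
<pre><code class="language-bash"># sandboxed browser session
alias sbrowse='sandbox -X -t sandbox_net_t -t sandbox_web_t -w 1280x1024 firefox'
# open a single untrusted file in a sandboxed PDF viewer
sbpdf() { sandbox -X -w 1280x1024 -i "$1" evince "$(basename "$1")"; }
</code></pre>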
<p>There's quite a few ways to achieve sandboxed X apps these days! If SELinux/<code>sandbox</code> isn't right for you, you might want to check out LXC/LXD, Docker (especially with subuser) or Podman.</p>
<p>Now go forth and sandbox all the things!</p>
<blockquote>
<p>You should also be aware that sand is coarse and rough and irritating and it gets everywhere.</p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[nghttp2: an excellent alternative to an NGINX proxy]]></title><description><![CDATA[A quick rundown on using nghttpx (from the nghttp2 project) as an alternative to NGINX for an HTTP/2 reverse proxy server.]]></description><link>https://blog.agchapman.com/nghttp2-an-alternative-to-nginx/</link><guid isPermaLink="false">5efb05982228b500014be044</guid><category><![CDATA[linux]]></category><category><![CDATA[how-to]]></category><category><![CDATA[github]]></category><category><![CDATA[proxy]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Mon, 27 May 2019 03:08:20 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2019/05/anastasia-taioglou-345848-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2019/05/anastasia-taioglou-345848-sm.jpg" alt="nghttp2: an excellent alternative to an NGINX proxy"><p>Recently at work, I had a very strange problem with an NGINX reverse proxy I was using: it was truncating static files at seemingly arbitrary places. To this day, I don't know what caused it.</p>
<p>On the plus side, a colleague of mine (thanks Mark!) pointed me towards an alternative I could test with: <a href="https://github.com/nghttp2/nghttp2">nghttp2</a>.</p>
<p>To be clear: nghttp2 is <em>not</em> a complete NGINX replacement. It does, however, make for a stunningly easy HTTP/2 compliant reverse proxy alternative.</p>
<blockquote>
<p>The nghttp2 project actually includes quite a few components: a client (<code>nghttp</code>), server (<code>nghttpd</code>) and a proxy (<code>nghttpx</code>). I'm only looking at the proxy today!</p>
</blockquote>
<p>Previously, using NGINX as a reverse proxy required a slightly obtuse (but very powerful) config file and restarting the nginx server. Using <code>nghttpx</code> allows you to directly run the server on-demand, then easily swap it to a system service later.</p>
<h2 id="runningtheproxy">Running the proxy</h2>
<p>Once you've got it installed (packages are available in EPEL for RHEL/CentOS, or in the mainline repos for Fedora), you can run the proxy component directly with the <code>nghttpx</code> command. The syntax can be a little dizzying at first, but is super-easy to tweak and adjust. For example, here's the command my app ended up needing:</p>
<pre><code>nghttpx -s -f'*,443' -b127.0.0.1,9442 /etc/ssl/cert.key /etc/ssl/cert.pem
</code></pre>
<p>That starts up an HTTP/2 proxy (<code>-s</code>) on port 443 (<code>-f'*,443'</code>) proxying a local HTTP app on port 9442 (<code>-b127.0.0.1,9442</code>) using the cert and key from <code>/etc/ssl</code>. If your upstream is HTTPS-enabled, just change it up slightly:</p>
<pre><code>nghttpx -s -f'*,443' -b'127.0.0.1,9442;;tls' /etc/ssl/cert.key /etc/ssl/cert.pem
</code></pre>
<blockquote>
<p>The <code>man</code> page includes a lot of info on the various ways to use the <code>-b</code>/<code>--backend</code> option!</p>
</blockquote>
<p>It also supports the same <code>-k</code> behaviour as curl if your upstream is using a self-signed cert.</p>
<h2 id="runningasaservice">Running as a service</h2>
<p>To run it as a service, you'll want to use a config file. Like many other tools <code>nghttpx</code> uses a key-value config file that reflects the same options as the command line switches. Here's the above command as a config file:</p>
<pre><code class="language-text">frontend=0.0.0.0,443
backend=127.0.0.1,9442;/;tls
insecure=yes
private-key-file=/etc/ssl/cert.key
certificate-file=/etc/ssl/cert.pem

add-x-forwarded-for=yes
accesslog-file=/var/log/nghttpx/access.log
errorlog-file=/var/log/nghttpx/error.log
</code></pre>
<blockquote>
<p>You may have to create the directories for your log files before starting the service.</p>
</blockquote>
<p>Put that config file in the default location of <code>/etc/nghttpx/nghttpx.conf</code> and you're one easy command away from a running proxy:</p>
<pre><code class="language-bash">systemctl start nghttpx.service
# or for a start-at-boot server:
systemctl enable --now nghttpx.service
</code></pre>
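<p>Once the service is up, a quick sanity check from the same host (the <code>-k</code> is only needed if you're using a self-signed cert):</p>
<pre><code class="language-bash"># fetch just the response headers through the proxy
curl -kI https://localhost/
</code></pre>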
<p>Seriously, if you need to use a reverse proxy for anything you're running (non-production-grade apps, containers, older servers etc), take a look at <a href="https://github.com/nghttp2/nghttp2">nghttp2</a>: there's plenty of docs <a href="https://github.com/nghttp2/nghttp2">on GitHub</a> and the <code>man</code> page is really detailed and easy to understand.</p>
<hr>
<p>For anyone who doesn't realise: HTTP/2 is SSL/TLS-<strong>only</strong> so make sure you've got a cert before you try and jump on the new hotness!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Fixing broken MSBuild SDK resolvers]]></title><description><![CDATA[One method of fixing broken MSBuild SDK resolver paths that can affect loading projects in Visual Studio, VS Code and OmniSharp on Linux.]]></description><link>https://blog.agchapman.com/fixing-broken-sdk-resolvers/</link><guid isPermaLink="false">5efb05982228b500014be043</guid><category><![CDATA[linux]]></category><category><![CDATA[c#]]></category><category><![CDATA[project]]></category><category><![CDATA[tools]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Tue, 22 Jan 2019 15:36:24 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2019/01/giordano-rossoni-1313378-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2019/01/giordano-rossoni-1313378-unsplash.jpg" alt="Fixing broken MSBuild SDK resolvers"><p>This post is just a quick documentation of a fix I found the hard way.</p>
<p><strong>TL;DR:</strong><br>
<code>SDK Resolver folder exists but without an SDK Resolver DLL or manifest file. This may indicate a corrupt or invalid installation of MSBuild</code></p>
<p><em>might</em> be fixed by</p>
<p><code>sudo rm -r /usr/lib/mono/msbuild/15.0/bin/SdkResolvers/NuGet.MSBuildSdkResolver</code></p>
<p>(or equivalent <code>mv</code>). YMMV.</p>
<hr>
<p>After updating both VS Code and the .NET Core SDK on my Fedora 29 machine, VS Code couldn't start OmniSharp correctly and wouldn't load .NET Core projects properly. The error was a rather cryptic one:</p>
<blockquote>
<p><code>SDK Resolver folder exists but without an SDK Resolver DLL or manifest file. This may indicate a corrupt or invalid installation of MSBuild</code></p>
</blockquote>
<p>This was pretty surprising given it's a &quot;standard&quot; <code>dnf</code> install from the official MS repos. To be honest, I still don't actually know for sure what caused this issue (but you might see my guess if you keep reading).</p>
<p>Regardless, the VS Code logs weren't giving me any information on my resolvers. What I could see was that the problem was not specific to VS Code and was actually coming from MSBuild.</p>
<p>Thankfully the Internet delivered in the form of two GitHub issues and a Developer Community post from which I could piece together the following:</p>
<ol>
<li>that it could be fixed by reinstalling the VS Build Tools package</li>
<li>that the problem file (on Windows) was <code>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\SdkResolvers\NuGet.MSBuildSdkResolver\NuGet.MSBuildSdkResolver.dll.new</code></li>
</ol>
<p>Given #1 is completely irrelevant on Linux, I followed the rabbit hole of #2 and one simple <code>locate</code> command later I could see that there were two different MSBuild resolver locations: <code>/usr/share/dotnet/sdk/&lt;version&gt;/SdkResolvers/NuGet.MSBuildSdkResolver</code> and <code>/usr/lib/mono/msbuild/15.0/bin/SdkResolvers/NuGet.MSBuildSdkResolver</code>.</p>
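<p>That &quot;simple <code>locate</code> command&quot; was presumably something along these lines (<code>find</code> works too if you don't have an mlocate database built):</p>
<pre><code class="language-bash">locate NuGet.MSBuildSdkResolver
# or, without mlocate:
sudo find / -type d -name NuGet.MSBuildSdkResolver 2&gt;/dev/null
</code></pre>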
<p>I don't know enough about the internals of MSBuild to tell you why but simply removing the entire <code>/usr/lib/mono/msbuild/15.0/bin/SdkResolvers/NuGet.MSBuildSdkResolver</code> directory and restarting OmniSharp led to a successful project load!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Auto-mounting network file systems with systemd]]></title><description><![CDATA[This guide shows you how to use systemd unit files to easily manage and mount network filesystems and mount them on boot correctly!]]></description><link>https://blog.agchapman.com/auto-mounting-network-file-systems-with-systemd/</link><guid isPermaLink="false">5efb05982228b500014be040</guid><category><![CDATA[linux]]></category><category><![CDATA[how-to]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Fri, 05 Oct 2018 04:13:24 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2018/10/will-africano-1084946-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2018/10/will-africano-1084946-unsplash.jpg" alt="Auto-mounting network file systems with systemd"><p>If you're new to systemd you're probably thinking that the title sounds like the sort of thing that <strong>should</strong> be easy, and you're right! If you're used to systemd, you'll already know that it's probably <strong>not</strong> easy, and you're also right!</p>
<hr>
<p>On the surface, you should be able to just add your entries to <code>/etc/fstab</code> as you've been doing up until now and if that's working for you: stick to it!</p>
<p>If, however, you either <em>a)</em> are having problems with mounts being attempted before your network is up (common with WiFi systems) or <em>b)</em> want to do everything the full systemd way, then read on!</p>
<h2 id="themountfile">The Mount File</h2>
<p>Systemd includes a helpful <code>mount</code> unit type, handled as a <code>.mount</code> unit file (just like <code>.service</code> etc). As of the time of writing, this file actually just gets 'executed' through the usual <code>mount</code> command anyway, but let's have a look at what a mount file looks like.</p>
<blockquote>
<p>Check out <a href="https://oguya.ch/posts/2015-09-01-systemd-mount-partition/">James Oguya's excellent post on this topic</a> for the full details.</p>
</blockquote>
<pre><code class="language-systemd">[Unit]
Description = Mount NFS Share

[Mount]
What=172.24.0.5:/srv/backups
Where=/mnt/backups
Type=nfs
Options=defaults
# Uncomment the below if your server is real slow
# TimeoutSec=10

[Install]
WantedBy=multi-user.target
</code></pre>
<p>and that's it! Sort of.</p>
<p>Put this file in <code>/etc/systemd/system</code> like your other system units, but <strong>be warned</strong>: the file must be named <em>exactly</em> for its target mount point. In the example above, the file would be <code>/etc/systemd/system/mnt-backups.mount</code>.</p>
<blockquote>
<p>Don't ask me why, I don't know either.</p>
</blockquote>
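The rule itself is mechanical, at least: take the mount point, drop the leading slash, and turn every remaining slash into a dash. (For paths containing dashes or other special characters, use `systemd-escape -p --suffix=mount`, which applies the full escaping rules for you.) A quick sketch of the simple case:

```shell
# Derive the unit-file name from the mount point: drop the leading
# slash, then replace each remaining slash with a dash.
mountpoint="/mnt/backups"
unit="$(printf '%s' "${mountpoint#/}" | tr '/' '-').mount"
echo "$unit"   # mnt-backups.mount
```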
<p>Then run <code>systemctl daemon-reload</code> and <code>systemctl start mnt-backups.mount</code> to mount your filesystem. Now, to mount at boot, you would think you could just <code>systemctl enable ...</code>, but unfortunately it's not that simple.</p>
<h2 id="puttingabootinit">Putting a boot in it</h2>
<p>Now if you just run <code>systemctl enable ...</code> with your unit as it stands, it will almost certainly fail since systemd won't know to wait for the network before it actually runs the mount.</p>
<p>You can control this using the <code>After=</code> option in the <code>[Unit]</code> section:</p>
<pre><code class="language-systemd">[Unit]
Description = Mount Server Private Directory
After=## this bit here ##

[Mount]
...
</code></pre>
<p>Now your first thought will be to use the &quot;special&quot; <code>remote-fs.target</code> here and this might work in simple setups. If your system is using Network Manager however, the accepted wisdom seems to be to use <code>After=NetworkManager-wait-online.service</code> as your dependency. This will usually work for wired connections, so try this first.</p>
<p>There's a bit of a caveat here though:</p>
<h3 id="networkmanagerdoesntactuallywait">NetworkManager doesn't actually wait</h3>
<p>If you check the <code>/usr/lib/systemd/system/NetworkManager-wait-online.service</code> file yourself, you'll notice the service is just using <code>nm-online</code> to wait for the network. <strong>However</strong>, if you check the arguments being used, you'll see that using the <code>-s</code> flag here only waits for NetworkManager <em>itself</em> to be ready, not the actual connection to be connected and ready. While you can simply remove the <code>-s</code> flag here (see notes below), I find it better to create a new service.</p>
<h3 id="creatingaservicethatactuallywaits">Creating a service that actually waits</h3>
<p>In my case, I created the following file at <code>/etc/systemd/system/network-online.service</code>:</p>
<pre><code class="language-systemd">[Unit]
Description=Wait until NM actually online
Requires=NetworkManager-wait-online.service
After=NetworkManager-wait-online.service

[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -q --timeout=120
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
</code></pre>
<p>This will run <em>after</em> the existing <code>NetworkManager-wait-online.service</code> but will only successfully start (and stay active) once Network Manager has actually connected to the network.</p>
<p>Run <code>systemctl daemon-reload</code> and <code>systemctl enable --now network-online.service</code> to confirm your service is working and enable the check to run at boot.</p>
<h3 id="addingthistoyourmount">Adding this to your mount</h3>
<p>Now you have this service, you can slightly tweak your existing <code>.mount</code> file with the following lines in the <code>[Unit]</code> section:</p>
<pre><code class="language-systemd">Requires=network-online.service
After=network-online.service
</code></pre>
<p>Now your mount won't be run until NetworkManager reports your network as actually connected.</p>
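With those two lines added, the complete unit from earlier ends up looking like this:

```systemd
[Unit]
Description=Mount NFS Share
Requires=network-online.service
After=network-online.service

[Mount]
What=172.24.0.5:/srv/backups
Where=/mnt/backups
Type=nfs
Options=defaults

[Install]
WantedBy=multi-user.target
```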
<h2 id="puttingittogether">Putting it together</h2>
<p>You can actually use this new <code>network-online.service</code> as an <code>After=</code> criterion on any other services you have that should wait until the network is actually connected, instead of just having the stack up.</p>
<h2 id="notes">Notes</h2>
<h4 id="whataboutifimnotusingnetworkmanager">What if I'm not using NetworkManager?</h4>
<p>Fair point actually. The overall principles at work remain the same: create a service with a command that will only return a success code when the network is available and have your mounts depend on that. You'll just have to find the right system service to use as an <code>After=</code> substitute for <code>NetworkManager-wait-online.service</code> and find a command you can use to replace <code>nm-online</code>.</p>
<h4 id="whataboutremovingthesflaginthebuiltinunit">What about removing the <code>-s</code> flag in the built-in unit?</h4>
<p>So this is a valid approach with two important caveats:</p>
<ul>
<li><strong>You might break existing stuff</strong>. In my non-rigorous testing, boot behaviour actually got worse when I made this change and there were a few services that seemed to have trouble starting/waiting for the modified service.</li>
<li><strong>DO NOT modify the built-in service file</strong>. If you're on a vaguely recent release, you should be able to create a <code>/etc/systemd/system/NetworkManager-wait-online.service.d/</code> directory and put <code>.conf</code> files in there to override the built-in one. Check the <a href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html">docs</a> or the <a href="https://wiki.archlinux.org/index.php/systemd#Drop-in_files">better unofficial docs</a>.</li>
</ul>
<h4 id="cantyoudothisinetcfstab">Can't you do this in <code>/etc/fstab</code>?</h4>
<p>Yes and no, in my experience. Yes, <code>/etc/fstab</code> is both the easiest and most Linux-y way of doing mounts, but it's also unintuitive, error-prone and doesn't integrate well with things like NetworkManager/netctl/whatever. Personally, I'd rather whip up an easy <code>.mount</code> file than try to decipher whatever <code>x-systemd.automount</code> means, and which column of this plaintext file it should go in.</p>
<h4 id="whynotuseautomount">Why not use <code>automount</code>?</h4>
<p>This is the big one: the answer to the vast majority of problems with NetworkManager/systemd/etc and mount delays is to use &quot;automounts&quot;, which are essentially lazy filesystem mounts that only get mounted the first time they're accessed. For the most part, these do solve a few of the problems from above. That being said, if you want your boot to <em>actually wait</em> for the filesystem rather than leave it until first access, you will need to use this method instead.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Building custom firmware for the ANAVI Light Controller]]></title><description><![CDATA[A short guide on how to get started with developing custom Arduino firmware for the ANAVI Light Controller, from setting up your environment to compiling and flashing your first firmware!]]></description><link>https://blog.agchapman.com/building-custom-firmware-for-the-anavi-light-controller/</link><guid isPermaLink="false">5efb05982228b500014be042</guid><category><![CDATA[Anavi]]></category><category><![CDATA[automation]]></category><category><![CDATA[linux]]></category><category><![CDATA[how-to]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Wed, 03 Oct 2018 09:01:54 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2018/10/kym-1080849-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2018/10/kym-1080849-unsplash.jpg" alt="Building custom firmware for the ANAVI Light Controller"><p>As great as the <a href="https://www.crowdsupply.com/anavi-technology/light-controller">ANAVI Light Controller</a> is out-of-the-box, you might want to either customise the firmware that's factory-loaded or add your own behaviour to the Light Controller.</p>
<p><img src="https://www.crowdsupply.com/img/502d/details03-1_jpg_project-body.jpg" alt="Building custom firmware for the ANAVI Light Controller"></p>
<p>Note that these instructions are using the Arduino IDE on Linux. You can also use PlatformIO, but that's not covered here. This guide is basically a prologue for Leon's <a href="https://www.youtube.com/watch?v=DIa7_CQ2T7c">excellent guide on flashing the Controller</a>.</p>
<h2 id="backgroundandsetup">Background and Setup</h2>
<p>Before you get too far, make sure you have a feel for what you're working with. The Light Controller is an ESP-12E module wired up with 3 pins for each RGB channel, one pin for the button (marked SW1) and one for the LED (marked D1). You'll find the 3 UART pins just below the 3 I2C sensor slots, and the acrylic case even includes convenient cutouts for both the button and UART pins.</p>
<p>Setting up your environment is pretty easy. First, check your USB UART adapter is correctly recognised (as <code>/dev/ttyUSB0</code> on my Fedora system) and it's <em>strongly</em> recommended you allow your user access to the device without <code>sudo</code>.</p>
<blockquote>
<p>On Fedora, this can be done by adding your user to the <code>dialout</code> group (with <code>sudo usermod -aG dialout $USER</code>) then logging out and back in again.</p>
</blockquote>
<p>Finally, make sure you have installed the Arduino IDE. On most Linux distros (especially convenient on Fedora) you can use the <a href="https://flathub.org/apps/details/cc.arduino.arduinoide">Flatpak package from Flathub</a> or you can <a href="https://www.arduino.cc/en/Main/Software">manually install it</a>.</p>
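If you go the Flatpak route, installation is a single command (assuming the app ID from the Flathub listing linked above):

```shell
flatpak install flathub cc.arduino.arduinoide
```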
<h3 id="getthecode">Get the code</h3>
<p>If you're building your own firmware from scratch, just use your own sketch file and skip this section. For everyone else, you can use either the <a href="https://github.com/AnaviTechnology/anavi-light-controller-sw/blob/master/anavi-light-controller-sw/anavi-light-controller-sw.ino">stock firmware file</a> or the much simpler <a href="https://github.com/AnaviTechnology/anavi-examples/blob/master/anavi-light-controller/anavi-blinking-led/anavi-blinking-led.ino">blinking LED demo</a> as a starting point. Either download the file directly or fork and clone the repo with Git to get the <code>.ino</code> file.</p>
<p>Finally, open it up in Arduino IDE!</p>
<h3 id="setuparduinoide">Set up Arduino IDE</h3>
<p>To build and flash the ANAVI Light Controller, you will need to set up the Arduino IDE to suit the Controller.</p>
<p><strong>Add the board</strong></p>
<ol>
<li>First, open the <em>File -&gt; Preferences</em> menu and add the following URL to the &quot;Additional Boards Manager URLs&quot; field: <code>http://arduino.esp8266.com/stable/package_esp8266com_index.json</code></li>
</ol>
<p><img src="https://blog.agchapman.com/content/images/2018/10/Screenshot-from-2018-10-03-18-19-01.png" alt="Building custom firmware for the ANAVI Light Controller"></p>
<ol start="2">
<li>Close the Preferences window and open the <em>Tools &gt; Board &gt; Boards Manager...</em> menu to open the Boards Manager. Once the index has updated, use the filter field to find the <em>esp8266</em> board and install the latest version.</li>
</ol>
<p><strong>Enable the flash storage</strong></p>
<p>If you want the onboard flash storage to be available, you also need to use the <em>Tools &gt; Flash Size</em> menu to add SPIFFS. Personally, I use the <em>4M (1M SPIFFS)</em> option and haven't had any issues.</p>
<p><strong>Check the port</strong></p>
<p>Finally, check the port in the <em>Tools &gt; Port</em> menu to make sure you're using your USB UART adapter (usually <code>/dev/ttyUSB0</code>)</p>
<h2 id="buildandverify">Build and Verify</h2>
<p><strong>Install the dependencies</strong></p>
<p>Firstly, you need to install your sketch's dependencies. This will be different if you're using your own sketch! If you're using the blinking LED sample, you can skip this step entirely, but if you're building from the stock firmware, keep reading.</p>
<p>Use the <em>Sketch &gt; Include Library &gt; Manage Libraries...</em> menu to open the Library Manager. You will need to install several libraries, so work down the following list, installing the version shown:</p>
<p><img src="https://blog.agchapman.com/content/images/2018/10/Screenshot-from-2018-10-03-18-24-23.png" alt="Building custom firmware for the ANAVI Light Controller"></p>
<ul>
<li><code>DNSServer</code> (1.1.0)</li>
<li><code>ESP8266WebServer</code> (1.0.0)</li>
<li><code>WiFiManager</code> (by Tzapu, 0.14.0)</li>
<li><code>ArduinoJson</code> (5.13.2, <strong>not 6.x</strong>)</li>
<li><code>PubSubClient</code> (by Nick O'Leary, 2.6.0)</li>
<li>Adafruit APDS9960 Library (search for <code>adafruit_apds</code>, 1.0.5)</li>
<li>Adafruit HTU21DF Library (search for <code>adafruit_htu</code>, 1.0.1)</li>
</ul>
<p><strong>Build the sketch</strong></p>
<p>Once you have the dependencies installed and your sketch is ready, just click the check mark button in the top left to build and verify your sketch. Watch the status window at the bottom to make sure there's no errors in building your sketch.</p>
<p><img src="https://blog.agchapman.com/content/images/2018/10/Screenshot-from-2018-10-03-18-32-59.png" alt="Building custom firmware for the ANAVI Light Controller"></p>
<blockquote>
<p>You can increase the verbosity of the build window using the <em>Show verbose output during:</em> option in the Preferences window</p>
</blockquote>
<h2 id="flashtheboard">Flash the board</h2>
<p>This bit is much better explained by Leon himself, so watch his easy-to-follow video below:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/DIa7_CQ2T7c" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<h2 id="monitoringyourcontroller">Monitoring your Controller</h2>
<p>If your sketch prints to the serial console at all (both of the Anavi-provided sketches do out-of-the-box), you can monitor this by opening a terminal window and running <code>screen /dev/ttyUSB0 115200</code>.</p>
<blockquote>
<p>Note that <code>screen</code> will own the port while this is running so you will need to close the session (press Ctrl-A and type <code>:quit</code>) before you can upload another sketch using the IDE.</p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Connecting the ANAVI Light Controller with Home Assistant]]></title><description><![CDATA[A quick look into how to connect the new ANAVI Light Controller to Home Assistant.]]></description><link>https://blog.agchapman.com/anavi-light-controller-home-assistant/</link><guid isPermaLink="false">5efb05982228b500014be041</guid><category><![CDATA[automation]]></category><category><![CDATA[Anavi]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Tue, 25 Sep 2018 15:01:34 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2018/09/marc-kleen-1071196-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2018/09/marc-kleen-1071196-unsplash.jpg" alt="Connecting the ANAVI Light Controller with Home Assistant"><p><strong>TL;DR:</strong></p>
<pre><code class="language-yaml">light:
  - platform: mqtt_json
    command_topic: &quot;cmnd/&lt;device-id&gt;/color&quot;
    state_topic: &quot;stat/&lt;device-id&gt;/color&quot;
    brightness: true
    rgb: true
</code></pre>
<h3 id="background">Background</h3>
<p>I recently received my ANAVI Light Controller in the mail and was excited to try it out. The easiest way to get set up is to follow Leon's excellent intro video (included below).</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/Y_81CuuGm0Y?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<p>While you're connecting to your WiFi network, you'll have the option of setting your own MQTT broker; make sure you change it from the default <code>eclipse.org</code> broker to the broker your Home Assistant install is using.</p>
<p>As the interface warns you, make certain that you grab the device ID (for future reference, the device ID is the MD5 sum of your Chip ID).</p>
<h4 id="checkingyourcontroller">Checking your controller</h4>
<p>If your MQTT broker also supports a WebSockets connection, you can also use the <code>demo.anavi.technology</code> website, but if you're using a local/self-hosted MQTT broker, that's probably not the case. If you really want to make sure, you can always fire up MQTTSpy, connect to your broker and watch the messages. You can add a subscription for <code>#</code> to see <strong>all</strong> messages on the broker.</p>
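If you'd rather stay in a terminal than fire up MQTTSpy, the standard mosquitto client tools can do the same snooping (assuming you have `mosquitto_sub` installed; substitute your broker's hostname):

```shell
# Subscribe to every topic on the broker and print topic + payload
mosquitto_sub -h <your-broker-host> -t '#' -v
```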
<h3 id="controllingyourcontroller">Controlling your Controller</h3>
<p>If you just want to add your Controller to Home Assistant, jump down to the next topic.</p>
<p>The Controller uses the following MQTT topics:</p>
<h4 id="commandtopics">Command Topics</h4>
<p>There are two command topics:</p>
<p><code>cmnd/&lt;device-id&gt;/power</code>: This is the simple on/off switch topic. Publishing a message with a payload of either &quot;ON&quot; or &quot;OFF&quot; will turn the strip on or off.</p>
<p><code>cmnd/&lt;device-id&gt;/color</code>: This is the more advanced topic and can even replace the normal <code>/power</code> topic. This topic expects a simple payload with the following structure:</p>
<pre><code class="language-json">{
    &quot;state&quot;: &quot;ON&quot;,
    &quot;brightness&quot;: 255,
    &quot;color&quot;: {&quot;r&quot;: 255, &quot;g&quot;: 0, &quot;b&quot;: 0}
}
</code></pre>
<p>You can quickly set the state, brightness and color in one hit using this topic, or any of the three keys individually (which is also what Home Assistant will do).</p>
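You can exercise this topic from the command line with `mosquitto_pub` (assuming the mosquitto clients are installed; substitute your broker and device ID):

```shell
# Turn the strip on at full brightness, coloured red
mosquitto_pub -h <your-broker-host> -t 'cmnd/<device-id>/color' \
  -m '{"state": "ON", "brightness": 255, "color": {"r": 255, "g": 0, "b": 0}}'
```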
<h4 id="statetopics">State Topics</h4>
<p>The controller will automatically publish a status message to the two topics below whenever it acts on an incoming MQTT message.</p>
<p><code>stat/&lt;device-id&gt;/power</code>: Just like its <code>cmnd</code> equivalent, messages on this topic will have their payloads set to the string &quot;ON&quot; or &quot;OFF&quot; to reflect the current state.</p>
<p><code>stat/&lt;device-id&gt;/color</code>: Again, this topic works much like its <code>cmnd</code> equivalent, publishing the same JSON payload (with <a href="https://github.com/AnaviTechnology/anavi-light-controller-sw/issues/4">one exception</a>) to reflect the current LED state.</p>
<h3 id="addingthecontrollertohomeassistant">Adding the Controller to Home Assistant</h3>
<p>To add your controller to Home Assistant, just add the following YAML to your <code>configuration.yaml</code>, substituting your device ID from the earlier step (and changing the name to match your lights):</p>
<pre><code class="language-yaml">light:
  - platform: mqtt_json
    name: Lounge LEDs
    command_topic: &quot;cmnd/&lt;device-id&gt;/color&quot;
    state_topic: &quot;stat/&lt;device-id&gt;/color&quot;
    brightness: true
    rgb: true
</code></pre>
<p>You should then be able to restart your Home Assistant and you will have a new entry in your Home Assistant &quot;Lights&quot; panel.</p>
<h3 id="bonusround">Bonus Round</h3>
<p>Every single MQTT payload, complete with the topic it's published on, is also written directly to the serial console, so even if you're having trouble with your broker, you can watch the console instead. This output even includes all the configuration details, such as the broker address and any user/password configured. (If you bought the Starter or Advanced Kit, you will have received a USB UART adapter for exactly this purpose.)</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using QEMU to emulate a Raspberry Pi]]></title><description><![CDATA[A simple tutorial on using QEMU to emulate a Raspberry Pi on your PC! Covers installation and configuration of a virtual Pi on any QEMU-supported OS.]]></description><link>https://blog.agchapman.com/using-qemu-to-emulate-a-raspberry-pi/</link><guid isPermaLink="false">5efb05982228b500014be03d</guid><category><![CDATA[linux]]></category><category><![CDATA[virtualisation]]></category><category><![CDATA[how-to]]></category><category><![CDATA[raspberry-pi]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Mon, 28 Aug 2017 01:47:44 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2017/08/patrick-schneider-346887-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://blog.agchapman.com/content/images/2017/08/patrick-schneider-346887-sm.jpg" alt="Using QEMU to emulate a Raspberry Pi"><p>If you're building software for the Raspberry Pi (like I sometimes do), it can be a pain to have to constantly keep Pi hardware around and spotting Pi-specific problems can be difficult until too late.</p>
<p>One option (and the one I most like) is to emulate a Raspberry Pi locally before ever hitting the device. Why?</p>
<ul>
<li>Works anywhere you can install QEMU</li>
<li>No hardware setup needed (<em>no more scratching around for a power supply</em>)</li>
<li>Faster feedback cycle compared to hardware</li>
<li>I can use Pi software (like Raspbian) in a virtual context</li>
<li>I can prep my &quot;virtual Pi&quot; with all the tools I need regardless of my physical Pi's use case</li>
</ul>
<p>Given I'm next-to-useless at Python, that last one is pretty important as it allows me to install every Python debugging and testing tool known to man on my virtual Pi while my end-product hardware stays comparatively pristine.</p>
<h2 id="gettingstarted">Getting started</h2>
<p>First, you'll need a few prerequisites:</p>
<h4 id="qemumorespecificallyqemusystemarm">QEMU (more specifically <code>qemu-system-arm</code>)</h4>
<p>You can find all the packages for your chosen platform on the <a href="https://www.qemu.org/download/">QEMU website</a>; it's installable across Linux, macOS and even Windows.</p>
<h4 id="raspbian">Raspbian</h4>
<p>Simply download the copy of Raspbian you need from the <a href="https://www.raspberrypi.org/downloads/raspbian/">official site</a>. Personally, I used the <code>2017-08-16</code> version of Raspbian Lite, since I don't need an X server.</p>
<h4 id="kernel">Kernel</h4>
<p>Since the standard RPi kernel can't be booted out of the box on QEMU, we'll need a custom kernel. We'll cover that in the next step.</p>
<h2 id="preparing">Preparing</h2>
<h4 id="getyourkernel">Get your kernel</h4>
<p>First, you'll need to download a kernel. Personally, I (along with most people) use the <a href="https://github.com/dhruvvyas90/qemu-rpi-kernel">dhruvvyas90/qemu-rpi-kernel</a> repository's kernels. Either clone the repo:</p>
<pre><code class="language-bash">git clone https://github.com/dhruvvyas90/qemu-rpi-kernel.git
</code></pre>
<p>or download a kernel directly:</p>
<pre><code class="language-bash">curl -LO https://github.com/dhruvvyas90/qemu-rpi-kernel/raw/master/kernel-qemu-4.4.34-jessie
</code></pre>
<p>For the rest of these steps I'm going to be using the <code>kernel-qemu-4.4.34-jessie</code> kernel, so update the commands as needed if you're using another version.</p>
<h4 id="filesystemimage">Filesystem image</h4>
<blockquote>
<p>This step is optional, but recommended</p>
</blockquote>
<p>When you download the Raspbian image it will be in the <em>raw</em> format, a plain disk image (generally with an <code>.img</code> extension).</p>
<p>A more efficient option is to convert this to a <em>qcow2</em> image first. Use the <code>qemu-img</code> command to do this:</p>
<pre><code class="language-bash">qemu-img convert -f raw -O qcow2 2017-08-16-raspbian-stretch-lite.img raspbian-stretch-lite.qcow
</code></pre>
<p>Now we can also easily expand the image:</p>
<pre><code class="language-bash">qemu-img resize raspbian-stretch-lite.qcow +6G
</code></pre>
<blockquote>
<p>You can check on your image using the <code>qemu-img info</code> command</p>
</blockquote>
<h2 id="starting">Starting</h2>
<p>You've got everything you need now: a kernel, a disk image, and QEMU!</p>
<p>Actually running the virtual Pi is done using the <code>qemu-system-arm</code> command, and it can be quite complicated. The full command is this (don't worry, it's explained below):</p>
<pre><code class="language-bash">sudo qemu-system-arm \
-kernel ./kernel-qemu-4.4.34-jessie \
-append &quot;root=/dev/sda2 panic=1 rootfstype=ext4 rw&quot; \
-hda raspbian-stretch-lite.qcow \
-cpu arm1176 -m 256 \
-M versatilepb \
-no-reboot \
-serial stdio \
-net nic -net user \
-net tap,ifname=vnet0,script=no,downscript=no
</code></pre>
<p>So, in order:</p>
<ul>
<li><code>sudo qemu-system-arm</code>: you need to run QEMU as <code>root</code></li>
<li><code>-kernel</code>: this is the path to the QEMU kernel we downloaded in the previous step</li>
<li><code>-append</code>: here we are providing the boot args directly to the kernel, telling it where to find its root filesystem and what type it is</li>
<li><code>-hda</code>: here we're attaching the disk image itself</li>
<li><code>-cpu</code>/<code>-m</code>: this sets the CPU type and RAM limit to match a Raspberry Pi</li>
<li><code>-M</code>: this sets the machine we are emulating. <code>versatilepb</code> is the 'ARM Versatile/PB' machine</li>
<li><code>-no-reboot</code>: just tells QEMU to exit rather than rebooting the machine</li>
<li><code>-serial</code>: redirects the machine's virtual serial port to our host's stdio</li>
<li><code>-net</code>: this configures the machine's network stack to attach a NIC, use the user-mode stack, and connect the host's <code>vnet0</code> TAP device to the new NIC without running any up/down config scripts.</li>
</ul>
<p>If it's all gone well, you should now have a QEMU window pop up and you should see the familiar Raspberry Pi boot screen show up.</p>
<p>Now, go get yourself a drink to celebrate, because it might take a little while.</p>
<h2 id="networking">Networking</h2>
<p>Now, that's all well and good, but without networking, we may as well be back on hardware. When the machine started, it attached a NIC and connected it to the host's <code>vnet0</code> TAP device. If you give your guest an IP and add the TAP device to a bridge on your host, you'll be able to reliably access it like any other virtual machine.</p>
<h4 id="onhostfindabridgeandaddress">(on host) Find a bridge and address</h4>
<p>This will vary by host, but on my Fedora machine, for example, there is a pre-configured <code>virbr0</code> bridge interface with an address in the <code>192.168.122.0/24</code> space:</p>
<pre><code class="language-text">virbr0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 00:00:00:1e:77:43  txqueuelen 1000  (Ethernet)
</code></pre>
<p>I'm going to use this bridge and just pick a static address for my Pi: <code>192.168.122.200</code></p>
<blockquote>
<p>Reusing an existing (pre-configured) bridge means you won't need to sort your own routing</p>
</blockquote>
<h4 id="inguestconfigureinterface">(in guest) Configure interface</h4>
<p><em>NOTE</em>: I'm assuming Stretch here.</p>
<p>Open <code>/etc/dhcpcd.conf</code> in your new virtual Pi and configure the <code>eth0</code> interface with a static address in your bridge's subnet. For example, for my bridge:</p>
<pre><code># in /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.122.200/24
static routers=192.168.122.1
static domain_name_servers=8.8.8.8 8.8.4.4
</code></pre>
<blockquote>
<p>You may need to reboot for this to take effect</p>
</blockquote>
<h4 id="inhostaddtaptobridge">(in host) Add TAP to bridge</h4>
<p>Finally, add the machine's TAP interface to your chosen bridge with the <code>brctl</code> command:</p>
<pre><code class="language-bash">sudo brctl addif virbr0 vnet0
</code></pre>
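If your distro no longer ships `brctl`, the iproute2 equivalent does the same job:

```shell
sudo ip link set vnet0 master virbr0
```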
<p>Now, on your host, you should be able to ping <code>192.168.122.200</code> (or your Pi's address).</p>
<h4 id="setupssh">Set up SSH</h4>
<p>Now, in your machine, you can run <code>sudo raspi-config</code> and enable the SSH server (in the &quot;Interfacing Options&quot; menu at time of writing).</p>
<blockquote>
<p>Make sure you change the password from default while you're there!</p>
</blockquote>
<p>Finally, on your host, run <code>ssh-copy-id pi@192.168.122.200</code> to copy your SSH key into the Pi's <code>pi</code> user and you can now SSH directly into your Pi without a password prompt.</p>
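As a final smoke test, run a command over SSH to confirm you're really talking to the emulated ARM system:

```shell
ssh pi@192.168.122.200 uname -m   # should report an ARM architecture, not x86_64
```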
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[What's new in Downlink 0.2: Plugins]]></title><description><![CDATA[This post outlines Downlink's new plugin support, including building plugins, installing them in Downlink and even the experimental local plugin support!]]></description><link>https://blog.agchapman.com/whats-new-in-downlink-2-plugins/</link><guid isPermaLink="false">5efb05982228b500014be038</guid><category><![CDATA[downlink]]></category><category><![CDATA[c#]]></category><category><![CDATA[how-to]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Fri, 25 Aug 2017 07:58:30 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2017/08/gauravdeep-singh-bansal-345373-sm-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://blog.agchapman.com/content/images/2017/08/gauravdeep-singh-bansal-345373-sm-1.jpg" alt="What's new in Downlink 0.2: Plugins"><p>This is part of a series of posts on the new features added in Downlink 0.2! Check out <a href="https://blog.agchapman.com/announcing-downlink-2/">this post</a> for more or hit <a href="https://agc93.github.io/downlink/">the docs</a>.</p>
</blockquote>
<p>So all week we've been looking at all these cool new features available in Downlink like all the new extension points or the flexible routing behaviour, but a lot of it has required getting your hands dirty in your app's startup code and handling things like dependency injection in multiple places.</p>
<p>If that all seems a bit messy to you, you're quite right. To that end, Downlink has its own powerful plugin system!</p>
<h2 id="whatsinaplugin">What's in a plugin?</h2>
<p>So a plugin, in Downlink-land, is just a type (often in its own assembly) that can add new features or behaviours to Downlink in one easy motion. In fact, plugins in Downlink are essentially just a package/wrapper around all the cool new features we've been discussing this week.</p>
<p>Plugins allow you to ship, for example, support for a whole new storage service, including its own storage backend, scheme client, and supported pattern matchers, all in one easy-to-consume package.</p>
<blockquote>
<p>Plugins are so handy that most of the moving parts included in Downlink out of the box are implemented as plugins!</p>
</blockquote>
<h2 id="usingpluginsinyourapp">Using plugins in your app</h2>
<p>Since plugins are a package of extension points, they need just one change to add to your app:</p>
<pre><code class="language-csharp">// in Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().AddDownlink(d =&gt; d.UsePlugin&lt;MyPlugin&gt;());
}
</code></pre>
<p>The <code>MyPlugin</code> plugin will then be loaded, added to your app, and any services it provides will be available in the app.</p>
<h4 id="pluginsinthedockerimage">Plugins in the Docker image</h4>
<p>If you want to use plugins without running your own host, you can also take advantage of highly experimental support for loading plugins from the local folder!</p>
<p>You can see how to get started with local plugin discovery in the <a href="https://agc93.github.io/downlink/doc/dev/plugins.html#default-plugin-discovery">online docs</a>.</p>
<h2 id="buildingplugins">Building plugins</h2>
<p>If you want to build a plugin for Downlink, it's pretty simple:</p>
<ul>
<li>Optionally, create and reference a class library</li>
<li>Add a class that implements <code>IDownlinkPlugin</code></li>
<li>Add it to your startup code</li>
</ul>
<p>That's it! Restart the app and your plugin will be invoked, its services will be added to the app container, and you're good to go!</p>
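<p>Putting those steps together, a minimal plugin might look something like the sketch below. Note that the hook method's name and signature here are assumptions based on the description of <code>IDownlinkPlugin</code> further down, and <code>MyService</code>/<code>MyMatcher</code> are hypothetical types, so check the interface source for the real shape:</p>
<pre><code class="language-csharp">using Microsoft.Extensions.DependencyInjection;

public class MyPlugin : IDownlinkPlugin
{
    // Assumed hook: called once at startup with access to the DI container
    public void ConfigureServices(IDownlinkBuilder builder)
    {
        builder.Services.AddSingleton&lt;MyService&gt;(); // any extra services you need
        builder.AddPatternMatcher&lt;MyMatcher&gt;();     // plus your extension points
    }
}
</code></pre>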
<h2 id="thetechnicaldetails">The technical details</h2>
<p>Downlink's plugin system is essentially a wrapper over the existing DI-powered extension points. The <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Composition/IDownlinkPlugin.cs#L6"><code>IDownlinkPlugin</code></a> interface just provides a single &quot;hook&quot; method to add and configure services, with direct access to the DI container (via the <code>IDownlinkBuilder</code>) to make registering services easier.</p>
<p>Since they're much simpler, quite a few of Downlink's existing moving parts like the <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/DownlinkRoutingPlugin.cs">routing</a>, <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/DownlinkContextPlugin.cs">context</a> and all of the <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/DownlinkDefaultServices.cs">default services</a> are implemented as plugins that get loaded <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/MvcBuilderExtensions.cs#L67">first</a>.</p>
<p>Since plugins themselves are resolved out of DI, you can also use your plugin's constructor to import required services (<a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/DownlinkContextPlugin.cs#L13">example</a>). It's the <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Composition/IPluginLoader.cs"><code>IPluginLoader</code></a>/<a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Composition/PluginLoader.cs"><code>PluginLoader</code></a> that does the actual loading.</p>
<hr>
<h2 id="summary">Summary</h2>
<p>That's it for this week's journey through Downlink 0.2's new features! Hopefully you'll find some cool new uses for Downlink and get a chance to try out all its new features.</p>
<p>As always, you can find help in <a href="https://agc93.github.io/downlink/">the docs</a>, <a href="https://github.com/agc93/downlink">on GitHub</a>, or <a href="https://gitter.im/agc93/downlink">on Gitter</a>.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[What's new in Downlink 0.2: Routing]]></title><description><![CDATA[This post outlines the new routing changes in Downlink 0.2 including how to add a static prefix or even change the logic used to build Downlink routes!]]></description><link>https://blog.agchapman.com/whats-new-in-downlink-2-routing/</link><guid isPermaLink="false">5efb05982228b500014be03c</guid><category><![CDATA[downlink]]></category><category><![CDATA[c#]]></category><category><![CDATA[how-to]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Thu, 24 Aug 2017 06:58:50 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2017/08/rory-hennessey-345296-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://blog.agchapman.com/content/images/2017/08/rory-hennessey-345296-sm.jpg" alt="What's new in Downlink 0.2: Routing"><p>This is part of a series of posts on the new features added in Downlink 0.2! Check out <a href="https://blog.agchapman.com/announcing-downlink-2/">this post</a> for more or hit <a href="https://agc93.github.io/downlink/">the docs</a>.</p>
</blockquote>
<p>On Tuesday, we saw how you can now self-host Downlink in an existing (or new) ASP.NET MVC Core app. However, Downlink's routing pattern is quite aggressive and you might find it kicking in for routes you'd prefer it didn't.</p>
<p>Before the rebuild, you had very little control over Downlink's internals, but if you're self-hosting, you now have much more power!</p>
<p>In fact, you can now easily add a route prefix to Downlink through no fewer than three different methods:</p>
<h3 id="1configuration">1. Configuration</h3>
<p>The easiest method is to define the <code>DownlinkPrefix</code> configuration key (using a config file, or environment variable). When you do so, that prefix will be automatically added to Downlink's routes, no further changes required!</p>
<blockquote>
<p>Since this is part of the default setup, this configuration also works the same for the Docker image!</p>
</blockquote>
<p>Add the prefix to your configuration, such as 'download':</p>
<pre><code class="language-json">{
  &quot;DownlinkPrefix&quot;: &quot;download&quot;
}
</code></pre>
<p>Now, instead of a request to <code>yourapp.com/v1.2/windows/x64</code> triggering your download, you'll want <code>yourapp.com/download/v1.2/windows/x64</code>. It's that simple!</p>
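<p>If you'd rather skip the config file, the same key should also work as an environment variable (assuming the standard ASP.NET Core configuration mapping), for example when running the Docker image:</p>
<pre><code class="language-bash">docker run -it --rm -p 80:80 \
-e DownlinkPrefix=download \
agc93/downlink:latest
</code></pre>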
<h3 id="2code">2. Code</h3>
<p>If you're self-hosting you can also easily add a new route prefix in the startup code of your app. Use the fluent overload of the <code>AddDownlink()</code> method to call the <code>UseRoutePrefix</code> method. For example, to add the same 'download' prefix we used above:</p>
<pre><code class="language-csharp">// in Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().AddDownlink(d =&gt; d.UseRoutePrefix(&quot;download&quot;));
}
</code></pre>
<p>Now this works pretty well for simple &quot;add a string to the start&quot; routing scenarios, but there's another much more powerful option!</p>
<h3 id="3extensionpoint">3. Extension point</h3>
<p>We saw yesterday that Downlink 0.2 now supports extensibility for most of its core and internal functionality. Fortunately, that even extends to the routing behaviour!</p>
<p>Internally, Downlink attempts to resolve an instance of <code>IRoutePrefixBuilder</code> to determine what route prefix (if any) to apply to Downlink routes when the app starts. In fact, the two methods above are using those same extension points (they're <code>ConfigurationRoutePrefixBuilder</code> and <code>StaticRoutePrefixBuilder</code>, respectively).</p>
<p>So, to control the routing prefix more directly, you can also use the builder to add your own prefix implementation with whatever logic you like. For example:</p>
<pre><code class="language-csharp">// in Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().AddDownlink(d =&gt; d.UseRouteBuilder&lt;MyAwesomeBuilder&gt;());
}
</code></pre>
<p>Then, Downlink will call your builder's <code>GetPrefix</code> method when the app starts to get the prefix to prepend to Downlink's routes.</p>
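<p>As a sketch, a builder that reads its prefix from an environment variable might look like this (<code>GetPrefix</code>'s exact signature is an assumption based on the description above, and <code>MY_PREFIX</code> is just an illustrative variable name):</p>
<pre><code class="language-csharp">using System;

public class MyAwesomeBuilder : IRoutePrefixBuilder
{
    // Assumed signature: called once at startup to resolve the prefix
    public string GetPrefix()
    {
        return Environment.GetEnvironmentVariable(&quot;MY_PREFIX&quot;) ?? &quot;download&quot;;
    }
}
</code></pre>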
<h2 id="thetechnicaldetails">The technical details</h2>
<p>The <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Infrastructure/IRoutePrefixBuilder.cs"><code>IRoutePrefixBuilder</code></a> that we use to resolve the route prefix (see above) is a very simple interface just used to get a prefix. The actual routing changes are done in the <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Infrastructure/DownlinkRouteConstraint.cs#L17"><code>DownlinkRouteConvention</code></a>. This type makes use of ASP.NET Core's new <code>IActionModelConvention</code> to apply an app-wide &quot;convention&quot; to any actions.</p>
<p>The default convention (which you can replace if you're feeling very adventurous) just goes through the actions in the <code>DownlinkController</code> and merges the prefix we resolved into the existing attribute routes to create our new route table entries.</p>
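<p>To illustrate the idea (this is a simplified sketch, not Downlink's actual convention code), an <code>IActionModelConvention</code> that merges a prefix into existing attribute routes can be as small as:</p>
<pre><code class="language-csharp">using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ApplicationModels;

public class PrefixConvention : IActionModelConvention
{
    private readonly AttributeRouteModel _prefix;

    public PrefixConvention(string prefix) =&gt;
        _prefix = new AttributeRouteModel(new RouteAttribute(prefix));

    public void Apply(ActionModel action)
    {
        foreach (var selector in action.Selectors)
        {
            // Combine the prefix with whatever attribute route already exists
            selector.AttributeRouteModel = AttributeRouteModel
                .CombineAttributeRouteModel(_prefix, selector.AttributeRouteModel);
        }
    }
}
</code></pre>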
<hr>
<p>As you can see, Downlink's routing is now much more flexible and puts the control back in your hands! Check back tomorrow for <a href="https://blog.agchapman.com/tag/downlink/">more posts on new features</a> in the rebuilt Downlink 0.2!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[What's new in Downlink 0.2: Extensibility]]></title><description><![CDATA[This post outlines the new extensibility support in Downlink 0.2 including all the built-in extension points or building your own logic on top of Downlink!]]></description><link>https://blog.agchapman.com/whats-new-in-downlink-2-extensibility/</link><guid isPermaLink="false">5efb05982228b500014be03a</guid><category><![CDATA[downlink]]></category><category><![CDATA[c#]]></category><category><![CDATA[how-to]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Wed, 23 Aug 2017 06:06:31 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2017/08/aperture-vintage-346923-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://blog.agchapman.com/content/images/2017/08/aperture-vintage-346923-sm.jpg" alt="What's new in Downlink 0.2: Extensibility"><p>This is part of a series of posts on the new features added in Downlink 0.2! Check out <a href="https://blog.agchapman.com/announcing-downlink-2/">this post</a> for more or hit <a href="https://agc93.github.io/downlink/">the docs</a>.</p>
</blockquote>
<p>While hosting Downlink (as we discussed yesterday) might be the big-ticket change in the rebuilt version, it wouldn't have been possible without this change: Downlink is now completely extensible!</p>
<p>That's right, you can now control and modify Downlink on the fly in your own code using any of a myriad of extension points. In fact, when self-hosting you get direct access to the services container it uses (shared with the hosting app) so you can register and resolve any service.</p>
<p>That's all good, but what does that look like in practice? Let's say you wanted to change how Downlink matches patterns in your app: simply implement the <code>IPatternMatcher</code> interface in a class (that we'll call <code>MyAwesomeMatcher</code>), and then in your startup code:</p>
<pre><code class="language-csharp">services.AddMvc().AddDownlink(d =&gt; d.AddPatternMatcher&lt;MyAwesomeMatcher&gt;());
</code></pre>
<p>Now, you can update your configuration and Downlink will automatically make sure your pattern matcher is available.</p>
<h3 id="dependencyinjection">Dependency Injection</h3>
<p>Since Downlink's extensibility is built around dependency injection, you can also use injection in your own extensions. Let's say that awesome pattern matcher from above needs access to the app configuration for some reason. Well, just add it to your constructor and Downlink will resolve it for you:</p>
<pre><code class="language-csharp">public class MyAwesomeMatcher : IPatternMatcher {
  public MyAwesomeMatcher(IConfiguration config) {
    // ...
  }
  // TRIMMED
}
</code></pre>
<p>If you need something that's not loaded by default, just add it in the startup code:</p>
<pre><code class="language-csharp">public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().AddDownlink(d =&gt; {
        d.Services.AddSingleton&lt;MyExtraService&gt;();
        d.AddPatternMatcher&lt;MyAwesomeMatcher&gt;();
    });
}
</code></pre>
<p>Now you can resolve <code>MyExtraService</code> in your constructor!</p>
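<p>For example, the matcher's constructor can now take the extra service alongside the configuration (sketched here using the same hypothetical types as above):</p>
<pre><code class="language-csharp">public class MyAwesomeMatcher : IPatternMatcher {
  private readonly MyExtraService _extra;

  public MyAwesomeMatcher(IConfiguration config, MyExtraService extra) {
    _extra = extra; // resolved from the same container you registered it in
  }
  // TRIMMED
}
</code></pre>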
<h3 id="disablingbuiltins">Disabling built-ins</h3>
<p>Downlink out of the box includes a lot of &quot;default&quot; implementations for moving parts like storage backends, scheme clients and pattern matchers. If you don't want those active in your app, you can use the new extensibility to also disable them in your startup code:</p>
<pre><code class="language-csharp">services.AddMvc().AddDownlink(
  d =&gt; d.AddPatternMatcher&lt;MyAwesomeMatcher&gt;(),
  DownlinkBuilderOptions.SkipDefaultPatterns
);
// now only your pattern matcher is active!
</code></pre>
<h3 id="usecases">Use Cases</h3>
<p>In reality, almost every moving part of Downlink can be replaced or controlled using these extension points, but the main ones (which have helpers provided) include:</p>
<ul>
<li>Storage backends</li>
<li>Pattern matchers</li>
<li>Scheme clients</li>
<li>Route builders</li>
<li>Plugins (more on that on Friday!)</li>
</ul>
<p>Note that the pre-built app (as included in Docker) has experimental plugin support for extensibility instead, but you'll need to wait for the post on plugins to see that in action!</p>
<h3 id="thetechnicaldetails">The technical details</h3>
<p>Okay, so for all the fancy talk this really comes down to a particularly heavy refactoring of the app code to be interface-driven and almost completely reliant on DI and IoC.</p>
<p>Basically every moving part in Downlink now implements an interface that gets resolved out of the DI container (currently using <code>Microsoft.Extensions.DependencyInjection</code>) and the app startup code registers all the sane defaults using a plugin. This way, self-hosting scenarios can replace any Downlink component by registering it with the DI container in the <code>AddDownlink</code> method!</p>
<hr>
<blockquote>
<p>You can find detailed documentation on Downlink's new extensibility <a href="https://agc93.github.io/downlink/doc/dev/developers.html">in the developer guide</a> (or check out all <a href="https://agc93.github.io/downlink/">the online docs</a>)</p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[What's new in Downlink 0.2: Hosting]]></title><description><![CDATA[This post outlines the new hosting changes in Downlink 0.2 including using the Docker image or hosting Downlink in your own ASP.NET Core apps]]></description><link>https://blog.agchapman.com/whats-new-in-downlink-2-hosting/</link><guid isPermaLink="false">5efb05982228b500014be039</guid><category><![CDATA[downlink]]></category><category><![CDATA[how-to]]></category><category><![CDATA[c#]]></category><dc:creator><![CDATA[Alistair Chapman]]></dc:creator><pubDate>Tue, 22 Aug 2017 06:11:04 GMT</pubDate><media:content url="https://blog.agchapman.com/content/images/2017/08/jenu-prasad-347241-sm.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><blockquote>
<img src="https://blog.agchapman.com/content/images/2017/08/jenu-prasad-347241-sm.jpg" alt="What's new in Downlink 0.2: Hosting"><p>This is part of a series of posts on the new features added in Downlink 0.2! Check out <a href="https://blog.agchapman.com/announcing-downlink-2/">this post</a> for more or hit <a href="https://agc93.github.io/downlink/">the docs</a>.</p>
</blockquote>
<p>So one of the biggest cornerstones of the rebuilt Downlink is that it is no longer a monolithic Web API! Downlink is now a modular MVC-based <em>component</em> that you can add to any ASP.NET Core app (targeting .NET Core 2.0).</p>
<h2 id="usingtheprebuiltversion">Using the pre-built version</h2>
<p>Just because we've made all these changes doesn't make Downlink any harder to get started with!</p>
<p>If you've already got a config file (like <code>config.yml</code>), you can run it just like this:</p>
<pre><code class="language-bash">docker run -it --rm -p 80:80 -v $PWD/config.yml:/downlink/config.yml agc93/downlink:latest
</code></pre>
<p>Even if you don't use a config file, you can use environment variables instead:</p>
<pre><code class="language-bash">docker run -it --rm \
-p 80:80 \
-e DOWNLINK:Storage=GitHub \
-e DOWNLINK:GitHubStorage:Repository=gohugoio/hugo \
agc93/downlink:latest
</code></pre>
<p><em>(we're just using Hugo as an example here)</em></p>
<p>The trick is, you might not <strong>want</strong> to run a whole separate app/image just for your downloads. What if you want to add Downlink's awesome magic to your existing apps?</p>
<h2 id="hostingdownlinkinanotherapp">Hosting Downlink in another app</h2>
<p>Downlink 0.2 has now been re-built as an MVC component, so you can install it into an existing app much more easily.</p>
<p>In fact, when you run the <a href="https://hub.docker.com/r/agc93/downlink/">Docker image</a> above, you're actually running a super-lightweight ASP.NET Core app with Downlink pre-installed!</p>
<p>So, how do you host it yourself? Just create a new app (say, with <code>dotnet new api</code> for example) and follow the steps below:</p>
<h4 id="1installdownlink">1. Install Downlink</h4>
<p>Just run <code>dotnet add package Downlink</code> or add the <code>PackageReference</code> to your <code>csproj</code>:</p>
<pre><code class="language-xml">&lt;PackageReference Include=&quot;Downlink&quot; Version=&quot;0.2.0&quot; /&gt;
</code></pre>
<h4 id="2addtheconfigurationtoyourprogramcs">2. Add the configuration to your <code>Program.cs</code></h4>
<p>In <code>Program.cs</code>, just update your <code>BuildWebHost</code> to add the required configuration:</p>
<pre><code class="language-csharp">WebHost.CreateDefaultBuilder(args)
            .ConfigureDownlink() // &lt;-- add this line!
            .UseStartup&lt;Startup&gt;()
            .Build();
</code></pre>
<blockquote>
<p>You'll need to provide configuration such as from environment variables or config files, just like when running directly.</p>
</blockquote>
<h4 id="3adddownlinktoyourappinstartupcs">3. Add Downlink to your app (in <code>Startup.cs</code>)</h4>
<p>Now in <code>Startup.cs</code>, just add a single call to your <code>ConfigureServices</code> method:</p>
<pre><code class="language-csharp">services.AddMvc().AddDownlink();
</code></pre>
<p>That's it! Downlink is now installed and when you run your app (such as using <code>dotnet run</code>), Downlink will register itself with MVC and be listening for requests.</p>
<h2 id="anoteofcaution">A note of caution</h2>
<p>It's worth noting that Downlink's routing is quite aggressive so I recommend using Downlink 0.2's new routing support! Check back on Thursday for the full details on controlling your routing.</p>
<h2 id="thetechnicaldetails">The technical details</h2>
<p>Since Downlink uses a whole bunch of the built-in MVC features, it's not as easy as adding a new  middleware and standing back. Instead, Downlink uses MVC's new <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/advanced/app-parts"><em>Application Part</em></a> abstraction.</p>
<p>You can see <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/MvcBuilderExtensions.cs#L48">here</a> that we only need to register the hosting assembly as an application part and MVC will use it to pull resources (controllers etc) and add them to the hosting app. Downlink's internal <code>DownlinkBuilder</code> type is then responsible for configuration and extensions (see tomorrow's post for more on that!).</p>
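<p>Conceptually, that registration boils down to something like the following sketch (simplified; the real code in <code>MvcBuilderExtensions</code> does more, and the controller type name here is illustrative):</p>
<pre><code class="language-csharp">using Microsoft.AspNetCore.Mvc.ApplicationParts;
using Microsoft.Extensions.DependencyInjection;

services.AddMvc().ConfigureApplicationPartManager(apm =&gt;
    // Register Downlink's assembly so MVC discovers its controllers
    apm.ApplicationParts.Add(new AssemblyPart(typeof(DownlinkController).Assembly)));
</code></pre>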
<p>The only other important part is that <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink/Hosting/WebHostBuilderExtensions.cs#L8"><code>ConfigureDownlink()</code> method</a> that we added earlier. That just adds in the configuration that Downlink expects!</p>
<p>And just to prove the point you can see that <a href="https://github.com/agc93/downlink/blob/develop/src/Downlink.Host">the app we package in the Docker image</a> is built like any other hosting app, no magic here!</p>
<hr>
<blockquote>
<p>You can find detailed documentation on hosting Downlink <a href="https://agc93.github.io/downlink/doc/dev/hosting.html">here</a> (or check out all <a href="https://agc93.github.io/downlink/">the online docs</a>)</p>
</blockquote>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>