<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Tobias Moe Thorstensen]]></title><description><![CDATA[..thoughts on software development]]></description><link>https://moethorstensen.no/</link><image><url>https://moethorstensen.no/favicon.png</url><title>Tobias Moe Thorstensen</title><link>https://moethorstensen.no/</link></image><generator>Ghost 3.41</generator><lastBuildDate>Sun, 11 May 2025 18:48:03 GMT</lastBuildDate><atom:link href="https://moethorstensen.no/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[My favorite features in .NET 10]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>.NET 10 is set to launch in November 2025 as the next LTS release, with free support and patches for three years from its release date. Here are three of my favorite new features that I’ll definitely use in my day-to-day work.</p>
<h2 id="extensionmembers">Extension Members</h2>
<p>I’ve used extension methods</p>]]></description><link>https://moethorstensen.no/net10/</link><guid isPermaLink="false">682050e9ed97a700014e1e63</guid><category><![CDATA[.NET10]]></category><category><![CDATA[NET10]]></category><category><![CDATA[C#14]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Sun, 11 May 2025 09:40:53 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2025/05/header-10.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2025/05/header-10.jpg" alt="My favorite features in .NET 10"><p>.NET 10 is set to launch in November 2025 as the next LTS release, with free support and patches for three years from its release date. Here are three of my favorite new features that I’ll definitely use in my day-to-day work.</p>
<h2 id="extensionmembers">Extension Members</h2>
<p>I’ve used extension methods extensively since their introduction in C# 3 back in 2007. Now, with C# 14, this concept is being further expanded to include <em>extension members</em>. Before the release of C# 14, an extension method would typically look like this:</p>
<pre><code class="language-csharp">public static int Subtract(this int number, int subtractBy) 
    =&gt; number - subtractBy;
</code></pre>
<p>But now, with C# 14, we can define extension methods and even extension properties inside an <code>extension</code> block, like this:</p>
<pre><code class="language-csharp">public static class Extensions 
{
    extension(string sentence) 
    {
        public int CountWords() 
            =&gt; sentence.Split(' ', StringSplitOptions.RemoveEmptyEntries).Length;
        
        public string FirstWord 
            =&gt; sentence.Split(' ', StringSplitOptions.RemoveEmptyEntries).FirstOrDefault() ?? string.Empty;
    }
}
</code></pre>
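<p>Once the block above is in scope, the new members resolve on any <code>string</code> as if they were instance members. A quick usage sketch (the sample sentence is made up):</p>
<pre><code class="language-csharp">var sentence = &quot;The quick brown fox&quot;;

// The extension method is called exactly like an instance method...
Console.WriteLine(sentence.CountWords()); // 4

// ...and the extension property like a regular property.
Console.WriteLine(sentence.FirstWord);    // The
</code></pre>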
<h2 id="typedresultsforserversenteventssse">TypedResults for Server Sent Events (SSE)</h2>
<p>In client-server communication, a pull-based approach is commonly used where the client requests resources via a RESTful API or a similar mechanism. The issue with this model is that it can generate excessive server traffic when clients repeatedly poll for resources that aren’t yet available.</p>
<p>To address this, we can shift from a pull-based model to a push-based one, where the server notifies the client as soon as the resource becomes available.</p>
<p>ASP.NET Core already offers several options for push-based communication. You can use SignalR, which leverages <code>WebSockets</code> to notify clients when something happens on the backend. Alternatively, you can use <code>Server-Sent Events (SSE)</code> for one-way, real-time updates from server to client over HTTP. The only requirement for an HTTP response to be recognized as an event stream is that the <code>Content-Type</code> header is set to <code>text/event-stream</code>.</p>
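<p>On the wire, an event stream is plain newline-delimited text: each message carries a <code>data:</code> line, optionally preceded by fields such as <code>event:</code> and <code>id:</code>, and is terminated by a blank line. A stream of two messages could look like this (the payloads are made up):</p>
<pre><code class="language-text">event: result
data: {&quot;value&quot;: 42}

data: a second message, plain text this time
</code></pre>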
<p>Server-Sent Events aren't something new in ASP.NET Core, but with the release of .NET 10 they're <strong>really</strong> easy to implement. When .NET Core 3 was released back in 2019, a new type called <code>IAsyncEnumerable&lt;T&gt;</code> was introduced. <code>IAsyncEnumerable&lt;T&gt;</code> is an interface in C# that allows you to iterate over a sequence of data asynchronously without blocking. An instance of this interface can now be passed as a parameter to the new method on <code>TypedResults</code>:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);

var app = builder.Build();
app.MapGet(&quot;/results&quot;, (CancellationToken token) =&gt; 
{    
    IAsyncEnumerable&lt;SseItem&lt;Result&gt;&gt; results = GetResults(token);
    return TypedResults.ServerSentEvents(results);
});    
</code></pre>
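<p>On the consuming side, you don't need anything fancy to read the stream; a minimal sketch with plain <code>HttpClient</code> (the localhost address is a placeholder, and production code should also handle <code>event:</code>/<code>id:</code> fields and reconnects):</p>
<pre><code class="language-csharp">// Minimal console client for an SSE endpoint like the one above.
using var client = new HttpClient();
using var stream = await client.GetStreamAsync(&quot;http://localhost:5000/results&quot;);
using var reader = new StreamReader(stream);

// The stream is line-based: payloads arrive on &quot;data:&quot; lines.
while (await reader.ReadLineAsync() is { } line)
{
    if (line.StartsWith(&quot;data: &quot;))
        Console.WriteLine(line[&quot;data: &quot;.Length..]);
}
</code></pre>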
<h2 id="validationsupportinminimalapis">Validation Support in Minimal APIs</h2>
<p>When it comes to validation, ASP.NET Core offers excellent support through the <code>System.ComponentModel.DataAnnotations</code> namespace. It includes a wide range of built-in attributes and also allows you to define custom ones.</p>
<p>Until now, validation hasn’t been built into Minimal APIs, but this changes with the release of .NET 10. When I say ‘built-in’, I don’t mean it wasn’t possible before; it just required you to manually implement validation logic within each endpoint or create a custom filter.</p>
<p>Beginning with ASP.NET Core in .NET 10, you can simply write:</p>
<pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);
builder.Services.AddValidation();

var app = builder.Build();

app.MapPatch(&quot;/update-email&quot;, ([EmailAddress] string address) =&gt;
{
   ....
   return TypedResults.Ok();
});
</code></pre>
<p>The call to <code>AddValidation</code> ensures that all necessary services to support validation are registered in the container. Pretty easy!</p>
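<p>When the bound value fails validation, the endpoint body never runs; the framework short-circuits with an HTTP 400 and a validation problem details payload along these lines (the exact shape and wording may differ):</p>
<pre><code class="language-json">{
  &quot;title&quot;: &quot;One or more validation errors occurred.&quot;,
  &quot;status&quot;: 400,
  &quot;errors&quot;: {
    &quot;address&quot;: [&quot;The address field is not a valid e-mail address.&quot;]
  }
}
</code></pre>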
<p>Those are my top three new features in .NET 10 that I believe will have a real impact on my day-to-day work as a .NET developer. What are yours?</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Persist data to an Azure Disk from a Prometheus instance running in Azure Kubernetes Service]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>My most recent posts have been in the landscape of &quot;How to monitor applications running in Kubernetes&quot;. This time is no different. Today we'll look at how we can persist Prometheus data to an Azure Disk using a custom-built Storage Class.</p>
<p>In Kubernetes, a Storage Class defines</p>]]></description><link>https://moethorstensen.no/aks-prometheus-azure-disk/</link><guid isPermaLink="false">66570abced97a700014e1d24</guid><category><![CDATA[AKS]]></category><category><![CDATA[Azure Kubernetes Services]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Wed, 29 May 2024 12:50:22 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2024/05/k8s-storage.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2024/05/k8s-storage.png" alt="Persist data to an Azure Disk from a Prometheus instance running in Azure Kubernetes Service"><p>My most recent posts have been in the landscape of &quot;How to monitor applications running in Kubernetes&quot;. This time is no different. Today we'll look at how we can persist Prometheus data to an Azure Disk using a custom-built Storage Class.</p>
<p>In Kubernetes, a Storage Class defines the &quot;classes&quot; of storage available in a cluster. Administrators use Storage Classes to describe different types of storage offered, such as HDDs or network-attached storage (NAS), or in our case an Azure Disk backed by an SSD.<br>
When applications request persistent storage for their applications, they can specify a Storage Class in their Persistent Volume Claim (PVC). The Kubernetes control plane then uses the specified Storage Class to dynamically provision a Persistent Volume (PV) that meets the requirements. Storage Classes allow for flexible and efficient management of storage resources, ensuring that the right type of storage is allocated based on application needs.</p>
<p>Before we dive into the world of yaml files, we need to provision our Azure Disk. It doesn't really matter how this resource is provisioned; I prefer using Bicep to deploy my Azure resources. The template below creates a Locally Redundant SSD disk with 32 GiB of storage.</p>
<pre><code class="language-bicep">param location string
param name string
param tags object

resource diskprometheus 'Microsoft.Compute/disks@2023-10-02' = {
  name: name
  location: location
  tags: tags
  sku: {
    name: 'Premium_LRS'
  }
  properties: {
    creationData: {
      createOption: 'Empty'
    }
    diskSizeGB: 32
    diskIOPSReadWrite: 120
    diskMBpsReadWrite: 25
    encryption: {
      type: 'EncryptionAtRestWithPlatformKey'
    }
    tier: 'P4'
  }
}
</code></pre>
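<p>As a sketch, the template can then be deployed with the Azure CLI; the resource group, file name and parameter values below are placeholders for your own:</p>
<pre><code class="language-bash">az deployment group create \
  --resource-group rg-monitoring \
  --template-file disk.bicep \
  --parameters name=disk-prometheus location=westeurope tags='{}'
</code></pre>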
<p>Once you have provisioned the resource, we can go ahead and craft our beloved yaml files.</p>
<h3 id="craftingouryamlfiles">Crafting our yaml files</h3>
<pre><code class="language-yaml">apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: azuredisk-prometheus
provisioner: disk.csi.azure.com
parameters:
    skuName: Premium_LRS
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
</code></pre>
<p>A few things to note:</p>
<ul>
<li>
<p>We are using the Azure Disk Container Storage Interface (CSI). CSI drivers in Kubernetes enable the integration of storage systems with Kubernetes clusters. The CSI standardizes how storage vendors can expose their storage systems to containerized workloads.</p>
</li>
<li>
<p>We specify <code>Retain</code> as our <code>ReclaimPolicy</code>. This policy dictates the default lifetime of the <code>Persistent Volume</code> when the associated <code>Persistent Volume Claims</code> are deleted. In our case we'll keep the data when the associated PVCs are deleted.</p>
</li>
<li>
<p>Our <code>VolumeBindingMode</code> is set to <code>Immediate</code>. In the Immediate binding mode, the <code>Persistent Volume</code> is bound to the <code>Persistent Volume Claim</code> as soon as the latter is created. The binding and provisioning occur immediately without waiting for a pod to be scheduled as it would be in the <code>WaitForFirstConsumer</code> mode.</p>
</li>
<li>
<p>Setting <code>AllowVolumeExpansion</code> to <code>true</code> enables the Persistent Volume to be dynamically resized without disrupting the running workloads.</p>
</li>
</ul>
<p>Still with me? Let's go ahead and create the specifications for our Persistent Volume and Persistent Volume Claim.</p>
<pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolume
metadata:
    annotations:
        pv.kubernetes.io/provisioned-by: disk.csi.azure.com
    name: azuredisk-prometheus
spec:
    capacity:
        storage: 32Gi
    accessModes:
        - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    storageClassName: azuredisk-prometheus
    csi:
        driver: disk.csi.azure.com
        readOnly: false
        volumeHandle: /subscriptions/{subscription-id}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{disk-name}
        volumeAttributes:
            cachingMode: ReadOnly
            fsType: ext4
</code></pre>
<p>The important thing to note is how we reference our Azure Disk under <code>volumeHandle</code> by specifying the <code>Subscription Id</code>, the name of the <code>Resource Group</code> and lastly the name of the provisioned disk.</p>
<pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: prometheus
    prometheus: prometheus-kube-prometheus
  name: prometheus-prometheus-kube-prometheus-prometheus-db-prometheus-prometheus-kube-prometheus-prometheus-0
  namespace: k8s-namespace-for-prometheus
spec:
    storageClassName: azuredisk-prometheus
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 32Gi
</code></pre>
<p>Pay attention to the name of this PVC. This is a workaround, outlined in this <a href="https://github.com/helm/charts/issues/9288">GitHub Issue</a>, if you want to attach an existing PVC to Prometheus.</p>
<p>We can now apply these three <code>yaml</code>-files to our AKS Cluster by running<br>
<code>kubectl apply -f &lt;file-name&gt;</code> on each and every file, or by running <code>kubectl apply -f .</code> in the directory where these files are located.</p>
<h3 id="installprometheus">Install Prometheus</h3>
<p>We'll install Prometheus with the package manager Helm. Before we do that, let's configure the <code>values.yml</code> in order to parameterize the Prometheus <code>ConfigMap</code>.</p>
<pre><code class="language-yaml">prometheus:
    enabled: true
    prometheusSpec:
        storageSpec:
            volumeClaimTemplate:
                spec:
                    volumeName: azuredisk-prometheus
                    storageClassName: azuredisk-prometheus
                    accessModes: [&quot;ReadWriteOnce&quot;]
                    resources:
                        requests:
                            storage: 32Gi
</code></pre>
<p>Let's go ahead and install the Helm Chart with our values.yml file:</p>
<pre><code class="language-bash">helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install --namespace k8s-namespace-for-prometheus prometheus prometheus-community/kube-prometheus-stack -f values.yaml
</code></pre>
<p>To verify that it works, you can navigate to the AKS resource, then go to Storage and look for our created <code>Storage Class</code>, <code>Persistent Volumes</code> and <code>Persistent Volume Claims</code>. Verify that the <code>Persistent Volume Claim</code> has the status <code>Bound</code>.</p>
<p>If you are stuck, I highly recommend running <code>kubectl describe</code> on the various resources and <code>kubectl logs</code> on the Prometheus Pod.</p>
<p>If you have any questions, you can reach out to me on <a href="https://x.com/thorstensen">Twitter/X</a>.</p>
<p>Thanks for reading!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Running Seq in AKS with Storage Account Persistence]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h3 id="introduction">Introduction</h3>
<p><a href="https://www.moethorstensen.no/running-seq-in-azure/">Nine months back I wrote a post on how you can run seq in Azure</a>. Back then we used Azure Container Instances as our hosting platform. A few months back we migrated most of our workloads to Azure Kubernetes Services and it didn't make any sense to continue running</p>]]></description><link>https://moethorstensen.no/running-seq-in-azure-kubernetes-services-with/</link><guid isPermaLink="false">6544d01fed97a700014e1b9c</guid><category><![CDATA[AKS]]></category><category><![CDATA[Azure Kubernetes Services]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[seq]]></category><category><![CDATA[storage]]></category><category><![CDATA[storage account]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Fri, 03 Nov 2023 12:47:34 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/11/growtika-9cXMJHaViTM-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h3 id="introduction">Introduction</h3>
<img src="https://moethorstensen.no/content/images/2023/11/growtika-9cXMJHaViTM-unsplash.jpg" alt="Running Seq in AKS with Storage Account Persistence"><p><a href="https://www.moethorstensen.no/running-seq-in-azure/">Nine months back I wrote a post on how you can run seq in Azure</a>. Back then we used Azure Container Instances as our hosting platform. A few months back we migrated most of our workloads to Azure Kubernetes Services and it didn't make any sense to continue running Seq on Container Instances when we had a Kubernetes Cluster. So we figured out that we could easily install Seq as a <a href="https://helm.sh/">Helm Chart</a> and <a href="https://github.com/helm/charts/blob/master/stable/seq/values.yaml">customize the deployment</a> to fit our needs.</p>
<p>We were quickly up and running with an instance of Seq in our cluster, but one issue remained to be solved: How can we store data such as API keys, configuration and the actual log data outside the cluster, so we can easily redeploy the cluster without having to worry about losing configuration and rotating API keys for all of our applications?</p>
<h3 id="configureastorageclasspvcandpv">Configure a Storage Class, PVC and PV.</h3>
<p>In the configuration file for the helm chart, <a href="https://github.com/helm/charts/blob/master/stable/seq/values.yaml">values.yaml</a>, a section named <code>persistence</code> lets us configure persistence (doh) by using something called a <code>PVC</code>, short for <code>Persistent Volume Claim</code>.</p>
<p>A <code>PVC</code> works in conjunction with a <code>PV</code>, short for <code>Persistent Volume</code>. A <code>PVC</code> allows <code>Pods</code> in your cluster to access a <code>PV</code>. A <code>PV</code> is assigned to a <code>Storage Class</code>, which essentially abstracts away various storage solutions such as an <code>Azure Storage Account</code>. The illustration below sums up how everything is connected.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://moethorstensen.no/content/images/2023/11/AKS_PV_PVC-1.png" class="kg-image" alt="Running Seq in AKS with Storage Account Persistence"><figcaption>Overview of Persistent Volume, Persistent Volume Claim and Storage Class</figcaption></figure><!--kg-card-begin: markdown--><p>For this to work you'll need an Azure Storage Account with at least <code>Premium_LRS</code> set as the SKU. If you need zone redundancy, you can opt for <code>Premium_ZRS</code>. The storage account needs to be of kind <code>FileStorage</code> and have a <code>File Share</code> configured. Please see this <a href="https://www.moethorstensen.no/running-seq-in-azure/">post</a> for Bicep templates. Please make sure that the <code>Managed Identity</code> configured as the <code>Kubelet Identity</code> has access to this Storage Account.</p>
<p>Let's start by defining our <code>StorageClass.yaml</code>:</p>
<pre><code class="language-yaml">kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: seq-azurefile
provisioner: file.csi.azure.com
allowVolumeExpansion: true
mountOptions:
 - dir_mode=0777
 - file_mode=0777
parameters:
  skuName: Premium_LRS
  storageAccount: &lt;storage-account-name&gt;
  location: &lt;storage-account-geographic-location&gt;
</code></pre>
<p>Then we'll create our <code>PersistentVolume.yaml</code></p>
<pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: seq-azurefile
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: seq-azurefile
  csi:
    driver: file.csi.azure.com
    readOnly: false
    volumeHandle: replace_with_unique_id_across_cluster
    volumeAttributes:
      resourceGroup: replace_with_storage_account_rg
      shareName: replace_with_file_share_name
      storageAccount: replace_with_storage_account_name
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl
</code></pre>
<p>And finally, let's create our <code>PersistentVolumeClaim.yaml</code>:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seq-azurefile-pvc
  namespace: your-aks-namespace
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: seq-azurefile
  volumeName: seq-azurefile
  resources:
    requests:
      storage: 100Gi
</code></pre>
<p>Apply these changes to the cluster by running <code>kubectl apply -f &lt;file-name&gt;</code> on each file, or <code>kubectl apply -f .</code> to apply all yaml-files in the current folder.</p>
<p>To verify that everything works as expected you can either run a series of kubectl commands, or by navigating to your cluster in the Azure Portal and select the tab 'Storage'. You should see your <code>PVC</code> and <code>PV</code> with the status <code>Bound</code>. In case you prefer to verify this via your terminal you can run the following commands:</p>
<ul>
<li><code>kubectl get pvc --all-namespaces</code></li>
<li><code>kubectl get pv --all-namespaces</code></li>
<li><code>kubectl get sc</code> (Storage Classes are cluster-scoped, so no namespace flag is needed)</li>
</ul>
<h3 id="configurevaluesyamlforseq">Configure values.yaml for Seq</h3>
<p>In Seq's <code>values.yaml</code> we'll need to reference our existing <code>PVC</code>:</p>
<pre><code class="language-yaml">persistence:
  enabled: true
  existingClaim: seq-azurefile-pvc
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
</code></pre>
<p>Finally, we can install Seq as a Helm Chart with the provided configuration:</p>
<p><code>helm upgrade --install -f values.yaml --namespace logs seq datalust/seq</code></p>
<p>Verify that everything works by describing the pod in your cluster and look for events in case something is wrong:</p>
<p><code>kubectl get pods -n logs</code> and look for Seq. Copy the name of the <code>Pod</code> and run <code>kubectl describe pod &lt;pod-name&gt; -n logs</code></p>
<p>Thanks for reading, and if you have any questions, you can find me on X with the handle <a href="https://twitter.com/thorstensen">@thorstensen</a></p>
<!--kg-card-end: markdown--><p></p>]]></content:encoded></item><item><title><![CDATA[Debug Nodes in an AKS Cluster]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>So you have created and configured an AKS cluster and life is good. You think. As time goes by, you keep adding features to your cluster and eventually something doesnt work as expected. Sounds familiar?</p>
<p>In my particular case I was trying to mount an Azure File Share to my</p>]]></description><link>https://moethorstensen.no/debug-nodes-in-an-aks-cluster/</link><guid isPermaLink="false">65428a04ed97a700014e1b3d</guid><category><![CDATA[AKS]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Azure Kubernetes Services]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Wed, 01 Nov 2023 17:47:50 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/11/k8s-debugging-1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2023/11/k8s-debugging-1.png" alt="Debug Nodes in an AKS Cluster"><p>So you have created and configured an AKS cluster and life is good. You think. As time goes by, you keep adding features to your cluster and eventually something doesn't work as expected. Sounds familiar?</p>
<p>In my particular case I was trying to mount an Azure File Share to my AKS cluster with the <code>NFS protocol</code>, but without any success. I was getting a pretty useful error message when running <code>kubectl describe</code> on the problematic pod:</p>
<p><code>Output: mount.nfs: access denied by server while mounting storageaccountname.file.core.windows.net:/storageaccountname/fileshareName</code></p>
<p>Microsoft's troubleshooting guide pointed me in the direction that the node probably was missing <code>nfs-common</code>.</p>
<p>To install packages on the underlying Ubuntu <code>Virtual Machine Scale Set</code>, I needed <code>SSH access</code> to the node. Pretty straightforward, you think? Think again.</p>
<p>This is where this neat little trick comes in handy:</p>
<ol>
<li>List all your nodes by running <code>kubectl get nodes</code></li>
<li>Find the node you want to SSH into</li>
<li>Run <code>kubectl debug node/node-name -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11</code></li>
<li>Enjoy root access.</li>
</ol>
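<p>Once the debug pod is running, the node's filesystem is mounted under <code>/host</code> inside the container. To actually install the missing package, you can chroot into it first; a sketch (package availability depends on the node image):</p>
<pre><code class="language-bash"># switch into the node's root filesystem
chroot /host

# install the missing NFS client utilities
apt-get update &amp;&amp; apt-get install -y nfs-common
</code></pre>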
<p>Happy debugging!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Live in harmony with Kubernetes Secrets using Azure Key Vault]]></title><description><![CDATA[<h2 id="introduction">Introduction</h2><p>If you have worked with Kubernetes you probably know that Secrets are not really secrets. They are just base64 encoded key value pairs stored in <a href="https://etcd.io/">etcd</a>. Anyone with access to the cluster will have access to these secrets by running a simple command</p><!--kg-card-begin: markdown--><pre><code>kubectl get secrets --all-namespaces
kubectl get</code></pre>]]></description><link>https://moethorstensen.no/live-in-harmony-with-kubernetes-secrets-using-azure-key-vault/</link><guid isPermaLink="false">650a954eed97a700014e1919</guid><category><![CDATA[AKS]]></category><category><![CDATA[Azure Kubernetes Services]]></category><category><![CDATA[Azure Key Vault]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Wed, 20 Sep 2023 10:16:03 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/09/jason-pofahl-6AQY7pO1lS0-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://moethorstensen.no/content/images/2023/09/jason-pofahl-6AQY7pO1lS0-unsplash.jpg" alt="Live in harmony with Kubernetes Secrets using Azure Key Vault"><p>If you have worked with Kubernetes you probably know that Secrets are not really secrets. They are just base64 encoded key value pairs stored in <a href="https://etcd.io/">etcd</a>. Anyone with access to the cluster will have access to these secrets by running a simple command</p><!--kg-card-begin: markdown--><pre><code>kubectl get secrets --all-namespaces
kubectl get secret database -n my-namespace --template='{{index .data "password-key"}}' | base64 --decode
</code></pre>
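<p>As a quick illustration of why these values are not secret: base64 is an encoding, not encryption, so anyone can reverse it (<code>s3cr3t</code> is a made-up value):</p>
<pre><code class="language-bash"># encode a value the way Kubernetes stores Secret data
printf '%s' 's3cr3t' | base64
# czNjcjN0

# ...and decode it right back
printf '%s' 'czNjcjN0' | base64 --decode
# s3cr3t
</code></pre>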
<!--kg-card-end: markdown--><p>In order to safely store these secrets you can enable encryption at rest or configure role-based access control (RBAC). This article won't cover these two alternatives, as we'll instead look at a helm chart that will inject secrets stored in Azure Key Vault as environment variables into a pod running <a href="https://grafana.com/">Grafana</a>. That being said, this approach can be used for other scenarios where you'll need secrets from Azure Key Vault.</p><h2 id="prerequisites">Prerequisites </h2><p>To follow along you'll need to have the following up and running:</p><ul><li><a href="https://helm.sh/docs/intro/install/">Helm installed</a></li><li>An AKS Cluster</li><li>An Azure Key Vault instance in the same subscription</li><li>A User Assigned Managed Identity with the <a href="https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli">Key Vault Secrets Officer</a> role on the aforementioned Key Vault. This Managed Identity also needs to be associated with the AKS Cluster. </li></ul><h2 id="configure-kubernetes-cluster">Configure Kubernetes Cluster</h2><p>We are using a Helm Chart named <a href="https://akv2k8s.io/">akv2k8s</a> to get secrets from Azure Key Vault. Once this chart is installed, two pods will run in our cluster fetching and monitoring changes to specific secrets. This chart will only monitor changes in namespaces with the label</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">azure-key-vault-env-injection: enabled
</code></pre>
<!--kg-card-end: markdown--><p></p><p>To label your namespace in an idempotent way, you can run the following command:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">kubectl create namespace grafana --dry-run=client -o json|jq '.metadata += {&quot;labels&quot;:{&quot;azure-key-vault-env-injection&quot;:&quot;enabled&quot;}}' | kubectl apply -f -
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Now we'll need to add a reference to our Key Vault. </p><!--kg-card-begin: markdown--><pre><code class="language-yaml">apiVersion: spv.no/v2beta1
kind: AzureKeyVaultSecret
metadata:
  name: azure-keyvault-secret-inject 
  namespace: grafana
spec:
  vault:
    name: someKeyVaultName # name of key vault
    object:
      name: someSecretName # name of the akv object, i.e. the secret in Key Vault
      type: secret # akv object type
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Apply the changes to our cluster:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">kubectl apply -f &lt;name-of-file-above-yml&gt;
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Now that we have labeled our namespace and created a reference to a secret in our Azure Key Vault, let's install the Helm Charts! </p><h2 id="installing-helm-charts">Installing Helm Charts </h2><p>Helm is a package managing tool for Kubernetes, and packages are called Charts. We'll use two Helm Charts to achieve our goal of running Grafana with secrets injected as environment variables into the pod. </p><ul><li><a href="https://github.com/grafana/helm-charts">Grafana</a> </li><li><a href="https://akv2k8s.io/">akv2k8s</a></li></ul><p>Let's start by adding these two charts:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">helm repo add grafana https://grafana.github.io/helm-charts
helm repo add spv-charts https://charts.spvapi.no
helm repo update
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Let's install these charts:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">helm upgrade --install akv2k8s spv-charts/akv2k8s --namespace akv2k8s
helm upgrade --install --namespace grafana grafana grafana/grafana
</code></pre>
<!--kg-card-end: markdown--><p></p><p>You should now have three pods running. Verify that by running the following commands:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">kubectl get pods -n grafana
kubectl get pods -n akv2k8s
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Should result in:</p><ul><li>grafana-xyz</li><li>akv2k8s-controller-xyz</li><li>akv2k8s-envinjector-xyz</li></ul><h2 id="configuring-grafana">Configuring Grafana</h2><p>Let's take a step back and configure Grafana with the values we configured in <em>AzureKeyVaultSecret. </em></p><p>First we'll need to delete the Grafana Helm Chart.</p><!--kg-card-begin: markdown--><pre><code class="language-bash">helm delete grafana -n grafana
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Now we need to customize Grafana by adding the -f option. This option will allow us to pass in a yml file with custom values. We will be using the <a href="https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml">template</a> found in Grafana's GitHub Repository. </p><p>To achieve reading values from Azure Key Vault, we'll be setting the default administrator password for Grafana by passing it in as an environment variable. Find the section for environment variables in the yml file and add the following:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">env:
  GF_SECURITY_ADMIN_USER: &quot;admin&quot;
  GF_SECURITY_ADMIN_PASSWORD: &quot;azure-keyvault-secret-inject@azurekeyvault&quot;
</code></pre>
<!--kg-card-end: markdown--><p></p><p><em>GF_SECURITY_ADMIN_PASSWORD</em> will now refer to the name we specified for <em>AzureKeyVaultSecret.</em></p><p>This is unfortunately not enough, as Grafana won't set this environment variable to the administrator password. We'll need to run a command when the pod has started to set a new password. </p><p>Under the specified environment variables add the following: </p><!--kg-card-begin: markdown--><pre><code class="language-yaml">command:
  - sh
  - -c
  - grafana-cli admin reset-admin-password ${GF_SECURITY_ADMIN_PASSWORD} &amp;&amp; /run.sh
</code></pre>
<!--kg-card-end: markdown--><p></p><p>This is a workaround, and it seems it won't be fixed in the near future according to this <a href="https://github.com/grafana/grafana/issues/49055">GitHub Issue</a>.</p><p>Depending on your configuration (Ingress, Persistent Volume Claims, etc.), your values.yml file should look something like this:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">env:
  GF_SECURITY_ADMIN_USER: &quot;admin&quot;
  GF_SECURITY_ADMIN_PASSWORD: &quot;azure-keyvault-secret-inject@azurekeyvault&quot;

command:
  - sh
  - -c
  - grafana-cli admin reset-admin-password ${GF_SECURITY_ADMIN_PASSWORD} &amp;&amp; /run.sh
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Let's install Grafana's Helm chart once more with these values:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">helm upgrade --install --namespace grafana grafana grafana/grafana -f values.yml
</code></pre>
<!--kg-card-end: markdown--><p></p><p>You should now see an instance of Grafana running in your cluster. Let's inspect the pod by running:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">kubectl describe pod &lt;name-of-pod&gt; -n grafana
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Scroll all the way up to <em>Environment</em> and here you should see your secret being injected:</p><figure class="kg-card kg-image-card"><img src="https://moethorstensen.no/content/images/2023/09/Screenshot-2023-09-20-at-12.02.28.png" class="kg-image" alt="Live in harmony with Kubernetes Secrets using Azure Key Vault"></figure><p></p><p>You should now be able to access Grafana, either by port forwarding or via the Ingress depending on your configuration, using the password stored in Azure Key Vault.</p><p>If you face any issues, I would recommend looking at the events section when you describe the pod, or alternatively getting the logs by running:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">kubectl logs &lt;pod-name&gt; -n grafana
</code></pre>
<!--kg-card-end: markdown--><p></p><p>Hope you found the post useful, and thank you for reading!</p>]]></content:encoded></item><item><title><![CDATA[Azure AD workload identity with AKS]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Managed Identities is a key concept in Microsoft Azure which makes it easy to manage access to various services in Azure without having to worry about rotating secrets. In this article we'll look at how we can use a specific managed identity connected to pods in Azure Kubernetes Services.</p>
<p>Before</p>]]></description><link>https://moethorstensen.no/azure-ad-workload-identity-with-aks/</link><guid isPermaLink="false">64f72558ed97a700014e17e4</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[AKS]]></category><category><![CDATA[Azure Kubernetes Services]]></category><category><![CDATA[Azure]]></category><category><![CDATA[Workload Identity]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Tue, 05 Sep 2023 13:47:01 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/09/growtika-pr5lUMgocTs-unsplash--1-.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2023/09/growtika-pr5lUMgocTs-unsplash--1-.jpg" alt="Azure AD workload identity with AKS"><p>Managed Identities is a key concept in Microsoft Azure which makes it easy to manage access to various services in Azure without having to worry about rotating secrets. In this article we'll look at how we can use a specific managed identity connected to pods in Azure Kubernetes Services.</p>
<p>Before we start, I'll assume that you have a certain degree of knowledge about Azure and AKS. I will also assume that you have set up a Subscription, a Resource Group, a User Assigned Managed Identity and the AKS cluster we are deploying to. So, let's jump into it!</p>
<p>For many years pod-managed identity has been the way to connect an AKS pod with a Managed Identity. This open source project has been replaced by Azure AD Workload Identity. Unfortunately there aren't many good articles, blog posts or official docs on how to set up and configure this in a good way. Hopefully this one will help you succeed.</p>
<p>First of all, let's create our <code>serviceaccount.yml</code> file:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: managed_identity_client_id
  labels:
    azure.workload.identity/use: &quot;true&quot;
  name: serviceaccount_name
</code></pre>
<p>Replace <code>managed_identity_client_id</code> with the clientId of your Managed Identity. This can be found in the portal, or by running the following CLI command</p>
<pre><code>az identity show -n name-of-managed-identity \
-g name-of-resource-group \
-o tsv --query clientId
</code></pre>
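<p>One way to wire this together in a deployment script is to keep the placeholder in the manifest and substitute it before applying. This is a sketch with a made-up client id; in a real script you would capture the value from the <code>az identity show</code> command above:</p>

```shell
# Hypothetical client id; in a real pipeline you would capture it with:
#   CLIENT_ID="$(az identity show -n <mi-name> -g <rg-name> -o tsv --query clientId)"
CLIENT_ID="11111111-2222-3333-4444-555555555555"

# The serviceaccount.yml from above, still containing the placeholder
cat > serviceaccount.yml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: managed_identity_client_id
  labels:
    azure.workload.identity/use: "true"
  name: serviceaccount_name
EOF

# Substitute the placeholder before applying the manifest with kubectl
sed "s/managed_identity_client_id/$CLIENT_ID/" serviceaccount.yml > serviceaccount.generated.yml
grep client-id serviceaccount.generated.yml
```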
<p>You also need to replace the name with something meaningful. We'll need this name later when we change our deployment.yml and set up federation for the Managed Identity.</p>
<p>Next up you need to change your <code>deployment.yml</code> to include <code>azure.workload.identity/use: &quot;true&quot;</code> under <code>template/metadata/labels</code> and to refer to the <code>serviceaccount_name</code> under <code>template/spec/serviceAccountName</code>. Your deployment.yml should look something like this:</p>
<pre><code class="language-yml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
  labels:
    app: helloworld
spec:
  replicas: 2
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
        azure.workload.identity/use: &quot;true&quot;
    spec:
      serviceAccountName: serviceaccount_name
      containers:
        - name: helloworld
          image: mcr.microsoft.com/dotnet/samples:aspnetapp
          ports:
          - containerPort: 80
</code></pre>
<p>Finally we'll need to run an <code>az</code> command to create federated credentials for the Managed Identity:</p>
<pre><code>issuerUrl=&quot;$(az aks show -g rg-name -n aks-clusterName --query oidcIssuerProfile.issuerUrl -o tsv)&quot;

az identity federated-credential create \
--name serviceaccount_name \
--identity-name mi_name \
--resource-group mi_rg \
--issuer $issuerUrl \
--subject system:serviceaccount:your-aks-namespace:serviceaccount_name
</code></pre>
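<p>Note that the <code>--subject</code> value must match the service account's namespace and name exactly, otherwise the federated credential will never match the token AKS issues. It always has this shape:</p>

```shell
# subject format: system:serviceaccount:<namespace>:<service-account-name>
NAMESPACE="your-aks-namespace"   # placeholder values from the command above
SA_NAME="serviceaccount_name"
SUBJECT="system:serviceaccount:$NAMESPACE:$SA_NAME"
echo "$SUBJECT"
```

If you later rename the service account or move it to another namespace, the federated credential has to be recreated with the new subject.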
<p>Navigating to the overview of your Managed Identity should show you a new element in the menu named 'Federated Credentials'.</p>
<p>You're now all set to apply changes to your AKS Cluster by running <code>kubectl apply</code> on your configuration files.</p>
<p><strong>As a bonus</strong>: please make sure that you have updated to at least version 1.9 of the <code>Azure.Identity</code> package if your application is a .NET app. This version includes <code>WorkloadIdentityCredential</code>, which is used when your application authenticates via <code>DefaultAzureCredential</code>.</p>
<!--kg-card-end: markdown--><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Use semantic versioning for NuGet packages built from Azure DevOps]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>When developing open source software published to nuget feeds, E.G nuget.org, versioning becomes really important. Semantic versioning is a popular versioning scheme that introduces three positive numbers that all together forms the version number. Those number are formatted as <code>MAJOR.MINOR.PATCH</code>.</p>
<ul>
<li><code>MAJOR</code> increases when incompatible changes are</li></ul>]]></description><link>https://moethorstensen.no/use-sematic-versioning-for-nuget-packages-built-from-azure-devops/</link><guid isPermaLink="false">64904371ed97a700014e1741</guid><category><![CDATA[Azure DevOps]]></category><category><![CDATA[SemVer]]></category><category><![CDATA[Semantic Versioning]]></category><category><![CDATA[NuGet]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Mon, 19 Jun 2023 12:25:31 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/06/mika-baumeister-Wpnoqo2plFA-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2023/06/mika-baumeister-Wpnoqo2plFA-unsplash.jpg" alt="Use semantic versioning for NuGet packages built from Azure DevOps"><p>When developing open source software published to nuget feeds, e.g. nuget.org, versioning becomes really important. Semantic versioning is a popular versioning scheme that introduces three positive numbers that together form the version number. Those numbers are formatted as <code>MAJOR.MINOR.PATCH</code>.</p>
<ul>
<li><code>MAJOR</code> increases when incompatible changes are made.</li>
<li><code>MINOR</code> increases when changes are made in a backward compatible manner.</li>
<li><code>PATCH</code> increases when you make backward compatible bug fixes.</li>
</ul>
<p>The first two numbers are always changed manually when creating the package. For the number that represents <code>PATCH</code>, we can do this automatically with a little hidden gem in Azure DevOps.</p>
<p>Given that you are using the NuGetCommand@2 task from the Azure DevOps marketplace, you can select the versioning scheme <code>byEnvVar</code>, which tells the task to read the version number from the environment variable named in the <code>versionEnvVar</code> input.</p>
<pre><code class="language-yml">- task: NuGetCommand@2
  displayName: &quot;Push&quot;
  inputs:
  command: &quot;push&quot;
  feedsToUse: &quot;select&quot; 
  packagesToPush: &quot;nupackages-to-push&quot;
  nuGetFeedType: &quot;internal&quot;
  publishVstsFeed: &quot;myPrivateFeed&quot;
  versioningScheme: &quot;byEnvVar&quot;
  versionEnvVar: PackageVersion
  allowPackageConflicts: false
</code></pre>
<p>To set the variables, head over to the Variables section for your pipeline and declare four variables:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Major</td>
<td>1</td>
</tr>
<tr>
<td>Minor</td>
<td>0</td>
</tr>
<tr>
<td>Patch</td>
<td>$[counter(format('{0}.{1}', variables['Major'], variables['Minor']), 0)]</td>
</tr>
<tr>
<td>PackageVersion</td>
<td>$(Major).$(Minor).$(Patch)</td>
</tr>
</tbody>
</table>
<p>Note that the variable <code>PackageVersion</code> is the one we refer to in our <code>NuGetCommand@2</code> task.</p>
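<p>To make the <code>counter</code> behaviour concrete: it keeps an independent, auto-incrementing sequence per unique prefix string, starting at the seed. Because the prefix here is <code>Major.Minor</code>, bumping either of them starts a fresh Patch sequence. A rough, file-backed local emulation (purely illustrative, not how Azure DevOps stores it):</p>

```shell
# Rough emulation of Azure DevOps' counter(prefix, seed) expression.
# State is kept in one file per prefix, purely for illustration.
counter() {
  prefix=$1; seed=$2; state=".counter_$prefix"
  if [ ! -f "$state" ]; then
    echo "$seed" > "$state"                      # first run with this prefix: start at seed
  else
    echo $(( $(cat "$state") + 1 )) > "$state"   # every later run: increment
  fi
  cat "$state"
}

echo "1.0.$(counter 1.0 0)"   # first pipeline run with Major.Minor = 1.0 -> 1.0.0
echo "1.0.$(counter 1.0 0)"   # next run: Patch increments             -> 1.0.1
echo "1.1.$(counter 1.1 0)"   # bumping Minor starts a fresh sequence  -> 1.1.0
```

Azure DevOps keeps the real counter state server-side, scoped to the pipeline.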
<p>Happy Coding!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Running Seq in Azure]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Seq is one of my favorite logging and monitoring tools out there. Unfortunately Azure does not have built-in support to run Seq as a SaaS offering. But don't worry, this guide will walk you through configuring and deploying Seq as a Docker container running in Azure Container Instances using Bicep.</p>]]></description><link>https://moethorstensen.no/running-seq-in-azure/</link><guid isPermaLink="false">63e6273ded97a700014e1657</guid><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Fri, 10 Feb 2023 12:12:05 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/02/seq-chart-dark1.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2023/02/seq-chart-dark1.png" alt="Running Seq in Azure"><p>Seq is one of my favorite logging and monitoring tools out there. Unfortunately Azure does not have built-in support to run Seq as a SaaS offering. But don't worry, this guide will walk you through configuring and deploying Seq as a Docker container running in Azure Container Instances using Bicep.</p>
<h2 id="definingbicepresources">Defining Bicep resources</h2>
<p>To run Seq in Azure we'll need three resources:</p>
<ul>
<li>Azure Container Instance hosting our docker image</li>
<li>Azure Storage Account storing log data</li>
<li>Azure File Share in the storage account, mounted into the container</li>
</ul>
<p>Before we start defining our resources, let's create a new directory and two bicep files.</p>
<p><code>mkdir infrastructure &amp;&amp; cd infrastructure &amp;&amp; touch seq.bicep &amp;&amp; touch main.bicep</code></p>
<p>For demonstration purposes <code>seq.bicep</code> will contain all our resources:</p>
<pre><code class="language-yml">param location string = resourceGroup().location
var dnsLabelName = 'aci-seq-test'
var fqdn = 'http://${dnsLabelName}.${location}.azurecontainer.io'
var fileShareName = 'shareseqtest'

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' = {
  name: 'saseqtest'
  location: location
  sku: {
    name: 'Premium_LRS'
  }
  kind: 'FileStorage'
}

resource fileShare 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-04-01' = {
  name: '${storageAccount.name}/default/${fileShareName}'
}

resource containerGroup 'Microsoft.ContainerInstance/containerGroups@2021-09-01' = {
  name: 'aci-seq-test'
  location: location
  properties: {
    containers: [
      {
        name: 'aci-seq-test'
        properties: {
          image: 'datalust/seq:latest'
          ports: [
            {
              port: 80
              protocol: 'TCP'
            }
          ]
          resources: {
            requests: {
              cpu: 1
              memoryInGB: 1
            }
          }
          volumeMounts: [
            {
              mountPath: '/data'
              name: 'seq-data'
            }
          ]
          environmentVariables: [
            {
              name: 'ACCEPT_EULA'
              value: 'Y'
            }
            {
              name: 'SEQ_API_CANONICALURI'
              value: fqdn
            }
          ]
        }
      }
    ]
    volumes: [
      {
        name: 'seq-data'
        azureFile: {
          shareName: fileShareName
          storageAccountName: storageAccount.name
          storageAccountKey: storageAccount.listKeys().keys[0].value
        }
      }
    ]
    osType: 'Linux'
    restartPolicy: 'OnFailure'
    ipAddress: {
      type: 'Public'
      dnsNameLabel: dnsLabelName
      ports: [
        {
          port: 80
          protocol: 'TCP'
        }
      ]
    }
  }
}
output fqdn string = containerGroup.properties.ipAddress.fqdn
</code></pre>
<p>Let's define our <code>main.bicep</code> file:</p>
<pre><code class="language-yml">module seq 'seq.bicep' = {
  name: 'seq'
}

output seqFqdn string = seq.outputs.fqdn
</code></pre>
<p>That's everything you'll need to create an instance of Seq running in Azure with a file share mounted to it. Make sure you are logged into your Azure tenant and that the correct subscription is selected. To deploy these resources you can either use the portal or the CLI by running:</p>
<p><code>az deployment group create --resource-group &lt;resource-group-name&gt; --template-file main.bicep</code></p>
<p>The cost of running these resources in Azure adds up to around 50 USD/month. Most of the cost is related to running the Container Instance 24/7. For a more detailed overview of the pricing structure, I would recommend checking out the <a href="https://azure.microsoft.com/en-us/pricing/calculator/">Azure Pricing Calculator</a>.</p>
<p>Make sure that you enable authentication by accessing the Seq dashboard!</p>
<p>Happy logging!</p>
<!--kg-card-end: markdown--><p></p><p></p><p></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Substitute environment variables for a frontend app served by Kestrel]]></title><description><![CDATA[Reading environment variables from your frontend application served by Kestrel]]></description><link>https://moethorstensen.no/substitute-environment-variables-for-react-app-served-by-kestrel/</link><guid isPermaLink="false">63ce830ded97a700014e14e3</guid><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Mon, 23 Jan 2023 14:05:31 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2023/01/markus-spiske-Skf7HxARcoc-unsplash--1-.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2023/01/markus-spiske-Skf7HxARcoc-unsplash--1-.jpg" alt="Substitute environment variables for a frontend app served by Kestrel"><p>When serving a website from a docker container, we usually create a multi staged Dockerfile using <a href="https://www.nginx.com/">nginx</a> as our web server. The <code>Dockerfile</code> probably looks something like this:</p>
<pre><code class="language-yml">... left out for brevity...

FROM nginx:alpine AS deploy
WORKDIR /usr/share/nginx/html
COPY --from=build /app/dist .
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80 443
</code></pre>
<p>If your frontend application ever needs to use environment variables passed into your docker container during runtime, you've probably googled your way to several solutions suggesting that <code>envsubst</code> might help you out.</p>
<pre><code class="language-yml">... left out for brevity...

FROM nginx:alpine AS deploy
WORKDIR /usr/share/nginx/html
COPY --from=build /app/dist .
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80 443

CMD [&quot;/bin/sh&quot;,  &quot;-c&quot;,  &quot;envsubst &lt; /usr/share/nginx/html/env.template.js &gt; /usr/share/nginx/html/env.js &amp;&amp; exec nginx -g 'daemon off;'&quot;]
</code></pre>
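<p>If you haven't used <code>envsubst</code> before, this is roughly what that CMD does at container start. The file name and variable below are made up for the example, and <code>sed</code> stands in for <code>envsubst</code> so the snippet runs without the gettext package:</p>

```shell
# A hypothetical env.template.js, as referenced in the CMD above
cat > env.template.js <<'EOF'
window.env = { apiUrl: "${API_URL}" };
EOF

export API_URL="https://api.example.com"

# In the container, envsubst performs this substitution at startup;
# sed stands in for it here purely for illustration
sed "s|\${API_URL}|$API_URL|" env.template.js > env.js
cat env.js   # -> window.env = { apiUrl: "https://api.example.com" };
```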
<p>But, what if you want to swap out <code>nginx</code> and serve your frontend app using <code>ASP.NET Core</code>'s web server, <code>Kestrel</code>?</p>
<h3 id="executeenvsubstwhenrunningaaspnetcorecontainarizedapplication">Execute envsubst when running a ASP.NET Core containarized application</h3>
<p>Initially our Dockerfile used <code>mcr.microsoft.com/dotnet/aspnet:7.0</code> as our runtime base image.</p>
<pre><code class="language-yml">FROM mcr.microsoft.com/dotnet/sdk:7.0 as build-env
WORKDIR /app

# steps to build the application
# is left out..

FROM mcr.microsoft.com/dotnet/aspnet:7.0 as runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [&quot;dotnet&quot;, &quot;MyExecutingAssembly.dll&quot;]
</code></pre>
<p>What happens if we just replace the <code>ENTRYPOINT</code> with the following?</p>
<pre><code class="language-yml">CMD [&quot;/bin/sh&quot;,  &quot;-c&quot;,  &quot;envsubst &lt; /some-path/env.template.js &gt; /some-other-path/env.js &amp;&amp; exec nginx -g 'daemon off;'&quot;]
</code></pre>
<p>You're obviously able to build the image, but it will fail when you try to run a container from this image:</p>
<blockquote>
<p>/bin/bash: line 1: envsubst: command not found</p>
</blockquote>
<p>Okay, that makes sense given that the image we're using, <code>mcr.microsoft.com/dotnet/aspnet:7.0</code>, doesn't include <code>envsubst</code>.</p>
<p>The solution to this problem is to replace the base image with the alpine version, in our case <a href="https://hub.docker.com/_/microsoft-dotnet-aspnet?tab=description">mcr.microsoft.com/dotnet/aspnet:7.0.2-alpine3.17-amd64</a>.</p>
<p>The entrypoint for the container will now look something like this:</p>
<pre><code class="language-yml">CMD [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;apk add gettext &amp;&amp; envsubst &lt; /some-path/env.template.js &gt; /some-path/env.js &amp;&amp; dotnet MyExecutingAssembly.dll&quot;]
</code></pre>
<p>Using <code>envsubst</code> is a powerful way to read environment variables passed into a docker container and substitute them into a JavaScript file served to the browser. This article showed how we can use this command when serving our frontend as static files from ASP.NET Core's web server, Kestrel.</p>
<!--kg-card-end: markdown--></p><p></p>]]></content:encoded></item><item><title><![CDATA[TWIL: Serializing DateOnly and TimeOnly with System.Text.Json]]></title><description><![CDATA[<p>This week I learned that serializing and deserializing the new structs included with .NET 6, DateOnly and TimeOnly, is not supported by System.Text.Json.JsonSerializer. You'll need to write custom converters and include them in the settings. This is an example implementation of these converters:</p><figure class="kg-card kg-image-card"><img src="https://moethorstensen.no/content/images/2022/03/carbon--49--1.png" class="kg-image" alt></figure><p>You'll then need</p>]]></description><link>https://moethorstensen.no/serializing-timeonly-dateonly-dotnet6/</link><guid isPermaLink="false">623d83774d457e00016dca26</guid><category><![CDATA[.NET 6]]></category><category><![CDATA[Json]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Fri, 25 Mar 2022 09:31:16 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2022/03/ferenc-almasi-HfFoo4d061A-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://moethorstensen.no/content/images/2022/03/ferenc-almasi-HfFoo4d061A-unsplash.jpg" alt="TWIL: Serializing DateOnly and TimeOnly with System.Text.Json"><p>This week I learned that serializing and deserializing the new structs included with .NET 6, DateOnly and TimeOnly, is not supported by System.Text.Json.JsonSerializer. You'll need to write custom converters and include them in the settings.
This is an example implementation of these converters:</p><figure class="kg-card kg-image-card"><img src="https://moethorstensen.no/content/images/2022/03/carbon--49--1.png" class="kg-image" alt="TWIL: Serializing DateOnly and TimeOnly with System.Text.Json"></figure><p>You'll then need to include them in the JsonSerializerOptions:</p><figure class="kg-card kg-image-card"><img src="https://moethorstensen.no/content/images/2022/03/carbon--51-.png" class="kg-image" alt="TWIL: Serializing DateOnly and TimeOnly with System.Text.Json"></figure><p>Or you could mark the properties in your type with the JsonConverterAttribute:</p><figure class="kg-card kg-image-card"><img src="https://moethorstensen.no/content/images/2022/03/carbon--50-.png" class="kg-image" alt="TWIL: Serializing DateOnly and TimeOnly with System.Text.Json"></figure><p>Thanks for reading.</p>]]></content:encoded></item><item><title><![CDATA[A look at Azure Web Pub Sub]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Microsoft recently announced that a new service for publish and subscribe scenarios has been made available in Azure. The new service has been branded Azure Web Pub Sub.
The service is a managed, distributed version of SignalR that uses WebSockets to perform full duplex communication over a single TCP connection.</p>]]></description><link>https://moethorstensen.no/azure-web-pub-sub/</link><guid isPermaLink="false">608c03833cfe460001ec0478</guid><category><![CDATA[Azure Web Pub Sub]]></category><category><![CDATA[Pub Sub]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Fri, 30 Apr 2021 14:47:11 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2021/04/adam-solomon-WHUDOzd5IYU-unsplash--1-.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2021/04/adam-solomon-WHUDOzd5IYU-unsplash--1-.jpg" alt="A look at Azure Web Pub Sub"><p>Microsoft recently announced that a new service for publish and subscribe scenarios has been made available in Azure. The new service has been branded Azure Web Pub Sub. The service is a managed, distributed version of SignalR that uses WebSockets to perform full duplex communication over a single TCP connection.</p>
<p>In terms of pricing, the service comes with two options: a Free tier which, according to Microsoft, should be used for dev/testing purposes, and a Standard tier. The illustration below shows the differences between these two tiers as of 30/4-2021.<br>
<img src="https://moethorstensen.no/content/images/2021/04/Screenshot-2021-04-30-at-15.28.04.png" alt="A look at Azure Web Pub Sub"></p>
<p>To create the service head over to the portal and search for <code>Web PubSub Service</code>. As always you need to select a resource group for the service, a name, a region and pricing tier. As of now the service is only available for deployment to <code>East US</code>, <code>North Europe</code>, <code>Southeast Asia</code>, <code>West Europe</code> and <code>West US 2</code>.<br>
<img src="https://moethorstensen.no/content/images/2021/04/Screenshot-2021-04-30-at-15.35.44.png" alt="A look at Azure Web Pub Sub"></p>
<h2 id="showmethecode">Show me the code!</h2>
<p>Let's create two simple .NET 6 console applications, one server and one client. We will also use two NuGet packages, <a href="https://www.nuget.org/packages/Azure.Messaging.WebPubSub/1.0.0-beta.1">Azure.Messaging.WebPubSub</a> and <a href="https://www.nuget.org/packages/Websocket.Client/">Websocket.Client</a>.</p>
<p>Both applications will connect to a specific <code>Hub</code>, a concept of separating messages, in the Azure Web Pub Sub service. The server will publish messages and the client will listen for messages in this Hub.</p>
<pre><code class="language-csharp">namespace Server
{
    public class Program
    {
        public static async Task Main(string[] args)
        {
            var connectionString = &quot;&lt;connection-string&gt;&quot;;
            var hub = &quot;test_hub&quot;;

            var client = new WebPubSubServiceClient(connectionString, hub);
            await client.SendToAllAsync(&quot;Welcome to Azure Web PubSub!&quot;);
        }
    }
}
</code></pre>
<pre><code class="language-csharp">namespace Client
{
    public class Program
    {
        public static async Task Main(string[] args)
        {
            var connectionString = &quot;&lt;connection-string&gt;&quot;;
            var hub = &quot;test_hub&quot;;
           
            var client = new WebPubSubServiceClient(connectionString, hub);
            var url = client.GetClientAccessUri();
            
            using var websocketClient = new WebsocketClient(url);
            websocketClient.MessageReceived.Subscribe(message =&gt;
            {
                Console.WriteLine($&quot;Received message: {message.Text}&quot;);
            });

            await websocketClient.Start();

            // Keep the process alive so incoming messages can be received
            Console.ReadKey();
        }
    }
}
</code></pre>
<p>The connection string can be found under <code>Keys</code> in the service.</p>
<p><img src="https://moethorstensen.no/content/images/2021/04/image--1-.png" alt="A look at Azure Web Pub Sub"></p>
<p>Start by running the client application: <code>dotnet run Client.csproj</code>, and then the server to publish a single message: <code>dotnet run Server.csproj</code>.<br>
If everything is set up correctly, the Client application will now print out <code>Received message: Welcome to Azure Web PubSub!</code>. We can also take a look in the portal to view traffic passing through the service.</p>
<p><img src="https://moethorstensen.no/content/images/2021/04/Screenshot-2021-04-30-at-16.33.21.png" alt="A look at Azure Web Pub Sub"></p>
<h2 id="wrappingup">Wrapping up</h2>
<p>This was a very brief introduction to the Azure Web Pub Sub service, which Microsoft released yesterday, 29/4-2021. The service is a great fit for implementing chat functionality, notifications, live dashboards and so on in your application.</p>
<p>Thanks for reading!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Azure Front Door: A practical example. Part 2.]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In part two exploring Azure Front door we'll take a look at routing rules and load balancing services in our backend pool. This article is based upon the application and resources we created in <a href="https://moethorstensen.no/azure-front-door-a-practical-example-part-1/">part one</a>.</p>
<p>Lets do a quick recap and list out the resources we created in part</p>]]></description><link>https://moethorstensen.no/azure-front-door-a-practical-example-part-2/</link><guid isPermaLink="false">6066d6943cfe460001ec0145</guid><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Sun, 11 Apr 2021 19:54:13 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2021/04/denny-muller-JyRTi3LoQnc-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2021/04/denny-muller-JyRTi3LoQnc-unsplash.jpg" alt="Azure Front Door: A practical example. Part 2."><p>In part two exploring Azure Front door we'll take a look at routing rules and load balancing services in our backend pool. This article is based upon the application and resources we created in <a href="https://moethorstensen.no/azure-front-door-a-practical-example-part-1/">part one</a>.</p>
<p>Let's do a quick recap and list out the resources we created in part 1. Running <code>az resource list --resource-group frontdoor-example-rg -o table</code> lists all the resources we've created in our resource group:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>ResourceGroup</th>
<th>Location</th>
<th>Type</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>frontdoor-example</td>
<td>frontdoor-example-rg</td>
<td>global</td>
<td>Microsoft.Network/frontdoors</td>
<td></td>
</tr>
<tr>
<td>frontdoor-example-appserviceplan</td>
<td>frontdoor-example-rg</td>
<td>westeurope</td>
<td>Microsoft.Web/serverFarms</td>
<td></td>
</tr>
<tr>
<td>frontdoorwebapp01</td>
<td>frontdoor-example-rg</td>
<td>westeurope</td>
<td>Microsoft.Web/sites</td>
<td></td>
</tr>
<tr>
<td>frontdoorwebapp02</td>
<td>frontdoor-example-rg</td>
<td>westeurope</td>
<td>Microsoft.Web/sites</td>
<td></td>
</tr>
</tbody>
</table>
<p>In the resource group we find four services: Azure Front Door, an App Service plan and two web apps hosting our .NET 5 application. <em>Have you noticed Azure Front Door's location?</em> It's global, and this isn't something we have to specify when we create the resource. This is really the core of Azure Front Door: redirecting your requests to the closest available service in terms of latency.</p>
<p>Azure Front Door has a concept of <code>Backends</code>, <code>Backend Pools</code> and <code>Health Probes</code>. A <em>backend</em> can be an Azure-hosted web service, an on-prem service, or even a non-Azure service such as a web service running on Amazon EC2 that receives similar traffic.</p>
<p>A <em>backend pool</em> is a collection of backends. A backend pool controls how to load balance traffic based on weight, priority and availability. Availability is evaluated by configured <code>Health Probes</code> probing endpoints in each backend.</p>
<p>A <em>health probe</em> determines whether a backend is available or not by periodically sending probe requests over HTTP(S) to a specific endpoint in the backend. An example could be an API hosting a <code>/health</code> endpoint returning <code>HTTP 200</code> if all dependent services are up and running as expected. If other status codes are returned, the service is considered unhealthy and traffic will not be sent to this backend instance.</p>
<p>Let's list all health probes in our Azure Front Door service by running the following command on Azure CLI:</p>
<p><code>az network front-door probe list --front-door-name frontdoor-example --resource-group frontdoor-example-rg</code></p>
<pre><code class="language-json">[
  {
    &quot;enabledState&quot;: &quot;Enabled&quot;,
    &quot;healthProbeMethod&quot;: &quot;Head&quot;,
    &quot;id&quot;: &quot;/subscriptions/&lt;subscription-guid&gt;/resourcegroups/frontdoor-example-rg/providers/Microsoft.Network/Frontdoors/frontdoor-example/HealthProbeSettings/DefaultProbeSettings&quot;,
    &quot;intervalInSeconds&quot;: 30,
    &quot;name&quot;: &quot;DefaultProbeSettings&quot;,
    &quot;path&quot;: &quot;/&quot;,
    &quot;protocol&quot;: &quot;Https&quot;,
    &quot;resourceGroup&quot;: &quot;frontdoor-example-rg&quot;,
    &quot;resourceState&quot;: &quot;Enabled&quot;,
    &quot;type&quot;: &quot;Microsoft.Network/Frontdoors/HealthProbeSettings&quot;
  }
]
</code></pre>
<p>We can see that this health probe targets the root path of our backends and that it is the default probe created along with the Azure Front Door service. The default health probe probes our backends every 30 seconds to determine the health of each backend.</p>
<p>We can also see our load balancer by running the following command in Azure CLI:</p>
<p><code>az network front-door load-balancing list --front-door-name frontdoor-example --resource-group frontdoor-example-rg</code></p>
<pre><code class="language-json">[
  {
    &quot;additionalLatencyMilliseconds&quot;: 0,
    &quot;id&quot;: &quot;/subscriptions/&lt;subscription-id&gt;/resourcegroups/frontdoor-example-rg/providers/Microsoft.Network/Frontdoors/frontdoor-example/LoadBalancingSettings/DefaultLoadBalancingSettings&quot;,
    &quot;name&quot;: &quot;DefaultLoadBalancingSettings&quot;,
    &quot;resourceGroup&quot;: &quot;frontdoor-example-rg&quot;,
    &quot;resourceState&quot;: &quot;Enabled&quot;,
    &quot;sampleSize&quot;: 4,
    &quot;successfulSamplesRequired&quot;: 2,
    &quot;type&quot;: &quot;Microsoft.Network/Frontdoors/LoadBalancingSettings&quot;
  }
]
</code></pre>
<p>There are a couple of things to pay attention to here. <em>AdditionalLatencyMilliseconds</em> defines whether requests will be sent only to the fastest backend in terms of latency or whether Azure Front Door should use <a href="https://www.studytonight.com/operating-system/round-robin-scheduling">Round Robin</a> to schedule traffic between the fastest and the second-fastest backend. In this particular case, with the value set to <code>0</code>, traffic will always be routed to the fastest backend. <em>SampleSize</em> defines how many samples the health probe needs to send before we can evaluate the health of a <code>backend</code> in a <code>backend pool</code>. <em>SuccessfulSamplesRequired</em> defines how many of those samples need to result in an HTTP 200 from the health probe for the backend to be considered healthy. An example:</p>
<p>Our health probe probes our backend's <code>health</code> endpoint every 15 seconds, our <em>SampleSize</em> is set to 4 and <em>SuccessfulSamplesRequired</em> is set to 2. To mark our backend as healthy, <strong>at least</strong> 2 out of the 4 samples taken every 60 seconds (15 x 4) have to return HTTP 200.</p>
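<p>The sampling arithmetic can be written out in a few lines. This is a sketch of the evaluation rule in Python, not Front Door's actual implementation:</p>

```python
def is_healthy(probe_results, sample_size=4, successful_required=2):
    """Evaluate backend health from recent probe outcomes.

    probe_results holds HTTP status codes, newest last. Only the most
    recent `sample_size` samples count, and at least `successful_required`
    of them must be HTTP 200.
    """
    window = probe_results[-sample_size:]
    return sum(1 for status in window if status == 200) >= successful_required

# With a 15-second probe interval the window spans 60 seconds (15 x 4):
print(is_healthy([200, 500, 200, 500]))  # True: 2 of 4 samples succeeded
print(is_healthy([500, 500, 200, 500]))  # False: only 1 of 4 succeeded
```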
<h3 id="routingandloadbalancingrequests">Routing and load balancing requests</h3>
<p>Let's take a look at the different routing and load balancing mechanisms that Azure Front Door has to offer.</p>
<h3 id="weightedandprioritizedrouting">Weighted and prioritized routing</h3>
<p>Within a <code>backend pool</code> we can load balance requests based on priority and/or a specified weight. Priority can be a value between 1 and 5, where a lower value means higher priority. This is also referred to as <code>Active/Passive</code> deployment. In case the backend in the backend pool with the highest priority is marked as unhealthy by the health probe, traffic will be routed to the backend with the second-highest configured priority.</p>
<p>Weighted routing, on the other hand, distributes traffic based on a value between 0 and 1000 using a Round Robin scheduling mechanism. The higher the weight, the more traffic will be routed to the backend.</p>
<p><em>But what happens if multiple backends in a backend pool are configured with the same priority, but different weights?</em></p>
<p>As long as our health probe has marked all backends as healthy and available in terms of meeting the accepted latency, requests will be distributed with round robin based on the weight. To demonstrate this, let's list our default backend pool and the configuration for each backend:</p>
<pre><code class="language-bash">az network front-door backend-pool backend list \
--pool-name DefaultBackendPool \
--front-door-name frontdoor-example \
--resource-group frontdoor-example-rg \
--output table
</code></pre>
<p>Running this command from Azure CLI will list the following:</p>
<table>
<thead>
<tr>
<th>Address</th>
<th>BackendHostHeader</th>
<th>EnabledState</th>
<th>HttpPort</th>
<th>HttpsPort</th>
<th>Priority</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>frontdoorwebapp01.azurewebsites.net</td>
<td>frontdoorwebapp01.azurewebsites.net</td>
<td>Enabled</td>
<td>80</td>
<td>443</td>
<td>1</td>
<td>100</td>
</tr>
<tr>
<td>frontdoorwebapp02.azurewebsites.net</td>
<td>frontdoorwebapp02.azurewebsites.net</td>
<td>Enabled</td>
<td>80</td>
<td>443</td>
<td>1</td>
<td>1000</td>
</tr>
</tbody>
</table>
<p>As you can see, both backends share the same priority but differ in weight. For this particular configuration roughly 1/11th of the traffic (100 of every 1100 requests) will be distributed to <code>frontdoorwebapp01.azurewebsites.net</code> and the rest to <code>frontdoorwebapp02.azurewebsites.net</code>. Running, for instance, <code>cURL</code> 20 times against our Azure Front Door URL will hit <code>frontdoorwebapp01</code> approximately two times.</p>
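<p>The expected split follows directly from the weights. A quick Python sanity check (a sketch of weight-proportional distribution, not Front Door's actual scheduler):</p>

```python
def expected_share(weights, backend):
    """Fraction of traffic a backend receives under weight-proportional routing."""
    return weights[backend] / sum(weights.values())

weights = {
    "frontdoorwebapp01.azurewebsites.net": 100,
    "frontdoorwebapp02.azurewebsites.net": 1000,
}
share = expected_share(weights, "frontdoorwebapp01.azurewebsites.net")
print(round(share * 20, 1))  # ~1.8 of 20 requests land on webapp01
```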
<h3 id="routingbasedoncontentinthehttprequest">Routing based on content in the HTTP request</h3>
<p>Another feature of Azure Front Door is routing traffic to different backend pools based on the content of the HTTP request. Both of our backends are located in the same backend pool, so let's create a new backend pool and move one of the backends into it.</p>
<p><strong>Create a new backend pool</strong></p>
<pre><code class="language-bash">az network front-door backend-pool create \
--address frontdoorwebapp02.azurewebsites.net \
--front-door-name frontdoor-example \
--load-balancing DefaultLoadBalancingSettings \
--name second-backend-pool \
--probe DefaultProbeSettings \
--resource-group frontdoor-example-rg
</code></pre>
<p>In this particular example we are re-using the load balancer and health probe from the default backend pool. You might want to create separate load balancers and probe settings per backend pool. We've also added the FQDN of our web app to this backend pool, so let's remove it from the default backend pool.</p>
<p><em>Note: the parameter <code>--index</code> starts at 1, not 0.</em></p>
<pre><code class="language-bash">az network front-door backend-pool backend remove \
--front-door-name frontdoor-example \
--resource-group frontdoor-example-rg \
--pool-name DefaultBackendPool \
--index 1
</code></pre>
<p>So far we have created a new backend pool and moved <code>frontdoorwebapp02</code> to it. Let's now create a new <code>Rule</code> from the <code>Rules Engine Configuration</code>:<br>
<img src="https://moethorstensen.no/content/images/2021/04/Screenshot-2021-04-09-at-18.27.55.png" alt="Azure Front Door: A practical example. Part 2."></p>
<p>These two rules are pretty straightforward. If the request has an HTTP header with key <code>x-backend-name</code> equal to <code>App01</code>, the request will be forwarded to the backend pool <code>DefaultBackendPool</code>. If the header equals <code>App02</code>, the request will be forwarded to the backend pool named <code>second-backend-pool</code>.</p>
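<p>Conceptually the two rules boil down to a lookup from header value to backend pool. A minimal Python sketch of that dispatch (the header key and pool names follow the example above; the fallback behaviour is an assumption):</p>

```python
ROUTES = {
    "App01": "DefaultBackendPool",
    "App02": "second-backend-pool",
}

def match_pool(headers, default_pool="DefaultBackendPool"):
    """Pick a backend pool based on the x-backend-name request header."""
    return ROUTES.get(headers.get("x-backend-name"), default_pool)

print(match_pool({"x-backend-name": "App02"}))  # second-backend-pool
print(match_pool({}))                           # no header: default pool
```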
<p>Finally, we need to associate this rule with a routing rule and then assign the routing rule to a backend pool. Let's do that now:</p>
<pre><code class="language-bash">az network front-door routing-rule create \
--front-door-name frontdoor-example \
--name forwardRequestRoutingRule \
--resource-group frontdoor-example-rg \
--route-type Forward \
--backend-pool second-backend-pool \
--rules-engine HttpHeaderForwardRule \
--frontend-endpoints DefaultFrontendEndpoint
</code></pre>
<p>Let's try it out!</p>
<pre><code class="language-bash">curl -s -H &quot;x-backend-name: App02&quot; https://frontdoor-example.azurefd.net
&gt; Greetings from 'Web App 02'
</code></pre>
<pre><code class="language-bash">curl -s -H &quot;x-backend-name: App01&quot; https://frontdoor-example.azurefd.net
&gt; Greetings from 'Web App 01'
</code></pre>
<h3 id="wrappingup">Wrapping up</h3>
<p>In this article series we've discovered how to load balance and create routing rules with Azure Front Door. We've looked at how to override routing to different backend pools by creating a custom rule that inspects an HTTP header and forwards the request accordingly.</p>
<p>Thanks for reading!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Azure Front Door: A practical example. Part 1.]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Azure Front Door is a global and scalable entry point that sits between your application and the internet. The service uses edge networks to route traffic to your fastest and most available backend service. In this article series, in two parts, we'll discover the capabilities and possibilities by using</p>]]></description><link>https://moethorstensen.no/azure-front-door-a-practical-example-part-1/</link><guid isPermaLink="false">605f12de3cfe460001ebfe72</guid><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Sun, 28 Mar 2021 08:58:26 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2021/03/distributed.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2021/03/distributed.png" alt="Azure Front Door: A practical example. Part 1."><p>Azure Front Door is a global and scalable entry point that sits between your application and the internet. The service uses edge networks to route traffic to your fastest and most available backend service. In this article series, in two parts, we'll discover the capabilities and possibilities of using Azure Front Door as a load balancer and routing service in front of two Azure Web Apps hosting a simple application running .NET 5.</p>
<p><img src="https://moethorstensen.no/content/images/2021/03/front-door-visual-diagram.png" alt="Azure Front Door: A practical example. Part 1."></p>
<p>Azure Front Door is one of four load balancing solutions available in Azure. However, only two of them are meant to load balance HTTP requests: Azure Front Door and Application Gateway. One of the most important differences between the two is that Azure Front Door is globally scaled by default and can very quickly fail over to a secondary resource in case of a regional outage. Application Gateway, on the other hand, needs to be deployed in different regions within your subscription, meaning that it operates regionally. Another significant difference is that network traffic passes through Azure Front Door, so you can capture and create routing rules based on properties of the HTTP request such as the HTTP headers, request protocol, request body etc.</p>
<p>If you are on the verge of choosing one of the four load balancing solutions Microsoft Azure has to offer, but can't really decide which suits your use case best, I recommend this flow chart from Microsoft:</p>
<p><img src="https://moethorstensen.no/content/images/2021/03/load-balancing-decision-tree.png" alt="Azure Front Door: A practical example. Part 1."></p>
<h3 id="showmethecode">Show me the code!</h3>
<p>To demonstrate Azure Front Door, we will create a simple .NET 5 application, deploy two web app instances running the app, then create and configure our Azure Front Door service to point to the web apps. Our application reads an environment variable and returns it in response to an HTTP GET request:</p>
<pre><code class="language-csharp">public class Startup
{
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.Use(async (ctx, next) =&gt;
        {
            if(ctx.Request.Method.ToLower() == &quot;get&quot;)
            {
                var appName = Environment.GetEnvironmentVariable(&quot;AppName&quot;);
                await ctx.Response.WriteAsync($&quot;Greetings from '{appName}'&quot;);
            }
        });
    }
}
</code></pre>
<p>The application gets deployed to Azure whenever a push is done to main. The workflow file is based on my <a href="https://moethorstensen.no/deploy-a-net-5-web-application-to-azure-app-service-using-github-actions">article about GitHub Actions</a> and looks like this:</p>
<pre><code class="language-yml">
name: &quot;Build app and deploy to Azure&quot;
on: [push]

jobs:
  build:
    name: 'Build and Deploy a ASP.NET 5 App to Azure App Service'
    runs-on: 'ubuntu-latest'
    steps:
      - name: 'Checkout Code from Repository' 
        uses: actions/checkout@v2
      - name: 'Set up .NET 5.0.x'
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '5.0.x'
      - name: 'Publish a Release of the App'
        run: |
          cd src/
          dotnet publish -c Release --output '../published-app'
      - name: 'Deploy to Azure App Service: Frontdoor-web-app'
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'frontdoorwebapp01'
          publish-profile: ${{ secrets.PUBLISH_PROFILE_APP1 }}
          package: ./published-app
      - name: 'Deploy to Azure App Service: Frontdoor-secondary-web-app'
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'frontdoorwebapp02'
          publish-profile: ${{ secrets.PUBLISH_PROFILE_APP2 }}
          package: ./published-app
</code></pre>
<p>Before we can run the workflow, we'll need to create the resources in Azure. Let's do that now.</p>
<h3 id="createresourcesinazure">Create resources in Azure</h3>
<p>We'll first sign in to Azure by running <code>az login</code> in Azure CLI. Resources in Azure live in resource groups. Let's create one:</p>
<pre><code class="language-bash">az group create --name 'frontdoor-example-rg' --location 'westeurope'
</code></pre>
<p>In our resource group we'll create four resources: an App Service Plan, two Azure Web Apps and Azure Front Door. Let's start by creating the <em>App Service Plan</em>:</p>
<pre><code class="language-bash">az appservice plan create \
--name 'frontdoor-example-appserviceplan' \
--resource-group 'frontdoor-example-rg' \
--sku FREE
</code></pre>
<p>Then we'll create two web apps in our App Service Plan:</p>
<pre><code class="language-bash">az webapp create \
--name 'frontdoorwebapp01' \
--resource-group 'frontdoor-example-rg' \
--plan 'frontdoor-example-appserviceplan'
</code></pre>
<pre><code class="language-bash">az webapp create \
--name 'frontdoorwebapp02' \
--resource-group 'frontdoor-example-rg' \
--plan 'frontdoor-example-appserviceplan'
</code></pre>
<p>Let's wait until the deployment has completed and then update the environment variables for both apps:</p>
<pre><code class="language-bash">az webapp config appsettings set \
-g Frontdoor-example-rg \
-n frontdoorwebapp01 \
--settings AppName=&quot;Web App 01&quot;

az webapp config appsettings set \
-g Frontdoor-example-rg \
-n frontdoorwebapp02 \
--settings AppName=&quot;Web App 02&quot;
</code></pre>
<p>Let's test our apps by running cURL against both endpoints:</p>
<pre><code class="language-bash">curl -s https://frontdoorwebapp01.azurewebsites.net
&gt; Greetings from 'Web App 01'

curl -s https://frontdoorwebapp02.azurewebsites.net
&gt; Greetings from 'Web App 02'
</code></pre>
<p>Perfect! Let's create and configure Azure Front Door!</p>
<h3 id="createandconfigureazurefrontdoor">Create and configure Azure Front Door</h3>
<p>Again we are working from our terminal and Azure CLI. So far we have created a resource group with an App Service Plan and two web apps. Let's create and configure Azure Front Door, which will act as a load balancer between the internet and our web apps. Before we start, you probably need to add the front-door extension to your Azure CLI: <code>az extension add --name front-door</code>.</p>
<p>We're ready to create an Azure Front Door resource:</p>
<pre><code class="language-bash">az network front-door create \
--resource-group frontdoor-example-rg \
--name frontdoor-example \
--accepted-protocols &quot;Http&quot; &quot;Https&quot; \
--backend-address frontdoorwebapp01.azurewebsites.net
</code></pre>
<p>Azure Front Door uses the term <code>backend pool</code> for a collection of <code>backend addresses</code> that point to other services, in our case two Azure Web Apps. Deployment of Azure Front Door can take several minutes to complete. Once the service has been created we can browse it at <code>http(s)://&lt;front-door-name&gt;.azurefd.net</code>.</p>
<p>Let's try to perform a cURL operation against the Front Door service:</p>
<pre><code class="language-bash">curl -s https://frontdoor-example.azurefd.net/
&gt; Greetings from 'Web App 01'
</code></pre>
<p>As we have only added <code>WebApp01</code> to the backend pool, we'll never get routed to <code>WebApp02</code>. Let's do something about it!</p>
<pre><code class="language-bash">az network front-door backend-pool backend add \
--address frontdoorwebapp02.azurewebsites.net \
--front-door-name &quot;frontdoor-example&quot; \
--pool-name &quot;DefaultBackendPool&quot; \
--resource-group &quot;frontdoor-example-rg&quot;
</code></pre>
<p>Now we've added both of our apps to the backend pool. Let's try to stop <code>WebApp01</code> and see what happens:</p>
<pre><code class="language-bash">az webapp stop --name frontdoorwebapp01 \
--resource-group Frontdoor-example-rg
</code></pre>
<p>Perform another cURL operation against our Front Door:</p>
<pre><code class="language-bash">curl -s https://frontdoor-example.azurefd.net/
&gt; Greetings from 'Web App 02'
</code></pre>
<p>As we can see, Front Door routed us to our secondary app after we stopped <code>WebApp01</code>. This is also visible from the portal:</p>
<p><img src="https://moethorstensen.no/content/images/2021/03/Azure-Frontdoor-availability.png" alt="Azure Front Door: A practical example. Part 1."></p>
<p>That's it. In this article we've demonstrated a really simple way to load balance and automatically fail over to a secondary application. In the next article we'll take a look at how to configure routing rules and create custom routing rules in Azure Front Door.</p>
<p>Thanks for reading!</p>
<!--kg-card-end: markdown--><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Deploy a .NET 5 web application to Azure App Service using GitHub Actions]]></title><description><![CDATA[Deploy your .NET 5 app within minutes to Azure App Service for free.]]></description><link>https://moethorstensen.no/deploy-a-net-5-web-application-to-azure-app-service-using-github-actions/</link><guid isPermaLink="false">604ce95a3cfe460001ebfac6</guid><category><![CDATA[GitHub Actions]]></category><category><![CDATA[.NET 5]]></category><category><![CDATA[Azure App Service]]></category><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Sat, 13 Mar 2021 18:30:56 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2021/03/DL-V2-LinkedIn_FB.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="introduction">Introduction</h2>
<img src="https://moethorstensen.no/content/images/2021/03/DL-V2-LinkedIn_FB.png" alt="Deploy a .NET 5 web application to Azure App Service using GitHub Actions"><p>GitHub Actions is probably my favourite DevOps tool out there due to its extensibility and tight integration with the platform itself. In this article we will take a look at how to deploy a simple .NET 5 web application to an App Service in Azure. This article assumes that you have both the dotnet and Azure CLIs installed on your system. If that's not the case, stop here and install them.</p>
<h2 id="thenet5application">The .NET 5 application</h2>
<p>Our application is probably the simplest .NET web application you have ever created. Let's open our terminal and create the following folder structure for our project:</p>
<pre><code>project-name
└───src
└───.github
│   └───workflows
│       │   deployment.yml
</code></pre>
<p>Create a new <code>.NET 5</code> project inside the <code>src</code> folder by running the following command: <code>dotnet new web -n 'my-project-name' -f net5.0</code></p>
<p>Open the project in your favorite code editor, find <code>Startup.cs</code> and change the content to:</p>
<pre><code class="language-csharp">public class Startup
{
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.Use(async (ctx, next) =&gt;
        {
            if(ctx.Request.Method.ToLower() == &quot;get&quot;)
            {
                await ctx.Response.WriteAsync(&quot;Hello, world!&quot;);
            }
        });
    }
}
</code></pre>
<p>Like I said: the simplest app you've ever created. It's time to push everything to your GitHub repository. The next step is to create our App Service in Azure.</p>
<h2 id="createtheazureappservice">Create the Azure App Service</h2>
<ol>
<li>
<p>Log into your Azure account by typing <code>az login</code>, which brings up a sign-in window for the <a href="http://portal.azure.com">Azure Portal</a>. Select your account and log in.</p>
</li>
<li>
<p>Create a new Resource Group <code>az group create -n 'your-name-rg' -l 'westeurope'</code></p>
</li>
<li>
<p>Create an App Service Plan for your web app <code>az appservice plan create --name 'my-appservice-plan-name' --resource-group 'your-name-rg' --sku FREE</code></p>
</li>
<li>
<p>Create an Web App <code>az webapp create --name 'web-app-name' --resource-group 'your-name-rg' --plan 'my-appservice-plan-name'</code></p>
</li>
<li>
<p>We need to get the publish profile for our web app. Open your browser, navigate to the <a href="http://portal.azure.com">Azure Portal</a> and find your WebApp. Press the 'Get Publish Profile' button to download it.<br>
<img src="https://moethorstensen.no/content/images/2021/03/Screenshot-2021-03-13-at-17.56.52.png" alt="Deploy a .NET 5 web application to Azure App Service using GitHub Actions"></p>
</li>
</ol>
<h2 id="createsecretsforgithubactions">Create secrets for GitHub Actions</h2>
<p>Before we start editing our <code>.github/workflows/deployment.yml</code> file, let's upload the publish profile to GitHub secrets so that we can access it from our workflow.</p>
<ol>
<li>Find your GitHub Repository Settings tab</li>
<li>Select 'Secrets' from the menu.</li>
<li>Press 'New Repository Secret'</li>
<li>Under 'Name' enter <code>PUBLISH_PROFILE</code> and paste the content of the downloaded publish profile file into 'Value' section.</li>
</ol>
<h2 id="createtheworkflowfile">Create the workflow file</h2>
<p>Open the file we created under <code>.github/workflows</code> in your preferred code editor or from GitHub and paste in the following:</p>
<pre><code class="language-yml">name: &quot;Build app and deploy to Azure&quot;
on: [push]

jobs:
  build:
    name: 'Build and Deploy a ASP.NET 5 App to Azure App Service'
    runs-on: 'ubuntu-latest'
    steps:
      - name: 'Checkout Code from Repository' 
        uses: actions/checkout@v2
      - name: 'Set up .NET 5.0.x'
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '5.0.x'
      - name: 'Publish a Release of the App'
        run: |
          cd src/
          dotnet publish -c Release --output '../published-app'
      - name: 'Deploy to Azure App Service'
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'web-app-name'
          publish-profile: ${{ secrets.PUBLISH_PROFILE }}
          package: ./published-app
</code></pre>
<h4 id="letsgothroughthefilestepbystep">Lets go through the file step by step.</h4>
<ol>
<li>First we define a name for this workflow. Secondly we define when the workflow should run; in this particular case, every time a push is done against <em>any</em> branch within the repository. You can read more about the <code>on:</code> keyword in the <a href="https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#on">GitHub docs</a>.</li>
<li>The next step is to define jobs. A GitHub Actions workflow can consist of many jobs, where one job can have multiple steps and each step can run either a shell command or an <em>action</em>. You can either create your own actions or use actions created by others. All of the available actions can be found <a href="https://github.com/marketplace?type=actions">on GitHub Marketplace</a>. In this particular example we will use a combination of shell commands and actions to deploy our app to Azure App Service.</li>
<li>A job has a name, a machine to run on, either self-hosted or GitHub-hosted, and steps. We choose to run this workflow on a GitHub-hosted machine running the latest supported version of <code>ubuntu</code>.</li>
<li>Our first step is to check out the code from the repository.</li>
<li>Next we install <code>dotnet 5.0.x</code> onto the machine running our build by utilizing <em>actions</em> from GitHub Marketplace.</li>
<li>Then we run a shell command to navigate to the <code>src</code> folder and run <code>dotnet publish</code>, which restores NuGet packages and creates a deployable artifact in the specified directory.</li>
<li>It's now time to publish our app to Azure. Again, we're using an <em>action</em> from the GitHub Marketplace called <a href="https://github.com/marketplace/actions/azure-webapp">Azure WebApp</a>. This action has three parameters specified by the <code>with:</code> keyword: the name of our App Service in Azure, the publish profile we added to GitHub secrets and the package artifact that resulted from <code>dotnet publish</code>.</li>
</ol>
<p>That's it. Remember that each time you push to any branch in the repository, this workflow will run. You probably want to change this at a later stage to target a particular branch. Another thing to consider is running your tests as a step before deploying the app to Azure, which can be achieved by running <code>dotnet test</code> as a shell command.</p>
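<p>Both adjustments can be sketched in the workflow file itself. A hedged example (the branch name <code>main</code> and the <code>src</code> folder layout follow this article; adapt them to your repository):</p>

```yml
name: "Build app and deploy to Azure"
on:
  push:
    branches: [main]   # run only on pushes to main instead of every branch

jobs:
  build:
    runs-on: 'ubuntu-latest'
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '5.0.x'
      - name: 'Run tests before deploying'
        run: dotnet test src/   # a failing test stops the workflow here
```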
<h2 id="theresult">The result</h2>
<p>Go back to your App Service in Azure, select the 'Overview' tab and find the URL for your app. Browse to this URL and you will see the text 'Hello, world!' in your browser.</p>
<p><img src="https://moethorstensen.no/content/images/2021/03/Screenshot-2021-03-13-at-19.15.08-1.png" alt="Deploy a .NET 5 web application to Azure App Service using GitHub Actions"></p>
<p>The source code can be found <a href="https://github.com/Thorstensen/azure-app-service-github-actions">here</a></p>
<p>Thanks for reading!</p>
<!--kg-card-end: markdown--><p></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Error handling with ASP.NET Core]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Lets face it: Exceptions happens, it's about how we deal with them. There are obviously several ways to propagate the error back to consumers of our api. This article demonstrates my preferred way.<br>
I've seen a lot of code similar to this:</p>
<pre><code class="language-csharp">public async Task&lt;SomeClass&gt; GetByIdAsync(Guid</code></pre>]]></description><link>https://moethorstensen.no/aspnet-core-exceptionfilter/</link><guid isPermaLink="false">604a40153cfe460001ebfa49</guid><dc:creator><![CDATA[Tobias Moe Thorstensen]]></dc:creator><pubDate>Thu, 11 Mar 2021 17:04:34 GMT</pubDate><media:content url="https://moethorstensen.no/content/images/2021/03/69-http-errors-guide-400-hero-1.jpg" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://moethorstensen.no/content/images/2021/03/69-http-errors-guide-400-hero-1.jpg" alt="Error handling with ASP.NET Core"><p>Let's face it: exceptions happen; it's about how we deal with them. There are obviously several ways to propagate the error back to consumers of our API. This article demonstrates my preferred way.<br>
I've seen a lot of code similar to this:</p>
<pre><code class="language-csharp">public async Task&lt;SomeClass&gt; GetByIdAsync(Guid id) 
{
    SomeClass item = null;
    try 
    {
        item = await _somePersistence.GetAsync(id);
    }
    catch(Exception e)
    {
        Logger.LogException(e);
    }

    return item;
}

</code></pre>
<p>There are several things that worry me about such code. <code>null</code> can mean different things in the context of this method: you cannot tell whether an exception occurred or whether <code>_somePersistence</code> returned null. Let's be more explicit in the way we communicate our intentions.</p>
<pre><code class="language-csharp">public async Task&lt;SomeClass&gt; GetByIdAsync(Guid id) 
{
    SomeClass item = null;
    try 
    {
        item = await _somePersistence.GetAsync(id);
        if(item == null)
            throw new NotFoundException($&quot;Could not find item {id}&quot;);
    }
    catch(Exception e)
    {
        Logger.LogException(e);
    }

    return item;
}

</code></pre>
<p>The intention is now expressed more explicitly by throwing an exception. The only problem is that the exception doesn't reach the caller of this method, as our catch statement swallows every exception. Again, be more specific. Do you know how to handle every exception that can occur from the <code>_somePersistence.GetAsync(id)</code> method? No? Then why bother catching it? If the answer is &quot;I don't want my application to crash&quot;, use a global exception handler. For ASP.NET Core Web API we have a filter that can help us with that.</p>
<h2 id="iexceptionfilterwhyandwheretouseit">IExceptionFilter - Why and where to use it</h2>
<p>Instead of dealing with HTTP Status Codes in your controller methods we can take advantage of the extensibility offered by the framework.<br>
Let's assume that your API uses the GetByIdAsync method above, rewritten to be expressive:</p>
<pre><code class="language-csharp">public async Task&lt;SomeClass&gt; GetByIdAsync(Guid id) 
{
    var item = await _somePersistence.GetAsync(id);
    if(item == null)
        throw new NotFoundException($&quot;Could not find item {id}&quot;);
    
    return item;
}
</code></pre>
<p>Your controller method uses this method the following way:</p>
<pre><code class="language-csharp">[HttpGet]
[Route(&quot;{id}&quot;)]
public async Task&lt;IActionResult&gt; Get([FromRoute]Guid id)
{
   try 
   {
       return Ok(await _repository.GetByIdAsync(id));
   }
   catch(NotFoundException e)
   {
       return NotFound();
   }
}
</code></pre>
<p>Handling errors this way is not very flexible, nor is it convenient. Remove all such exception-handling and introduce an implementation of <code>IExceptionFilter</code>.</p>
<h2 id="anexampleimplementation">An example implementation</h2>
<p>Create a new class implementing <code>IExceptionFilter</code>:</p>
<pre><code class="language-csharp">public class ExceptionFilter : IExceptionFilter
{
    public void OnException(ExceptionContext context)
    {
        var exception = context.Exception;
        if(exception is NotFoundException e)
        {
            context.Result = new JsonResult(e.Message)
            {
                StatusCode = (int)HttpStatusCode.NotFound
            };
        }
    }
}
</code></pre>
<p>Register the filter in your <code>Startup.cs</code> class</p>
<pre><code class="language-csharp">services.AddMvc(options =&gt;
{
    options.Filters.Add&lt;ExceptionFilter&gt;();
});
</code></pre>
<p>Now you can remove the try-catches from your API controllers and leave the exception logic to this filter. Happy days!</p>
<p>PS: I would also recommend logging exceptions in this filter. You can resolve implementations registered in your dependency injection container of choice by writing <code>context.HttpContext.RequestServices.GetService(typeof(...));</code></p>
<p>Thanks for reading!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>