Persist data to an Azure Disk from a Prometheus instance running in Azure Kubernetes Service
My most recent posts have been in the landscape of "How to monitor applications running in Kubernetes". This time is no different. Today we'll look at how we can persist Prometheus data to an Azure Disk using a custom-built Storage Class.
In Kubernetes, a Storage Class defines the "classes" of storage available in a cluster. Administrators use Storage Classes to describe the different types of storage offered, such as HDDs or network-attached storage (NAS), or, in our case, an Azure Disk backed by an SSD.
When applications request persistent storage, they can specify a Storage Class in their Persistent Volume Claim (PVC). The Kubernetes control plane then uses the specified Storage Class to dynamically provision a Persistent Volume (PV) that meets the requirements. Storage Classes allow for flexible and efficient management of storage resources, ensuring that the right type of storage is allocated based on application needs.
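Before creating a custom class, it can be handy to see what a cluster already offers. A quick sketch with kubectl (the managed-csi class is a built-in on recent AKS clusters; the exact set depends on your cluster):

kubectl get storageclass
kubectl describe storageclass managed-csi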
Before we dive into the world of YAML files, we need to provision our Azure Disk. It doesn't really matter how this resource is provisioned; I prefer using Bicep to deploy my Azure resources. The following template creates a locally redundant SSD disk with 32 GiB of storage.
param location string
param name string
param tags object

resource diskprometheus 'Microsoft.Compute/disks@2023-10-02' = {
  name: name
  location: location
  tags: tags
  sku: {
    name: 'Premium_LRS'
  }
  properties: {
    creationData: {
      createOption: 'Empty'
    }
    diskSizeGB: 32
    diskIOPSReadWrite: 120
    diskMBpsReadWrite: 25
    encryption: {
      type: 'EncryptionAtRestWithPlatformKey'
    }
    tier: 'P4'
  }
}
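If you want to deploy the template straight from the CLI, a minimal sketch could look like this (the resource group, disk name, location and tags below are placeholder values of my own, not from the original setup):

az deployment group create \
  --resource-group rg-monitoring \
  --template-file disk.bicep \
  --parameters name=disk-prometheus location=westeurope tags='{"service":"prometheus"}'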
Once you have provisioned the resource, we can go ahead and craft our beloved YAML files.
Crafting our YAML files
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azuredisk-prometheus
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
A few things to note down:
- We are using the Azure Disk Container Storage Interface (CSI) driver. CSI drivers in Kubernetes enable the integration of storage systems with Kubernetes clusters. The CSI standardizes how storage vendors expose their storage systems to containerized workloads.
- We specify Retain as our reclaimPolicy. This policy dictates the lifetime of the Persistent Volume when the associated Persistent Volume Claims are deleted. In our case we'll keep the data when the associated PVCs are deleted.
- Our volumeBindingMode is set to Immediate. In the Immediate binding mode, the Persistent Volume is bound to the Persistent Volume Claim as soon as the latter is created. The binding and provisioning occur immediately, without waiting for a pod to be scheduled as they would in the WaitForFirstConsumer mode.
- Setting allowVolumeExpansion to true enables the Persistent Volume to be dynamically resized without disrupting the running workloads. (We'll verify these settings with a quick kubectl check right after this list.)
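After applying the manifest, kubectl shows the reclaim policy, binding mode and expansion setting at a glance (storage-class.yaml is just my assumed file name):

kubectl apply -f storage-class.yaml
kubectl get storageclass azuredisk-prometheus
# The output columns include RECLAIMPOLICY (Retain), VOLUMEBINDINGMODE (Immediate) and ALLOWVOLUMEEXPANSION (true).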
Still with me? Let's go ahead and create the specifications for our Persistent Volume and Persistent Volume Claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: disk.csi.azure.com
  name: azuredisk-prometheus
spec:
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azuredisk-prometheus
  csi:
    driver: disk.csi.azure.com
    readOnly: false
    volumeHandle: /subscriptions/{subscription-id}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/disks/{disk-name}
    volumeAttributes:
      cachingMode: ReadOnly
      fsType: ext4
The important thing to note is how we reference our Azure Disk under volumeHandle by specifying the Subscription Id, the name of the Resource Group and lastly the name of the provisioned disk.
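Instead of assembling the resource ID by hand, you can read it straight from the disk with the Azure CLI (the resource group and disk name here are placeholders):

az disk show \
  --resource-group rg-monitoring \
  --name disk-prometheus \
  --query id \
  --output tsv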
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: prometheus
    prometheus: prometheus-kube-prometheus
  name: prometheus-prometheus-kube-prometheus-prometheus-db-prometheus-prometheus-kube-prometheus-prometheus-0
  namespace: k8s-namespace-for-prometheus
spec:
  storageClassName: azuredisk-prometheus
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
Pay attention to the name of this PVC. This is a workaround, outlined in this GitHub Issue, if you want to attach an existing PVC to Prometheus.
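My reading of why the name looks like this: the Prometheus Operator runs Prometheus as a StatefulSet, and Kubernetes names a StatefulSet's claims <volumeClaimTemplate-name>-<statefulset-name>-<ordinal>. With the kube-prometheus-stack defaults that expands to the mouthful above, so pre-creating a PVC with exactly that name makes the StatefulSet adopt it instead of provisioning a fresh volume.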
We can now apply these three YAML files to our AKS cluster by running kubectl apply on each and every file, or by running kubectl apply -f on the directory where these files are located.
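Assuming file names like storage-class.yaml, persistent-volume.yaml and persistent-volume-claim.yaml in a manifests/ directory (the names are my own), that boils down to:

kubectl apply -f manifests/storage-class.yaml
kubectl apply -f manifests/persistent-volume.yaml
kubectl apply -f manifests/persistent-volume-claim.yaml
# or, in one go:
kubectl apply -f manifests/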
Install Prometheus
We'll install Prometheus from the package manager Helm. Before we do that, let's configure the values.yaml in order to parameterize the Prometheus storage specification.
prometheus:
  enabled: true
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          volumeName: azuredisk-prometheus
          storageClassName: azuredisk-prometheus
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 32Gi
Let's go ahead and install the Helm chart with our values.yaml file:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install --namespace k8s-namespace-for-prometheus prometheus prometheus-community/kube-prometheus-stack -f values.yaml
To verify that it works, you can navigate to the AKS resource in the Azure Portal, go to Storage, and look for our created Storage Class, Persistent Volumes and Persistent Volume Claims. Verify that the Persistent Volume Claim has the status Bound.
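The same check from the command line looks roughly like this:

kubectl get pv azuredisk-prometheus
kubectl get pvc -n k8s-namespace-for-prometheus
# The PVC's STATUS column should read Bound.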
If you are stuck, I highly recommend running kubectl describe on the various resources and kubectl logs on the Prometheus pod.
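For example (the pod name follows the operator's naming convention and may differ in your cluster):

kubectl describe pvc -n k8s-namespace-for-prometheus prometheus-prometheus-kube-prometheus-prometheus-db-prometheus-prometheus-kube-prometheus-prometheus-0
kubectl logs -n k8s-namespace-for-prometheus prometheus-prometheus-kube-prometheus-prometheus-0 -c prometheus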
If you have any questions, you can reach out to me on Twitter/X.
Thanks for reading!