June 12, 2025
This post shows how to export the Prometheus/Grafana stack that MicroK8s deploys, then restructure it into a Helm chart you can version, reuse, and customize. We’ll dump the manifests with
kubectl, split them by resource kind, add Helm templating, and ship the chart to an OCI registry.
Prerequisites
- MicroK8s installed and working (microk8s status shows running).
- You previously enabled the Observability addon (Prometheus/Grafana/etc.).
- Helm 3+ installed.
- A shell with kubectl (use microk8s kubectl if you don't have a separate kubeconfig).
- Optional: yq (v4+) for YAML processing (makes splitting easier).
Confirm cluster access and the label we’ll filter on:
microk8s status --wait-ready
# See the resources and confirm the instance label.
microk8s kubectl get all -A -l app.kubernetes.io/instance=kube-prom-stack
If your addon uses a different label, adjust the commands accordingly. Common variations include app=prometheus, app.kubernetes.io/part-of=kube-prometheus-stack, or a custom release label. The examples below use app.kubernetes.io/instance=kube-prom-stack throughout.
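If you're not sure which labels your addon applied, listing pod labels in the observability namespace is a quick way to check (the namespace name may differ on your install):
# Inspect the labels the addon actually applied; recent MicroK8s uses the "observability" namespace
microk8s kubectl get pods -n observability --show-labels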
Step 1 — Dump everything to a single YAML
We’ll export all core workload types plus config and storage objects so nothing is missed.
# One-shot dump (workloads + config + storage), across all namespaces
microk8s kubectl get deploy,sts,ds,job,cronjob,svc,ep,ingress,cm,secret,sa,role,rolebinding,clusterrole,clusterrolebinding,pvc -A -l app.kubernetes.io/instance=kube-prom-stack -o yaml > kube-prom-stack.dump.yaml
Some stacks also install CRDs and CRD-backed resources such as ServiceMonitor, PodMonitor, and PrometheusRule. Dump them too if present:
# Optional but recommended (won't fail if kinds don't exist)
for kind in servicemonitor.monitoring.coreos.com podmonitor.monitoring.coreos.com prometheusrule.monitoring.coreos.com; do
  echo '---' >> kube-prom-stack.dump.yaml   # keep the appended output a valid multi-document stream
  microk8s kubectl get "$kind" -A -l app.kubernetes.io/instance=kube-prom-stack -o yaml >> kube-prom-stack.dump.yaml 2>/dev/null || true
done
Why a single file first? It's easier to archive and review. We'll split it in the next step.
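Before moving on, a quick sanity check shows how many resources each kubectl invocation captured (one count per document in the dump):
# One number per kubectl invocation in the dump
yq eval '.items | length' kube-prom-stack.dump.yaml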
Step 2 — Split the dump into logical files
You can do this by hand, but yq plus a few lines of shell makes it painless. One wrinkle: kubectl get wraps multi-resource output in a single kind: List object, so the script below first splits the dump per kubectl invocation, then explodes each List's .items into one file per resource.
# Make a staging folder
mkdir -p export-split

# Split the dump into one chunk per kubectl invocation
csplit -z -f export-split/chunk- kube-prom-stack.dump.yaml '/^---$/' '{*}'

# Explode each List into per-resource files named <namespace>_<Kind>_<name>.yaml
for f in export-split/chunk-*; do
  kind=$(yq eval '.kind // ""' "$f")
  [ -z "$kind" ] && { rm -f "$f"; continue; }
  if [ "$kind" = "List" ]; then
    count=$(yq eval '.items | length' "$f")
    for i in $(seq 0 $((count - 1))); do
      k=$(yq eval ".items[$i].kind" "$f")
      name=$(yq eval ".items[$i].metadata.name" "$f")
      ns=$(yq eval ".items[$i].metadata.namespace // \"default\"" "$f")
      yq eval ".items[$i]" "$f" > "export-split/${ns}_${k}_${name}.yaml"
    done
    rm -f "$f"
  else
    name=$(yq eval '.metadata.name // ""' "$f")
    ns=$(yq eval '.metadata.namespace // "default"' "$f")
    mv "$f" "export-split/${ns}_${kind}_${name}.yaml"
  fi
done
Now you have a pile of files like monitoring_Deployment_prometheus.yaml, default_Service_grafana.yaml, etc. This is our source material for Helm templating.
Step 3 — Create a Helm chart skeleton
helm create kube-prom-stack
cd kube-prom-stack
# Remove the example templates Helm generated; we’ll add our own
rm -rf templates/*
# Add helpers for names/labels
cat > templates/_helpers.tpl <<'EOF'
{{/*
Expand the chart name.
*/}}
{{- define "kube-prom-stack.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
*/}}
{{- define "kube-prom-stack.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{/*
Common labels
*/}}
{{- define "kube-prom-stack.labels" -}}
app.kubernetes.io/name: {{ include "kube-prom-stack.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
EOF
The helpers consolidate name/label logic across templates so you don't repeat yourself.
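The Deployments and Services below repeat their selector labels by hand. If you prefer, a small extra helper can generate them; this is optional and not part of the exported stack, and the component and ctx keys are names chosen here:
# Optional addition to templates/_helpers.tpl
{{- define "kube-prom-stack.selectorLabels" -}}
app.kubernetes.io/name: {{ .component }}
app.kubernetes.io/instance: {{ .ctx.Release.Name }}
{{- end -}}
Templates would then call it with, for example, {{ include "kube-prom-stack.selectorLabels" (dict "component" "prometheus" "ctx" $) | nindent 6 }}.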
Step 4 — Scaffold values.yaml with tunables
We’ll define images, persistence, service types, and Grafana credentials in values.yaml.
# values.yaml
global:
namespace: monitoring
prometheus:
image: prom/prometheus
tag: v2.54.0
replicas: 1
service:
type: ClusterIP
port: 9090
persistence:
enabled: true
storageClass: microk8s-hostpath
size: 20Gi
grafana:
image: grafana/grafana
tag: "10.4.1"
replicas: 1
service:
type: ClusterIP
port: 3000
adminUser: admin
adminPassword: admin123 # consider overriding via values or an existingSecret
ingress:
enabled: false
className: ""
hosts: []
tls: []
alertmanager:
image: prom/alertmanager
tag: v0.27.0
replicas: 1
service:
type: ClusterIP
port: 9093
rbac:
create: true
serviceMonitors:
enabled: true
prometheusRules:
  enabled: true
Keep credentials out of Git by using --set-file or a private values-prod.yaml. You can also support existingSecret patterns if you prefer Kubernetes-managed secrets.
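For example, a private override file plus --set-file keeps the Grafana password out of the chart itself (the file paths here are illustrative):
# values-prod.yaml stays outside Git; the referenced file holds only the password
helm install kube-prom ./kube-prom-stack \
  --namespace monitoring --create-namespace \
  -f values-prod.yaml \
  --set-file grafana.adminPassword=./secrets/grafana-admin-pass.txt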
Step 5 — Start templating the core Deployments
Prometheus (Deployment)
# templates/prometheus-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-prometheus
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
labels:
{{- include "kube-prom-stack.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.prometheus.replicas }}
selector:
matchLabels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: prometheus
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
serviceAccountName: {{ include "kube-prom-stack.fullname" . }}-prometheus
containers:
- name: prometheus
image: "{{ .Values.prometheus.image }}:{{ .Values.prometheus.tag }}"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
ports:
- name: http
containerPort: 9090
volumeMounts:
- name: data
mountPath: /prometheus
volumes:
- name: data
persistentVolumeClaim:
            claimName: {{ include "kube-prom-stack.fullname" . }}-prometheus-pvc
We match on labels we control and keep PVC names deterministic via fullname + suffix.
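One gap to watch: the container reads /etc/prometheus/prometheus.yml, but the Deployment above only mounts the data volume. If your dump exported the Prometheus config as a ConfigMap, mount it as well. A sketch, assuming you templated that ConfigMap as <fullname>-prometheus-config; adjust the name and key to what your export actually contains:
# Additions to templates/prometheus-deploy.yaml
          volumeMounts:
            - name: config
              mountPath: /etc/prometheus
            - name: data
              mountPath: /prometheus
      volumes:
        - name: config
          configMap:
            name: {{ include "kube-prom-stack.fullname" . }}-prometheus-config  # assumed name
        - name: data
          persistentVolumeClaim:
            claimName: {{ include "kube-prom-stack.fullname" . }}-prometheus-pvc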
Prometheus Service
# templates/prometheus-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-prometheus
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
labels:
{{- include "kube-prom-stack.labels" . | nindent 4 }}
spec:
type: {{ .Values.prometheus.service.type }}
ports:
- name: http
port: {{ .Values.prometheus.service.port }}
targetPort: http
selector:
app.kubernetes.io/name: prometheus
    app.kubernetes.io/instance: {{ .Release.Name }}
Services should select by pod template labels, not metadata labels, so scaling/restarts don't break routing.
Prometheus PVC
# templates/prometheus-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-prometheus-pvc
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
labels:
{{- include "kube-prom-stack.labels" . | nindent 4 }}
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: {{ .Values.prometheus.persistence.size }}
  storageClassName: {{ .Values.prometheus.persistence.storageClass }}
If you want persistence.enabled to actually gate storage, wrap this manifest in {{- if .Values.prometheus.persistence.enabled }} … {{- end }} and fall back to an emptyDir volume in the Deployment.
Step 6 — Grafana (Deployment, Service, Secret, optional Ingress)
Secret (admin creds)
# templates/grafana-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-grafana-auth
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
type: Opaque
stringData:
admin-user: {{ .Values.grafana.adminUser | quote }}
  admin-password: {{ .Values.grafana.adminPassword | quote }}
In production, handle secrets via an existingSecret or an external secret manager.
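One hedged way to support that pattern: render this Secret only when no existing one is supplied, and let the Deployment reference whichever name applies. grafana.existingSecret is a new value you'd add; the export does not produce it:
# templates/grafana-secret.yaml, wrapped so an existing Secret can be used instead
{{- if not .Values.grafana.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "kube-prom-stack.fullname" . }}-grafana-auth
  namespace: {{ .Values.global.namespace | default .Release.Namespace }}
type: Opaque
stringData:
  admin-user: {{ .Values.grafana.adminUser | quote }}
  admin-password: {{ .Values.grafana.adminPassword | quote }}
{{- end }}
In the Deployment, the secretKeyRef name then becomes {{ .Values.grafana.existingSecret | default (printf "%s-grafana-auth" (include "kube-prom-stack.fullname" .)) }}.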
Deployment
# templates/grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-grafana
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
labels:
{{- include "kube-prom-stack.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.grafana.replicas }}
selector:
matchLabels:
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: grafana
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: grafana
image: "{{ .Values.grafana.image }}:{{ .Values.grafana.tag }}"
env:
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
name: {{ include "kube-prom-stack.fullname" . }}-grafana-auth
key: admin-user
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "kube-prom-stack.fullname" . }}-grafana-auth
key: admin-password
ports:
- name: http
              containerPort: 3000
Service
# templates/grafana-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-grafana
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
labels:
{{- include "kube-prom-stack.labels" . | nindent 4 }}
spec:
type: {{ .Values.grafana.service.type }}
ports:
- name: http
port: {{ .Values.grafana.service.port }}
targetPort: http
selector:
app.kubernetes.io/name: grafana
    app.kubernetes.io/instance: {{ .Release.Name }}
Optional Ingress
# templates/grafana-ingress.yaml
{{- if .Values.grafana.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-grafana
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
spec:
  {{- with .Values.grafana.ingress.className }}
  ingressClassName: {{ . }}
  {{- end }}
rules:
{{- range .Values.grafana.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ include "kube-prom-stack.fullname" $ }}-grafana
port:
number: {{ $.Values.grafana.service.port }}
{{- end }}
tls:
{{- toYaml .Values.grafana.ingress.tls | nindent 4 }}
{{- end }}
Step 7 — Alertmanager, kube-state-metrics, node-exporter
These follow the same pattern: Deployment/DaemonSet + Service + (optional) PVC/ConfigMaps. For node-exporter, you’ll likely have a DaemonSet with host mounts and privileged mode.
# templates/node-exporter-ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ include "kube-prom-stack.fullname" . }}-node-exporter
namespace: {{ .Values.global.namespace | default .Release.Namespace }}
labels:
{{- include "kube-prom-stack.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: node-exporter
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: node-exporter
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
hostPID: true
hostNetwork: true
containers:
- name: node-exporter
image: prom/node-exporter:v1.8.2
args: ["--path.rootfs=/host"]
volumeMounts:
- name: host
mountPath: /host
readOnly: true
volumes:
- name: host
hostPath:
path: /
            type: Directory
Validate security context and host mounts against your environment and security policies.
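For reference, a hardening sketch for the node-exporter container might look like this; treat it as a starting point to adapt to your policy engine, not a prescription:
# Possible securityContext for the node-exporter container
          securityContext:
            runAsNonRoot: true
            runAsUser: 65534
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]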
Step 8 — CRD-backed resources (ServiceMonitor, PodMonitor, PrometheusRule)
If MicroK8s deployed the Prometheus Operator CRDs, you’ll see ServiceMonitor/PodMonitor/PrometheusRule objects. Keep them templated and togglable.
# templates/servicemonitors.yaml
{{- if .Values.serviceMonitors.enabled }}
{{- range $i, $sm := .Values.serviceMonitors.items | default list }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ include "kube-prom-stack.fullname" $ }}-{{ $sm.name }}
namespace: {{ $.Values.global.namespace | default $.Release.Namespace }}
spec:
{{- toYaml $sm.spec | nindent 2 }}
{{- end }}
{{- end }}
Then in values.yaml you can define serviceMonitors.items as raw snippets you copy from your dump.
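For instance, a single entry might look like this; the name and spec are illustrative, and in practice you'd paste the spec from your exported ServiceMonitor:
# values.yaml (illustrative entry)
serviceMonitors:
  enabled: true
  items:
    - name: grafana
      spec:
        selector:
          matchLabels:
            app.kubernetes.io/name: grafana
        endpoints:
          - port: http
            interval: 30s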
Same for rules:
# templates/prometheusrules.yaml
{{- if .Values.prometheusRules.enabled }}
{{- range $i, $rule := .Values.prometheusRules.items | default list }}
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: {{ include "kube-prom-stack.fullname" $ }}-{{ $rule.name }}
namespace: {{ $.Values.global.namespace | default $.Release.Namespace }}
spec:
{{- toYaml $rule.spec | nindent 2 }}
{{- end }}
{{- end }}
This approach avoids hardcoding dozens of CRD resources in templates; you paste them into values.yaml and keep control with feature flags.
Step 9 — Install safely (two migration options)
Option A — New namespace (safest)
# create a fresh namespace
microk8s kubectl create ns monitoring
# install your chart there
helm install kube-prom --namespace monitoring ./kube-prom-stack
# verify
microk8s kubectl get pods -n monitoring
Option B — Replace the addon in-place
- Scale down or disable the addon first to avoid name collisions:
microk8s disable observability
# or delete only the labeled resources
microk8s kubectl delete all,cm,secret,sa,role,rolebinding,clusterrole,clusterrolebinding,pvc -A -l app.kubernetes.io/instance=kube-prom-stack
- Install your Helm chart using the same names (via fullnameOverride if needed) so dashboards and PVCs align.
Helm won't adopt existing resources by default; either install into a clean namespace, delete the old objects first, or add Helm's ownership metadata to them so the new release can take them over.
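If you go the adoption route, Helm 3.2+ takes ownership of objects that carry its release annotations and managed-by label. A sketch for a single Deployment; the object name is illustrative, and you'd repeat this for every resource the chart will render:
# Mark an existing object as belonging to the upcoming release "kube-prom" in "monitoring"
microk8s kubectl -n monitoring annotate deployment prometheus \
  meta.helm.sh/release-name=kube-prom \
  meta.helm.sh/release-namespace=monitoring --overwrite
microk8s kubectl -n monitoring label deployment prometheus \
  app.kubernetes.io/managed-by=Helm --overwrite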
Step 10 — Test & validate
# Port-forward Prometheus
microk8s kubectl -n monitoring port-forward svc/kube-prom-kube-prom-stack-prometheus 9090:9090
# Port-forward Grafana
microk8s kubectl -n monitoring port-forward svc/kube-prom-kube-prom-stack-grafana 3000:3000
Check targets, alert rules, and dashboards. Confirm PVCs are bound against your storageClass (MicroK8s typically uses microk8s-hostpath).
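With the port-forwards running, a few quick checks from another shell confirm the basics:
# Prometheus liveness and scrape targets
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/api/v1/targets | head
# Grafana health endpoint
curl -s http://localhost:3000/api/health
# PVCs bound?
microk8s kubectl get pvc -n monitoring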
Step 11 — Package and publish the chart
# from the chart root
helm lint
helm package .
# push to an OCI registry (example: Azure Container Registry)
helm registry login myacr.azurecr.io
helm push ./kube-prom-stack-0.1.0.tgz oci://myacr.azurecr.io/helm
# Only needed on Helm < 3.8: export HELM_EXPERIMENTAL_OCI=1
For GitOps, push the chart and a values-prod.yaml to your repo; let Argo CD or Flux manage the release lifecycle.
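As one hedged example of the Flux route, an OCI-backed HelmRepository plus a HelmRelease could drive the chart we just pushed; every name and URL below is a placeholder for your own setup:
# flux/kube-prom.yaml (sketch)
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: my-charts
  namespace: flux-system
spec:
  type: oci
  url: oci://myacr.azurecr.io/helm
  interval: 10m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kube-prom
  namespace: monitoring
spec:
  interval: 10m
  chart:
    spec:
      chart: kube-prom-stack
      version: 0.1.0
      sourceRef:
        kind: HelmRepository
        name: my-charts
        namespace: flux-system
  valuesFrom:
    - kind: Secret
      name: kube-prom-values-prod   # key "values.yaml" holds your private overrides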
Troubleshooting
- Pods won't start / CrashLoopBackOff — Check mismatched selectors (the Service selector must match the pod template labels). Verify volumes and security contexts. See the diagnostic commands after this list.
- No dashboards/targets — If the addon used CRD-backed resources, make sure you imported the ServiceMonitor/PodMonitor/PrometheusRule definitions into values.yaml or templates.
- PVC Pending — StorageClass name typo, or MicroK8s hostpath storage isn't enabled: microk8s enable hostpath-storage (storage is the older alias).
- 403 listing targets — Grafana/Prometheus RBAC scoped too narrowly; enable rbac.create or add the specific ClusterRoles.
- Name collisions — You didn't uninstall the addon before installing the chart with the same names. Use a clean namespace or delete the originals.
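A few generic commands cover most of these; the namespace and names assume the Option A install from Step 9:
# Does the Service actually have endpoints?
microk8s kubectl -n monitoring get endpoints
# Why is a pod stuck?
microk8s kubectl -n monitoring describe pod -l app.kubernetes.io/name=prometheus
microk8s kubectl -n monitoring logs deploy/kube-prom-kube-prom-stack-prometheus
# Is the PVC bound and the StorageClass present?
microk8s kubectl get sc
microk8s kubectl -n monitoring get pvc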
Appendix A — The splitter as a standalone script
#!/usr/bin/env bash
set -euo pipefail
SRC="kube-prom-stack.dump.yaml"
OUT="export-split"
mkdir -p "$OUT"
# One chunk per kubectl invocation, then one file per resource inside each List
csplit -z -f "$OUT/chunk-" "$SRC" '/^---$/' '{*}'
for f in "$OUT"/chunk-*; do
  kind=$(yq eval '.kind // ""' "$f"); [ -z "$kind" ] && { rm -f "$f"; continue; }
  if [ "$kind" = "List" ]; then
    count=$(yq eval '.items | length' "$f")
    for i in $(seq 0 $((count - 1))); do
      k=$(yq eval ".items[$i].kind" "$f")
      name=$(yq eval ".items[$i].metadata.name" "$f")
      ns=$(yq eval ".items[$i].metadata.namespace // \"default\"" "$f")
      yq eval ".items[$i]" "$f" > "$OUT/${ns}_${k}_${name}.yaml"
    done
    rm -f "$f"
  else
    name=$(yq eval '.metadata.name // "noname"' "$f")
    ns=$(yq eval '.metadata.namespace // "default"' "$f")
    mv "$f" "$OUT/${ns}_${kind}_${name}.yaml"
  fi
done
Appendix B — Example tree after templating
kube-prom-stack/
├── Chart.yaml
├── values.yaml
└── templates/
├── _helpers.tpl
├── alertmanager-deploy.yaml
├── grafana-deploy.yaml
├── grafana-ingress.yaml
├── grafana-secret.yaml
├── grafana-svc.yaml
├── node-exporter-ds.yaml
├── prometheus-deploy.yaml
├── prometheus-pvc.yaml
├── prometheus-svc.yaml
├── prometheusrules.yaml
    └── servicemonitors.yaml
Final thoughts
The MicroK8s addon is fantastic for quick starts, but moving the stack into Helm gives you repeatability and control. The recipe above keeps your exported resources intact while layering on the Helm features you actually need: sane naming, configurable values, and optional CRD resources controlled in one values.yaml.