CloudNativePG: Run PostgreSQL inside Kubernetes

Using CloudNativePG you can run your own PostgreSQL database inside your Kubernetes cluster.

Install CloudNativePG

To install CloudNativePG, you should first check the latest version on GitHub.
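
For example, you can query the GitHub releases API to find the latest tag (assuming curl is available on your machine):

curl -s https://api.github.com/repos/cloudnative-pg/cloudnative-pg/releases/latest \
  | grep '"tag_name"'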

Once you know the latest version, adapt the following command accordingly:

kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.25/releases/cnpg-1.25.1.yaml

You can verify the installation by running:

kubectl get deploy -n cnpg-system cnpg-controller-manager
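
If you prefer to wait until the operator is fully ready, you can also watch the rollout:

kubectl rollout status deployment -n cnpg-system cnpg-controller-manager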

Deploy a PostgreSQL cluster

To deploy a PostgreSQL cluster, we need to create a YAML file with the basic deployment.

vi postgres-basic-deployment.yaml

This is an example from the official documentation:

# Example of PostgreSQL cluster
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3

  # Example of rolling update strategy:
  # - unsupervised: automated update of the primary once all
  #                 replicas have been upgraded (default)
  # - supervised: requires manual supervision to perform
  #               the switchover of the primary
  primaryUpdateStrategy: unsupervised

  # Require 1Gi of space
  storage:
    size: 1Gi
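
The storage section above relies on the default StorageClass of your cluster. If you don't have one, or want to be explicit, you can point it at a specific class (the name standard below is just an example):

storage:
  size: 1Gi
  storageClass: standard  # replace with a StorageClass available in your cluster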

Next, we apply the deployment.

kubectl apply -f postgres-basic-deployment.yaml

Once applied, we can check the pod status.

kubectl get pods
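
We can also inspect the Cluster resource itself, which reports the overall health of the instances:

kubectl get cluster cluster-example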

Production-ready cluster

To deploy CloudNativePG in production, we have to make some changes.

The deployment YAML file should be modified to include the configuration needed to enable backups.
I'm going to use an S3 bucket to store the backups, but you can use another storage solution.

Here is my production-ready CloudNativePG deployment example:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgresql-cluster
  namespace: postgresql
spec:
  instances: 3
  imageName: ghcr.io/cloudnative-pg/postgresql:17.4-13
  primaryUpdateStrategy: unsupervised

  # Storage configuration
  storage:
    size: 1Gi

  # Monitoring
  monitoring:
    enablePodMonitor: true

  # Affinity rules to spread instances across availability zones
  affinity:
    enablePodAntiAffinity: true
    topologyKey: topology.kubernetes.io/zone

  # Backup configuration
  backup:
    barmanObjectStore: &barmanObjectStore
      destinationPath: s3://postgresql-k8s/
      endpointURL: https://minio:9000
      # Note: serverName version needs to be incremented
      # when recovering from an existing cnpg cluster
      serverName: ¤tCluster postgres17-v2
      s3Credentials:
        accessKeyId:
          name: minio-credentials
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio-credentials
          key: ACCESS_SECRET_KEY
      wal:
        compression: bzip2
        encryption: AES256
      data:
        compression: bzip2
        encryption: AES256
        jobs: 2
    retentionPolicy: "14d"  # Keep backups for 14 days

  # Note: previousCluster needs to be set to the name of the previous
  # cluster when recovering from an existing cnpg cluster
  bootstrap:
    recovery:
      source: &previousCluster postgres17-v1

  # Note: externalClusters is needed when recovering from an existing cnpg cluster
  externalClusters:
    - name: *previousCluster
      barmanObjectStore:
        <<: *barmanObjectStore
        serverName: *previousCluster

  resources:
    requests:
      memory: "512Mi"
      cpu: "1"
    limits:
      memory: "1Gi"
      cpu: "2"
---
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: postgresql-cluster-scheduled-backup
  namespace: postgresql
spec:
  schedule: "0 0 0 * * 0" # Every Sunday at midnight
  backupOwnerReference: self
  cluster:
    name: postgresql-cluster
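
The manifest above bootstraps the new cluster by recovering from the backups of a previous cluster (postgres17-v1). If you are creating a brand-new cluster instead, you can drop the externalClusters section and replace the recovery bootstrap with an initdb one; here is a minimal sketch of that alternative:

bootstrap:
  initdb:
    database: app  # application database created at bootstrap
    owner: app     # owner of the application database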

I want to deploy the CloudNativePG cluster in a namespace named postgresql, so I run the following command:

kubectl create namespace postgresql

Also, before applying the deployment, we should create the minio-credentials secret holding the ACCESS_KEY_ID and ACCESS_SECRET_KEY values.

kubectl create secret generic minio-credentials \
  --namespace postgresql \
  --from-literal=ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID \
  --from-literal=ACCESS_SECRET_KEY=YOUR_ACCESS_SECRET_KEY

Running the following command, we can verify the secret has been created:

kubectl get secret minio-credentials -n postgresql
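
With the namespace and the secret in place, we can apply the production manifest (assuming it was saved as postgresql-cluster.yaml) and watch the pods come up:

kubectl apply -f postgresql-cluster.yaml
kubectl get pods -n postgresql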

On-demand backup

To perform a manual backup of the cluster, we can create a Backup resource:

apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: postgresql-cluster-backup
  namespace: postgresql
spec:
  cluster:
    name: postgresql-cluster

kubectl apply -f postgresql-backup.yaml

Once applied, the backup of the cluster will start. We can check its progress with:

kubectl describe backup -n postgresql postgresql-cluster-backup
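
We can also list every backup taken for the cluster and check its phase:

kubectl get backups -n postgresql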

Expose the cluster via a NodePort

If we want to expose the PostgreSQL cluster outside of Kubernetes, one option is to define a NodePort service.

apiVersion: v1
kind: Service
metadata:
  name: postgresql-nodeport
  namespace: postgresql
spec:
  type: NodePort
  selector:
    cnpg.io/cluster: postgresql-cluster
    role: primary
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
      nodePort: 30432

Deploying this service will expose our postgresql-cluster on port 30432. To connect, we can use any node's IP address, as the port will be available on all nodes.
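
To test the connection from outside the cluster, here is a minimal sketch, assuming the default app user and the auto-generated postgresql-cluster-app secret (adjust the names if your credentials differ):

# Retrieve the application user's password from the secret managed by CloudNativePG
kubectl get secret postgresql-cluster-app -n postgresql \
  -o jsonpath='{.data.password}' | base64 -d

# Connect through any node's IP address on the NodePort
psql -h <node-ip> -p 30432 -U app -d app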