Bootstrap cheatsheet

This cheatsheet showcases various configurations of the Flux controllers at bootstrap time.

How to customize Flux

To customize the Flux controllers during bootstrap, first you’ll need to create a Git repository and clone it locally.
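
For example, clone your repository and change into it (the URL placeholders match the bootstrap command below):

git clone ssh://git@<host>/<org>/<repository>
cd <repository>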

Create the file structure required by bootstrap with:

mkdir -p clusters/my-cluster/flux-system
touch clusters/my-cluster/flux-system/gotk-components.yaml \
    clusters/my-cluster/flux-system/gotk-sync.yaml \
    clusters/my-cluster/flux-system/kustomization.yaml

The Flux controller deployments, container command arguments, node affinity, and other settings can be customized using Kustomize strategic merge patches and JSON patches.

You can make changes to all controllers using a single patch or target a specific controller:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: # manifests generated during bootstrap
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  # target all controllers
  - patch: | 
      # strategic merge or JSON patch
    target:
      kind: Deployment
      labelSelector: "app.kubernetes.io/part-of=flux"
  # target controllers by name
  - patch: |
      # strategic merge or JSON patch
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"
  # add a command argument to a single controller
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --aws-autologin-for-ecr=true      
    target:
      kind: Deployment
      name: "image-reflector-controller"

Push the changes to the main branch:

git add -A && git commit -m "init flux" && git push

Then run bootstrap for the clusters/my-cluster path:

flux bootstrap git \
  --url=ssh://git@<host>/<org>/<repository> \
  --branch=main \
  --path=clusters/my-cluster

To make further amendments, pull the changes locally, edit the kustomization.yaml file, push the changes upstream and rerun bootstrap.
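
For example, a typical amendment round-trip looks like this (the commit message is arbitrary):

git pull
# edit clusters/my-cluster/flux-system/kustomization.yaml
git add -A && git commit -m "update flux patches" && git push
flux bootstrap git \
  --url=ssh://git@<host>/<org>/<repository> \
  --branch=main \
  --path=clusters/my-cluster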

Customization examples

Safe to evict

Allow the cluster autoscaler to evict the Flux controller pods:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: all
      spec:
        template:
          metadata:
            annotations:
              cluster-autoscaler.kubernetes.io/safe-to-evict: "true"      
    target:
      kind: Deployment
      labelSelector: app.kubernetes.io/part-of=flux
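
To verify that the annotation was applied after bootstrap, you can inspect one of the controller deployments (source-controller is used here only as an example):

kubectl -n flux-system get deployment source-controller \
  -o jsonpath='{.spec.template.metadata.annotations}'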

Node affinity and tolerations

Pin the Flux controllers to specific nodes:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: all
      spec:
        template:
          spec:
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: role
                          operator: In
                          values:
                            - flux
            tolerations:
              - effect: NoSchedule
                key: role
                operator: Equal
                value: flux      
    target:
      kind: Deployment
      labelSelector: app.kubernetes.io/part-of=flux

The above configuration pins Flux to nodes tainted and labeled with:

kubectl taint nodes <my-node> role=flux:NoSchedule
kubectl label nodes <my-node> role=flux

Increase the number of workers

If Flux is managing hundreds of applications, you may want to increase the number of reconciliations that can be performed in parallel and bump the resource limits:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --concurrent=20
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --kube-api-qps=500
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --kube-api-burst=1000
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --requeue-dependency=5s      
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller|source-controller)"
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: all
      spec:
        template:
          spec:
            containers:
              - name: manager
                resources:
                  limits:
                    cpu: 2000m
                    memory: 2Gi      
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller|source-controller)"

Enable Helm repositories caching

For large Helm repository index files, you can enable caching to reduce the memory footprint of source-controller:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-cache-max-size=10
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-cache-ttl=60m
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-cache-purge-interval=5m      
    target:
      kind: Deployment
      name: source-controller

When helm-cache-max-size is reached, an error is logged and the index is read from file instead. Cache hits are exposed via the gotk_cache_events_total Prometheus metric; use this data to fine-tune the configuration flags.
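
To inspect the metric directly, you can port-forward to the controller and scrape its metrics endpoint (this sketch assumes the default metrics port of 8080):

# in one terminal: forward the metrics port
kubectl -n flux-system port-forward deployment/source-controller 8080

# in another terminal: scrape the cache metrics
curl -s http://localhost:8080/metrics | grep gotk_cache_events_total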

Using HTTP/S proxy for egress traffic

If your cluster must use an HTTP proxy to reach GitHub or other external services, you must set NO_PROXY=.cluster.local.,.cluster.local,.svc to allow the Flux controllers to talk to each other:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: all
      spec:
        template:
          spec:
            containers:
              - name: manager
                env:
                  - name: "HTTPS_PROXY"
                    value: "http://proxy.example.com:3129"
                  - name: "NO_PROXY"
                    value: ".cluster.local.,.cluster.local,.svc"      
    target:
      kind: Deployment
      labelSelector: app.kubernetes.io/part-of=flux

Test release candidates

To test release candidates, you can override the controller image tags using the Kustomize images transformer:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
images:
  - name: ghcr.io/fluxcd/source-controller
    newTag: rc-254ba51d
  - name: ghcr.io/fluxcd/kustomize-controller
    newTag: rc-ca0a9b8
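
After bootstrap, you can list the images that are actually running, for example:

kubectl -n flux-system get deployments \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[0].image}{"\n"}{end}'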

OpenShift compatibility

First grant the nonroot security context constraint (SCC) to the Flux controller service accounts, allowing them to run as non-root:

#!/usr/bin/env bash
set -e

FLUX_NAMESPACE="flux-system"
FLUX_CONTROLLERS=(
  "source-controller"
  "kustomize-controller"
  "helm-controller"
  "notification-controller"
  "image-reflector-controller"
  "image-automation-controller"
)

for controller in "${FLUX_CONTROLLERS[@]}"; do
  oc adm policy add-scc-to-user nonroot "system:serviceaccount:${FLUX_NAMESPACE}:${controller}"
done

Then set the container user to nobody and delete the seccomp profile from the security context:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: all
      spec:
        template:
          spec:
            containers:
              - name: manager
                securityContext:
                  runAsUser: 65534
                  seccompProfile:
                    $patch: delete      
    target:
      kind: Deployment
      labelSelector: app.kubernetes.io/part-of=flux

IAM Roles for Service Accounts

To allow Flux to access an AWS service such as KMS or S3, first set up IAM Roles for Service Accounts (IRSA), then annotate the controller service accounts with the role ARN:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: kustomize-controller
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<KMS-ROLE-NAME>      
    target:
      kind: ServiceAccount
      name: kustomize-controller
  - patch: |
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: source-controller
        annotations:
          eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<S3-ROLE-NAME>      
    target:
      kind: ServiceAccount
      name: source-controller
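
The IAM role itself can be created in whatever way you manage AWS resources. As one illustrative sketch using eksctl (the cluster name, policy ARN, and role name are placeholders):

# illustrative only: create an IAM role for the kustomize-controller
# service account without touching the service account itself
eksctl create iamserviceaccount \
  --name=kustomize-controller \
  --namespace=flux-system \
  --cluster=<CLUSTER_NAME> \
  --attach-policy-arn=<KMS_POLICY_ARN> \
  --role-name=<KMS-ROLE-NAME> \
  --role-only \
  --approve

The --role-only flag creates the IAM role without creating or annotating the Kubernetes service account, which is already managed by Flux and patched as shown above.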

Multi-tenancy lockdown

Lock down Flux on a multi-tenant cluster by disabling cross-namespace references and Kustomize remote bases, and by setting a default service account:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --no-cross-namespace-refs=true      
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller|notification-controller|image-reflector-controller|image-automation-controller)"
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --no-remote-bases=true      
    target:
      kind: Deployment
      name: "kustomize-controller"
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --default-service-account=default      
    target:
      kind: Deployment
      name: "(kustomize-controller|helm-controller)"
  - patch: |
      - op: add
        path: /spec/serviceAccountName
        value: kustomize-controller      
    target:
      kind: Kustomization
      name: "flux-system"