Upgrade Advisory

This documentation is for Flux (v1) and Helm Operator (v1). Both projects are in maintenance mode and will soon reach end-of-life. We strongly recommend you familiarise yourself with the newest Flux and start looking at your migration path.

For documentation regarding the latest Flux, please refer to this section.

Helm chart

The Helm Operator chart bootstraps the Helm Operator on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes >=v1.13

Installation

Add the Flux CD Helm repository:

helm repo add fluxcd https://charts.fluxcd.io

Install the HelmRelease Custom Resource Definition. Adding this CRD makes it possible to define HelmRelease resources on the cluster:

kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/{{ version }}/deploy/crds.yaml
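You can verify that the CRD has been registered by listing it by name; a quick check, assuming the CRD name that follows from the helm.fluxcd.io API group and the HelmRelease kind:

kubectl get crd helmreleases.helm.fluxcd.io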

Install the Helm Operator using the chart:

Chart defaults (Helm 2 and 3):

# Default with support for Helm 2 and 3 enabled
# NB: the presence of Tiller is a requirement when
# Helm 2 is enabled.
helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux

Helm 3:

# Only Helm 3 support enabled using helm.versions
helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    --set helm.versions=v3

Helm 2:

# Only Helm 2 support enabled using helm.versions
# NB: the presence of Tiller is a requirement when
# Helm 2 is enabled.
helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    --set helm.versions=v2
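Whichever variant you choose, you can confirm the operator came up by waiting for its deployment to roll out; a sketch that assumes the release name helm-operator and the flux namespace used above:

kubectl rollout status deployment/helm-operator --namespace flux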

Configuration

The following table lists the configurable parameters of the Helm Operator chart and their default values; an example values file follows the table.

| Parameter | Default | Description |
| --- | --- | --- |
| image.repository | docker.io/fluxcd/helm-operator | Image repository |
| image.tag | {{ version }} | Image tag |
| image.pullPolicy | IfNotPresent | Image pull policy |
| image.pullSecret | None | Image pull secret |
| resources.requests.cpu | 50m | CPU resource requests for the deployment |
| resources.requests.memory | 64Mi | Memory resource requests for the deployment |
| resources.limits.cpu | None | CPU resource limits for the deployment |
| resources.limits.memory | None | Memory resource limits for the deployment |
| nodeSelector | {} | Node selector properties for the deployment |
| tolerations | [] | Tolerations properties for the deployment |
| affinity | {} | Affinity properties for the deployment |
| extraVolumeMounts | [] | Extra volume mounts to be added to the Helm Operator pod(s) |
| extraVolumes | [] | Extra volumes to be added to the Helm Operator pod(s) |
| priorityClassName | "" | Set priority class for Helm Operator |
| terminationGracePeriodSeconds | 300 | Set terminationGracePeriod in seconds for Helm Operator |
| extraEnvs | [] | Extra environment variables for the Helm Operator pod(s) |
| podAnnotations | {} | Additional pod annotations |
| podLabels | {} | Additional pod labels |
| rbac.create | true | If true, create and use RBAC resources |
| rbac.pspEnabled | false | If true, create and use a restricted pod security policy for Helm Operator pod(s) |
| serviceAccount.create | true | If true, create a new service account |
| serviceAccount.annotations | {} | Additional service account annotations |
| serviceAccount.name | flux | Service account to be used |
| clusterRole.create | true | If false, Helm Operator will be restricted to the namespace where it is deployed |
| clusterRole.name | None | The name of a cluster role to bind to |
| createCRD | false | Install the HelmRelease CRD. Setting this value only has effect for Helm 2, as Helm 3 uses --skip-crds and will skip installation if the CRD is already present. Managing CRDs outside of Helm is recommended, also see the Helm best practices |
| service.type | ClusterIP | Service type to be used (exposing the Helm Operator API outside of the cluster is not advised) |
| service.port | 3030 | Service port to be used |
| updateChartDeps | true | Update dependencies for charts |
| git.pollInterval | 5m | Period on which to poll git chart sources for changes |
| git.timeout | 20s | Duration after which git operations time out |
| git.defaultRef | master | Ref to clone chart from if ref is unspecified in a HelmRelease |
| git.ssh.secretName | None | The name of the Kubernetes secret with the SSH private key, supersedes git.secretName |
| git.ssh.known_hosts | None | The contents of an SSH known_hosts file, if you need to supply host key(s) |
| git.ssh.configMapName | None | The name of a Kubernetes config map containing the SSH config |
| git.ssh.configMapKey | config | The name of the key in the Kubernetes config map specified above |
| git.config.enabled | false | If true, mount the .gitconfig created from git.config.data into the Helm Operator pod |
| git.config.secretName | None | The name of the Kubernetes secret with .gitconfig data. It can be created manually or automatically using git.config.createSecret and git.config.data |
| git.config.createSecret | true | If true, create the Kubernetes secret with the value of git.config.data |
| git.config.data | None | The .gitconfig to be mounted into the home directory of the Helm Operator pod |
| chartsSyncInterval | 3m | Period on which to reconcile the Helm releases with HelmRelease resources |
| statusUpdateInterval | 30s | Period on which to update the Helm release status in HelmRelease resources |
| workers | 4 | Number of workers processing releases |
| logFormat | fmt | Log format (fmt or json) |
| logReleaseDiffs | false | Helm Operator should log the diff when a chart release diverges (possibly insecure) |
| allowNamespace | None | If set, this limits the scope to a single namespace. If not specified, all namespaces will be watched |
| helm.versions | v2,v3 | Helm versions supported by this operator instance, if v2 is specified then Tiller is required |
| tillerNamespace | kube-system | Namespace in which the Tiller server can be found |
| tillerSidecar.enabled | false | Whether to deploy Tiller as a sidecar (listening on localhost only) |
| tillerSidecar.image.repository | gcr.io/kubernetes-helm/tiller | Image repository to use for the Tiller sidecar |
| tillerSidecar.image.tag | v2.16.1 | Image tag to use for the Tiller sidecar |
| tillerSidecar.storage | secret | Storage engine to use for the Tiller sidecar |
| tls.enable | false | Enable TLS for communicating with Tiller |
| tls.verify | false | Verify the Tiller certificate, also enables TLS when set to true |
| tls.secretName | helm-client-certs | Name of the secret containing the TLS client certificates for communicating with Tiller |
| tls.keyFile | tls.key | Name of the key file within the k8s secret |
| tls.certFile | tls.crt | Name of the certificate file within the k8s secret |
| tls.caContent | None | Certificate Authority content used to validate the Tiller server certificate |
| tls.hostname | None | The server name used to verify the hostname on the returned certificates from the Tiller server |
| configureRepositories.enable | false | Enable volume mount for a repositories.yaml configuration file and repository cache |
| configureRepositories.volumeName | repositories-yaml | Name of the volume for the repositories.yaml file |
| configureRepositories.secretName | flux-helm-repositories | Name of the secret containing the contents of the repositories.yaml file |
| configureRepositories.cacheVolumeName | repositories-cache | Name for the repository cache volume |
| configureRepositories.repositories | None | List of custom Helm repositories to add. If non-empty, the corresponding secret with a repositories.yaml will be created |
| initPlugins.enable | false | Enable the initialization of Helm plugins using init containers |
| initPlugins.cacheVolumeName | plugins-cache | Name for the plugins cache volume |
| initPlugins.plugins | None | List of Helm plugins to initialize before starting the operator. If non-empty, an init container will be added for every entry |
| kube.config | None | Override for kubectl default config in the Helm Operator pod(s) |
| prometheus.enabled | false | If enabled, adds Prometheus annotations to Helm Operator pod(s) |
| prometheus.serviceMonitor.create | false | Set to true if using the Prometheus Operator |
| prometheus.serviceMonitor.interval | None | Interval at which metrics should be scraped |
| prometheus.serviceMonitor.scrapeTimeout | None | The timeout to configure the service monitor scrape task, e.g. 5s |
| prometheus.serviceMonitor.namespace | None | The namespace where the ServiceMonitor is deployed |
| prometheus.serviceMonitor.additionalLabels | {} | Additional labels to add to the ServiceMonitor |
| livenessProbe.initialDelaySeconds | 1 | The initial delay in seconds before the first liveness probe is initiated |
| livenessProbe.periodSeconds | 10 | The number of seconds between liveness probe checks |
| livenessProbe.timeoutSeconds | 5 | The number of seconds after which the liveness probe times out |
| livenessProbe.successThreshold | 1 | The minimum number of consecutive successful probe results for the liveness probe to be considered successful |
| livenessProbe.failureThreshold | 3 | The number of times the liveness probe can fail before the container is restarted |
| readinessProbe.initialDelaySeconds | 1 | The initial delay in seconds before the first readiness probe is initiated |
| readinessProbe.periodSeconds | 10 | The number of seconds between readiness probe checks |
| readinessProbe.timeoutSeconds | 5 | The number of seconds after which the readiness probe times out |
| readinessProbe.successThreshold | 1 | The minimum number of consecutive successful probe results for the readiness probe to be considered successful |
| readinessProbe.failureThreshold | 3 | The number of times the readiness probe can fail before the container is marked as unready |
| initContainers | [] | Init containers and their specs |
| hostAliases | {} | Host aliases allow the modification of the hosts file (/etc/hosts) inside the Helm Operator container. See https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ |
| dashboards.enabled | false | If enabled, the Helm Operator will create a configmap with a dashboard in JSON that will be picked up by Grafana (see sidecar.dashboards.enabled) |
| dashboards.namespace | "" | The namespace where the dashboard is deployed, defaults to the installation namespace |
| dashboards.nameprefix | flux-dashboards | The prefix of the generated configmaps |
| securityContext | {} | securityContext options to add to the pod |
| containerSecurityContext.helmOperator | {} | securityContext options to add to the Helm Operator container |
| containerSecurityContext.tiller | {} | securityContext options to add to the Tiller container |
| sidecarContainers | {} | Sidecar containers along with their specifications |
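As an alternative to repeated --set flags, these parameters can be collected in a values file and passed with -f. A minimal sketch, in which the file name my-values.yaml and the chosen values are illustrative only:

# my-values.yaml
helm:
  versions: "v3"
git:
  pollInterval: "10m"
  timeout: "30s"
workers: 2
resources:
  limits:
    cpu: 500m
    memory: 256Mi
prometheus:
  enabled: true

helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    -f my-values.yaml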

How-to

Use a custom Helm repository

Public Helm chart repositories that do not require any authentication do not have to be configured and can simply be referenced by their URL in a HelmRelease resource. However, for Helm chart repositories that do require authentication, repository entries with the credentials need to be added so that the Helm Operator is able to authenticate against the repository.

Helm chart repository entries can be added with the chart using the configureRepositories.repositories value, which accepts an array of objects with the following keys:

| Key | Description |
| --- | --- |
| name | The name (alias) for the Helm chart repository |
| url | The URL of the Helm chart repository |
| username | Helm chart repository username |
| password | Helm chart repository password |
| certFile | The path to an SSL certificate file used to identify the HTTPS client |
| keyFile | The path to an SSL key file used to identify the HTTPS client |
| caFile | The path to a CA bundle used to verify HTTPS-enabled servers |

For example, to add a Helm chart repository with username and password protection:

helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    --set configureRepositories.enable=true \
    --set 'configureRepositories.repositories[0].name=example' \
    --set 'configureRepositories.repositories[0].url=https://charts.example.com' \
    --set 'configureRepositories.repositories[0].username=john' \
    --set 'configureRepositories.repositories[0].password=s3cr3t!'
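The same repository entries can also be provided through a values file instead of --set flags; a sketch using the values from the example above:

configureRepositories:
  enable: true
  repositories:
    - name: example
      url: https://charts.example.com
      username: john
      password: s3cr3t!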

After adding the entry, the Helm chart in the repository can then be referred to by the URL of the repository as usual:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: awesome-example
spec:
  chart:
    repository: https://charts.example.com
    version: 1.0.0
    name: awesome

Use a private Git server

When using a private Git server to host your charts, setting the git.ssh.known_hosts variable is required for host key verification to succeed, because StrictHostKeyChecking is enabled during git pull operations.

By setting the git.ssh.known_hosts variable, a configmap called helm-operator-ssh-config will be created and mounted into a volume named sshdir at /root/.ssh/known_hosts.

Get the known hosts keys by running the following command:

ssh-keyscan <your_git_host_domain> > /tmp/flux_known_hosts

Generate an SSH key named identity and create a secret with it:

ssh-keygen -q -N "" -f /tmp/identity
kubectl create secret generic helm-operator-ssh \
    --from-file=/tmp/identity \
    --namespace flux

Add identity.pub as a read-only deployment key in your Git repo and install the Helm Operator:

helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    --set git.ssh.secretName=helm-operator-ssh \
    --set-file git.ssh.known_hosts=/tmp/flux_known_hosts

You can refer to a chart from your private Git with:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: some-app
  namespace: default
spec:
  releaseName: some-app
  chart:
    git: git@your_git_host_domain:org/repo
    ref: master
    path: charts/some-app
  values:
    replicaCount: 1

Use Flux’s Git deploy key

You can configure the Helm Operator to use the Git SSH key generated by Flux.

Assuming you have installed Flux with:

helm upgrade -i flux fluxcd/flux \
    --namespace flux \
    --set git.url=git@github.com:org/repo

When installing Helm Operator, you can refer to the Flux deploy key by the name of the Kubernetes Secret:

helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    --set git.ssh.secretName=flux-git-deploy

The deploy key naming convention is <Flux Release Name>-git-deploy.
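To confirm the deploy key secret exists before pointing the Helm Operator at it, you can look it up by name; this assumes the Flux release name flux and the flux namespace used above:

kubectl get secret flux-git-deploy --namespace flux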

Use Helm downloader plugins

Helm downloader plugins like hypnoglow/helm-s3 and hayorov/helm-gcs make it possible to extend the protocols Helm recognizes, e.g. to pull charts from an S3 bucket.

The chart offers a utility to install plugins before starting the operator, using init containers:

helm upgrade -i helm-operator fluxcd/helm-operator \
    --namespace flux \
    --set initPlugins.enable=true \
    --set 'initPlugins.plugins[0].plugin=https://github.com/hypnoglow/helm-s3.git' \
    --set 'initPlugins.plugins[0].version=0.9.2' \
    --set 'initPlugins.plugins[0].helmVersion=v3'
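The equivalent configuration as a values-file snippet, mirroring the flags used above:

initPlugins:
  enable: true
  plugins:
    - plugin: https://github.com/hypnoglow/helm-s3.git
      version: 0.9.2
      helmVersion: v3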

Note: Most plugins assume credentials are available on the system they run on; make sure those are available at the expected paths using e.g. extraVolumes and extraVolumeMounts.
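For instance, credentials for a plugin could be mounted from a secret into the operator's home directory. A sketch only: the secret name helm-s3-credentials and the mount path /root/.aws are assumptions, not chart defaults:

extraVolumes:
  - name: plugin-credentials
    secret:
      secretName: helm-s3-credentials
extraVolumeMounts:
  - name: plugin-credentials
    mountPath: /root/.aws
    readOnly: true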

You should now be able to make use of the protocol added by the plugin:

cat <<EOF | kubectl apply -f -
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: chart-from-s3
  namespace: default
spec:
  chart:
    repository: s3://bucket-name/charts
    name: chart
    version: 0.1.0
  values:
    replicaCount: 1
EOF

Uninstall

To uninstall/delete the helm-operator Helm release:

helm delete --purge helm-operator

The command removes all the Kubernetes components associated with the chart and deletes the release.
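Note that helm delete --purge is Helm 2 syntax; if you installed the chart with Helm 3, the equivalent is:

helm uninstall helm-operator --namespace flux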

Note: helm delete will not remove the HelmRelease CRD. Deleting the CRD will trigger a cascade delete of all Helm release objects.
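If you do want to remove the CRD, and with it all HelmRelease objects on the cluster, it can be deleted with kubectl; the CRD name below follows from the helm.fluxcd.io API group and the HelmRelease kind:

kubectl delete crd helmreleases.helm.fluxcd.io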