Deploying Unifi Controller on Kubernetes

Ryan Welch / September 21, 2021


On my network I have several Ubiquiti devices: a Unifi Security Gateway, two Unifi APs and a Switch Flex Mini. To control these devices Ubiquiti recommends getting a Unifi Cloud Key or self-hosting the controller on the same local network. However, I already have a managed Kubernetes cluster for some self-hosted services, so why get yet another device?

note

The current config has a few issues. Most importantly, the captive portal, STUN service and discovery service do not work.

The STUN issue is primarily down to my hosting provider: DigitalOcean does not currently support UDP on their load balancers.

Prerequisites

  • Kubernetes cluster
  • Terraform with cluster setup
  • Helm

I currently use DigitalOcean, which offers a managed Kubernetes control plane for just the cost of the compute nodes. However, most of the major cloud providers now have some managed Kubernetes or container service which should also work.
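For context, the managed cluster itself can also be defined in Terraform. A minimal sketch of a DigitalOcean cluster (the region, version and node size here are assumptions; check the current options for your account):

```hcl
resource "digitalocean_kubernetes_cluster" "main_cluster" {
  name    = "main-cluster"
  region  = "lon1"
  version = "1.21.2-do.0"

  node_pool {
    name       = "default-pool"
    size       = "s-2vcpu-4gb"
    node_count = 2
  }
}
```

The provider config below references this resource as `digitalocean_kubernetes_cluster.main_cluster`.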

Terraform config

First, let's set up Terraform with the Kubernetes and Helm providers. Note that I am using the DigitalOcean provider to fetch the Kubernetes config, as my cluster is already defined in Terraform.

providers.tf

```hcl
provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.main_cluster.endpoint
  token = digitalocean_kubernetes_cluster.main_cluster.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.main_cluster.kube_config[0].cluster_ca_certificate
  )
}

provider "helm" {
  kubernetes {
    host  = digitalocean_kubernetes_cluster.main_cluster.endpoint
    token = digitalocean_kubernetes_cluster.main_cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.main_cluster.kube_config[0].cluster_ca_certificate
    )
  }
}
```

Next, we will create a namespace and, within it, a Helm release to deploy the unifi Helm chart.

cluster.tf

```hcl
resource "kubernetes_namespace" "unifi" {
  metadata {
    name = "unifi"
  }
}

resource "helm_release" "unifi_controller" {
  name       = "unifi"
  repository = "https://k8s-at-home.com/charts"
  chart      = "unifi"
  # Latest version from 'helm search repo unifi'
  version    = "1.5.1"
  namespace  = kubernetes_namespace.unifi.metadata[0].name

  values = [
    file("config/unifi-controller.values.yaml")
  ]
}
```

Unifi Controller config

Finally, we want to set up the config for the unifi controller Helm chart.

There are a few caveats with this setup.

The discovery service will not work, and there is little point enabling it, as the controller will not be on the same LAN as the devices anyway. It may be possible to get it working by adding forwarding rules to your router that forward discovery traffic to the service, but I have not tested this. It is not too difficult to manually adopt new devices, and you can even use the Unifi app on the same network to help with this.
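As a reference, manual adoption over SSH looks something like this (the device IP is a placeholder, and `unifi.example.com` matches the ingress host used in the config below):

```
# SSH into the device (the default credentials are ubnt/ubnt before adoption)
ssh ubnt@192.168.1.20

# On the device, point it at the controller's inform endpoint
set-inform http://unifi.example.com:8080/inform
```

After running `set-inform`, the device should show up as pending adoption in the controller UI.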

The STUN service will also not work, which will cause some delay in device actions and make the Unifi UI show a warning; however, there should be no loss in functionality. Both the discovery service and the STUN service are UDP services, which my Kubernetes provider, DigitalOcean, currently does not support on their load balancers.

Lastly, the captive portal service will also not work. I have not spent much time looking into the issue, though, as I do not use the feature.

config/unifi-controller.values.yaml

```yaml
image:
  repository: jacobalberty/unifi
  tag: 6.2.25 # Version of unifi controller image
  pullPolicy: IfNotPresent

# Separate services
unifiedService:
  enabled: false

# UI service
guiService:
  type: ClusterIP
  port: 8443

# Captive portal service
captivePortalService:
  enabled: false

# Controller service
controllerService:
  type: ClusterIP
  port: 8080
  ingress:
    enabled: false

# STUN service
stunService:
  type: NodePort
  port: 3478 # udp
  # nodePort: 31478

# Discovery service
discoveryService:
  type: NodePort
  port: 10001 # udp
  # nodePort:

ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - unifi.example.com
  tls:
    - secretName: unifi-tls
      hosts:
        - unifi.example.com

persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 5Gi
```
info

The controller service is the service used by the unifi devices to actually communicate with the controller.

The default port for the controller service is 8080, and I ran into a few problems when changing it to use a custom domain on the standard ports 80/443 via ingress. To avoid this headache I bypass the ingress config, which would expose the service on ports 80/443, and instead have my cluster's nginx ingress controller expose it as a TCP service on the special port 8080 on each ingress node.

nginx-ingress.values.yaml

```yaml
# https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
# Format: <namespace/service:port>[:PROXY decode][:PROXY encode]
tcp:
  # I have the PROXY protocol enabled on DigitalOcean's load balancer, so I set
  # the decode flag before forwarding traffic to the service
  8080: "unifi/unifi-controller:controller:PROXY"
```
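For completeness, the PROXY protocol has to be enabled both on the DigitalOcean load balancer and in nginx itself. With the ingress-nginx Helm chart this can be sketched as follows (assuming you deploy ingress-nginx via Helm; the annotation comes from DigitalOcean's cloud controller and the config key from ingress-nginx):

```yaml
controller:
  # Tell nginx to decode the PROXY protocol header on incoming connections
  config:
    use-proxy-protocol: "true"
  service:
    annotations:
      # Tell the DigitalOcean load balancer to send the PROXY protocol header
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
```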
caution

It is actually possible to manually set the controller endpoint, including a custom port, on most devices, but it requires SSHing into each device. Unfortunately the Switch Flex Mini does not support SSH, and hence has to be adopted via the app or by the USG broadcasting the Unifi controller host, which does not support custom ports.

Conclusion

It is possible to run the Unifi controller in the cloud, though for most homes it probably makes more sense to get a Unifi Cloud Key or run the controller on a Raspberry Pi. Then again, if you are reading this, you are probably not the average user.

© 2020 Ryan Welch. All rights reserved.