Typhoon for Azure is alpha. For production, use AWS, Google Cloud, or bare-metal. As Azure matures, check errata for known shortcomings.

In this tutorial, we'll create a Kubernetes v1.16.1 cluster on Azure with Container Linux.

We'll declare a Kubernetes cluster using the Typhoon Terraform module. Then apply the changes to create a resource group, virtual network, subnets, security groups, controller availability set, worker scale set, load balancer, and TLS assets.

Controller hosts are provisioned to run an etcd-member peer and a kubelet service. Worker hosts run a kubelet service. Controller nodes run kube-apiserver, kube-scheduler, kube-controller-manager, and coredns, while kube-proxy and calico (or flannel) run on every node. A generated kubeconfig provides kubectl access to the cluster.

Requirements

  • Azure account
  • Azure DNS Zone (registered Domain Name or delegated subdomain)
  • Terraform v0.12.x and terraform-provider-ct installed locally

Terraform Setup

Install Terraform v0.12.x on your system.
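
One way to install on Linux (a sketch; the release filename below is an assumption for v0.12.9 on amd64, adjust for your platform and chosen v0.12.x release):

wget https://releases.hashicorp.com/terraform/0.12.9/terraform_0.12.9_linux_amd64.zip
unzip terraform_0.12.9_linux_amd64.zip
sudo mv terraform /usr/local/bin/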

$ terraform version
Terraform v0.12.9

Add the terraform-provider-ct plugin binary for your system to ~/.terraform.d/plugins/, noting the final name.

tar xzf terraform-provider-ct-v0.4.0-linux-amd64.tar.gz
mv terraform-provider-ct-v0.4.0-linux-amd64/terraform-provider-ct ~/.terraform.d/plugins/terraform-provider-ct_v0.4.0

Read concepts to learn about Terraform, modules, and organizing resources. Change to your infrastructure repository (e.g. infra).

cd infra/clusters

Provider

Install the Azure az command line tool to authenticate with Azure.

az login
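
Optionally confirm the CLI is authenticated against the subscription you intend to use:

az account show --output table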

Configure the Azure provider in a file.

provider "azurerm" {
  version = "1.35.0"
}

provider "ct" {
  version = "0.4.0"
}

Additional configuration options are described in the azurerm provider docs.

Cluster

Define a Kubernetes cluster using the module azure/container-linux/kubernetes.

module "ramius" {
  source = "git::https://github.com/poseidon/typhoon//azure/container-linux/kubernetes?ref=v1.16.1"

  # Azure
  cluster_name   = "ramius"
  region         = "centralus"
  dns_zone       = ""
  dns_zone_group = "example-group"

  # configuration
  ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
  asset_dir          = "/home/user/.secrets/clusters/ramius"

  # optional
  worker_count    = 2
  host_cidr       = ""
}

Reference the variables docs or the source.


Initial bootstrapping requires bootstrap.service be started on one controller node. Terraform uses ssh-agent to automate this step. Add your SSH private key to ssh-agent.

ssh-add ~/.ssh/id_rsa
ssh-add -L


Initialize the config directory if this is the first use with Terraform.

terraform init

Plan the resources to be created.

$ terraform plan
Plan: 86 to add, 0 to change, 0 to destroy.

Apply the changes to create the cluster.

$ terraform apply
...
Still creating... (6m50s elapsed)
Still creating... (7m0s elapsed)
Creation complete after 7m8s (ID: 3961816482286168143)

Apply complete! Resources: 86 added, 0 changed, 0 destroyed.

In 4-8 minutes, the Kubernetes cluster will be ready.


Install kubectl on your system. Use the generated kubeconfig credentials to access the Kubernetes cluster and list nodes.
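
If you need kubectl, one way to fetch a matching release on Linux (a sketch; the upstream download URL is an assumption and may change):

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.1/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl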

$ export KUBECONFIG=/home/user/.secrets/clusters/ramius/auth/kubeconfig
$ kubectl get nodes
NAME                  STATUS  ROLES   AGE  VERSION
ramius-controller-0   Ready   <none>  24m  v1.16.1
ramius-worker-000001  Ready   <none>  25m  v1.16.1
ramius-worker-000002  Ready   <none>  24m  v1.16.1

List the pods.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY  STATUS    RESTARTS  AGE
kube-system   coredns-7c6fbb4f4b-b6qzx                    1/1    Running   0         26m
kube-system   coredns-7c6fbb4f4b-j2k3d                    1/1    Running   0         26m
kube-system   flannel-bwf24                               2/2    Running   0         26m
kube-system   flannel-ks5qb                               2/2    Running   0         26m
kube-system   flannel-tq2wg                               2/2    Running   0         26m
kube-system   kube-apiserver-ramius-controller-0          1/1    Running   0         26m
kube-system   kube-controller-manager-ramius-controller-0 1/1    Running   0         26m
kube-system   kube-proxy-j4vpq                            1/1    Running   0         26m
kube-system   kube-proxy-jxr5d                            1/1    Running   0         26m
kube-system   kube-proxy-lbdw5                            1/1    Running   0         26m
kube-system   kube-scheduler-ramius-controller-0          1/1    Running   0         26m

Going Further

Learn about maintenance and addons.


On Container Linux clusters, install the CLUO addon to coordinate reboots and drains when nodes auto-update. Otherwise, updates may not be applied until the next reboot.
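
For example, assuming you have a local checkout of the Typhoon repository (whose addons/cluo directory holds the CLUO manifests), the addon can be applied with:

kubectl apply -R -f addons/cluo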

Variables

Check the variables.tf source.

Required

| Name | Description | Example |
|------|-------------|---------|
| cluster_name | Unique cluster name (prepended to dns_zone) | "ramius" |
| region | Azure region | "centralus" |
| dns_zone | Azure DNS zone | "" |
| dns_zone_group | Resource group where the Azure DNS zone resides | "global" |
| ssh_authorized_key | SSH public key for user 'core' | "ssh-rsa AAAAB3NZ..." |
| asset_dir | Absolute path to a directory where generated assets should be placed (contains secrets) | "/home/user/.secrets/clusters/ramius" |


Regions are shown in docs or with az account list-locations --output table.
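
For example, to print just the short region names accepted by the region variable:

az account list-locations --query "[].name" --output tsv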

DNS Zone

Clusters create a DNS A record ${cluster_name}.${dns_zone} to resolve a load balancer backed by controller instances. This FQDN is used by workers and kubectl to access the apiserver(s). In this example, the cluster's apiserver would be accessible at ramius.<your-dns-zone>.

You'll need a registered domain name or delegated subdomain on Azure DNS. You can set this up once and create many clusters with unique names.

# Azure resource group for DNS zone
resource "azurerm_resource_group" "global" {
  name     = "global"
  location = "centralus"
}

# DNS zone for clusters
resource "azurerm_dns_zone" "clusters" {
  resource_group_name = azurerm_resource_group.global.name

  name      = ""
  zone_type = "Public"
}

Reference the DNS zone with azurerm_dns_zone.clusters.name and its resource group with azurerm_resource_group.global.name.

If you have an existing domain name with a zone file elsewhere, just delegate a subdomain that can be managed on Azure DNS and update its nameserver records.
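
When delegating, you'll need the zone's assigned name servers. A hedged example using the resource group defined above (substitute your zone name):

az network dns zone show --resource-group global --name <your-zone-name> --query nameServers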

Optional

| Name | Description | Default | Example |
|------|-------------|---------|---------|
| controller_count | Number of controllers (i.e. masters) | 1 | 1 |
| worker_count | Number of workers | 1 | 3 |
| controller_type | Machine type for controllers | "Standard_B2s" | See below |
| worker_type | Machine type for workers | "Standard_DS1_v2" | See below |
| os_image | Channel for a Container Linux derivative | "coreos-stable" | coreos-stable, coreos-beta, coreos-alpha |
| disk_size | Size of the disk in GB | 40 | 100 |
| worker_priority | Set priority to Low to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | Regular | Low |
| controller_clc_snippets | Controller Container Linux Config snippets | [] | example |
| worker_clc_snippets | Worker Container Linux Config snippets | [] | example |
| networking | Choice of networking provider | "flannel" | "flannel" or "calico" |
| host_cidr | CIDR IPv4 range to assign to instances | "" | "" |
| pod_cidr | CIDR IPv4 range to assign to Kubernetes pods | "" | "" |
| service_cidr | CIDR IPv4 range to assign to Kubernetes services | "" | "" |
| worker_node_labels | List of initial worker node labels | [] | ["worker-pool=default"] |

Check the list of valid machine types and their specs. Use az vm list-skus to get the identifier.
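
For example, to list the SKUs offered in the region used above:

az vm list-skus --location centralus --output table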


Unlike AWS and GCP, Azure requires its virtual networks to have non-overlapping IPv4 CIDRs (yeah, go figure). Rather than every cluster using the same default range for instances, each Azure cluster's host_cidr must be chosen so it does not overlap with any other cluster's.


Do not choose a controller_type smaller than Standard_B2s. Smaller instances are not sufficient for running a controller.

Low Priority

Add worker_priority=Low to use Low Priority workers that run on Azure's surplus capacity at lower cost, but with the tradeoff that they can be deallocated at random. Low priority VMs are Azure's analog to AWS spot instances or GCP preemptible instances.