Template: Deploy a Kubernetes Cluster on Azure

A BigchainDB node can be run inside a Kubernetes cluster. This page describes one way to deploy a Kubernetes cluster on Azure.

Step 1: Get a Pay-As-You-Go Azure Subscription

Microsoft Azure has a Free Trial subscription (at the time of writing), but it’s too limited to run an advanced BigchainDB node. Sign up for a Pay-As-You-Go Azure subscription via the Azure website.

You may find that you have to sign up for a Free Trial subscription first. That’s okay: you can have many subscriptions.

Step 2: Create an SSH Key Pair

You’ll want an SSH key pair so you’ll be able to SSH to the virtual machines that you’ll deploy in the next step. (If you already have an SSH key pair, you could reuse it, but it’s probably a good idea to make a new SSH key pair for your Kubernetes VMs and nothing else.)

See the page about how to generate a key pair for SSH.
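As a sketch, one way to generate such a key pair with OpenSSH's ssh-keygen (the key name azure_k8s is made up; pick your own):

```shell
# Generate a 4096-bit RSA key pair for the Kubernetes VMs.
# "azure_k8s" is a hypothetical key name -- choose your own.
# -N "" sets an empty passphrase so the command is non-interactive;
# in practice you may prefer to set a passphrase.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -f ~/.ssh/azure_k8s -N "" -C "azure-k8s-vms"
```

This writes the private key to ~/.ssh/azure_k8s and the public key to ~/.ssh/azure_k8s.pub; the public key file is what you'll pass to --ssh-key-value later.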

Step 3: Deploy an Azure Container Service (ACS)

It’s possible to deploy an Azure Container Service (ACS) from the Azure Portal (i.e. online in your web browser) but it’s actually easier to do it using the Azure Command-Line Interface (CLI).

Microsoft has instructions to install the Azure CLI 2.0 on most common operating systems. Do that.

If you already have the Azure CLI installed, you may want to update it.
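To check which version you currently have installed:

```shell
# Print the installed Azure CLI version (and component versions).
az --version
```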

Warning

az component update isn’t supported if you installed the CLI using some of Microsoft’s provided installation instructions. See the Microsoft docs for update instructions.

Next, log in to your account using:


$ az login

It will prompt you to open a web page in your browser and enter a code there.

If the login is a success, you will see some information about all your subscriptions, including the one that is currently enabled ("state": "Enabled"). If the wrong one is enabled, you can switch to the right one using:

$ az account set --subscription <subscription name or ID>

Next, you will have to pick the Azure data center location where you’d like to deploy your cluster. You can get a list of all available locations using:

$ az account list-locations
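If the full JSON output is more than you need, you can narrow it down. For example, to show only the short location names in a compact table (--query and --output are standard Azure CLI flags):

```shell
# List just the location names, one per row.
az account list-locations --query "[].name" --output table
```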

Next, create an Azure “resource group” to contain all the resources (virtual machines, subnets, etc.) associated with your soon-to-be-deployed cluster. You can name it whatever you like but avoid fancy characters because they may confuse some software.

$ az group create --name <resource group name> --location <location name>

Example location names are koreacentral and westeurope.
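A concrete example, using a made-up resource group name:

```shell
# "bdb-cluster-rg" is a hypothetical name; substitute your own.
az group create --name bdb-cluster-rg --location westeurope
```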

Finally, you can deploy an ACS using something like:

$ az acs create --name <a made-up cluster name> \
--resource-group <name of resource group created earlier> \
--master-count 3 \
--agent-count 2 \
--admin-username ubuntu \
--agent-vm-size Standard_D2_v2 \
--dns-prefix <make up a name> \
--ssh-key-value ~/.ssh/<name>.pub \
--orchestrator-type kubernetes \
--debug --output json

Note

Refer to the Azure documentation for a comprehensive list of options available for az acs create. Tune the following parameters to suit your requirements:

  • Master count.
  • Agent count.
  • Agent VM size.
  • Optional: Master storage profile.
  • Optional: Agent storage profile.

There are more options. For help understanding all the options, use the built-in help:

$ az acs create --help

It takes a few minutes for all the resources to deploy. You can watch the progress in the Azure Portal: go to Resource groups (with the blue cube icon) and click on the one you created to see all the resources in it.
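Once the deployment finishes, you can also check the cluster from the command line. A sketch, assuming you have kubectl installed locally and using made-up names (az acs kubernetes get-credentials fetches the cluster credentials into your local kubeconfig):

```shell
# Download kubectl credentials for the new cluster
# (the resource group and cluster names are hypothetical).
az acs kubernetes get-credentials \
    --resource-group bdb-cluster-rg \
    --name bdb-acs-cluster

# All master and agent nodes should show up with status Ready.
kubectl get nodes
```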

Optional: SSH to Your New Kubernetes Cluster Nodes

You can SSH to one of the just-deployed Kubernetes “master” nodes (virtual machines) using:

$ ssh -i ~/.ssh/<name> ubuntu@<master-ip-address-or-fqdn>

where you can get the IP address or FQDN of a master node from the Azure Portal. For example:

$ ssh -i ~/.ssh/mykey123 ubuntu@mydnsprefix.westeurope.cloudapp.azure.com

Note

All the master nodes are reachable at the same public IP address and FQDN; the load balancer decides which master you actually connect to.

The “agent” nodes shouldn’t get public IP addresses or externally accessible FQDNs, so you can’t SSH to them directly, but you can SSH to a master first and then SSH from there to an agent using its hostname. To do that, you could copy your SSH key pair to the master (a bad idea), or use SSH agent forwarding (better). To do the latter, do the following on the machine you used to SSH to the master:

$ echo -e "Host <FQDN of the cluster from Azure Portal>\n  ForwardAgent yes" >> ~/.ssh/config
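Note that agent forwarding only forwards keys that are loaded into your local ssh-agent, so make sure your key is added before connecting (the key name azure_k8s is made up; use your own):

```shell
# Load the private key into the local ssh-agent.
ssh-add ~/.ssh/azure_k8s

# Confirm the key is loaded:
ssh-add -l
```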

To verify that SSH agent forwarding works properly, SSH to one of the master nodes and do:

$ echo "$SSH_AUTH_SOCK"

If you get an empty response, then SSH agent forwarding hasn’t been set up correctly. If you get a non-empty response, then SSH agent forwarding should work fine and you can SSH to one of the agent nodes (from a master) using:

$ ssh ubuntu@k8s-agent-4AC80E97-0

where k8s-agent-4AC80E97-0 is an example agent node name; replace it with the name of an agent node in your own cluster.

Optional: Delete the Kubernetes Cluster

$ az acs delete \
--name <ACS cluster name> \
--resource-group <name of resource group containing the cluster>

Optional: Delete the Resource Group

CAUTION: Deleting the resource group deletes everything in it, so you might end up deleting resources other than the ACS cluster.

$ az group delete \
--name <name of resource group containing the cluster>

Next, you can run a BigchainDB node on your new Kubernetes cluster.