Kubernetes Template: Deploy a Single BigchainDB Node¶
This page describes how to deploy the first BigchainDB node in a BigchainDB cluster, or a stand-alone BigchainDB node, using Kubernetes. It assumes you already have a running Kubernetes cluster.
If you want to add a new BigchainDB node to an existing BigchainDB cluster, refer to the page about that.
Below, we refer to many files by their directory and filename, such as configuration/config-map.yaml. Those files are in the k8s/ directory of the bigchaindb/bigchaindb repository on GitHub.
Make sure you’re getting those files from the appropriate Git branch on
GitHub, i.e. the branch for the version of BigchainDB that your BigchainDB
cluster is using.
Step 1: Install and Configure kubectl¶
kubectl is the Kubernetes CLI. If you don’t already have it installed, then see the Kubernetes docs to install it.
The default location of the kubectl configuration file is ~/.kube/config.
If you don’t have that file, then you need to get it.
Azure. If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0 (as per our template),
then you can get the ~/.kube/config
file using:
$ az acs kubernetes get-credentials \
--resource-group <name of resource group containing the cluster> \
--name <ACS cluster name>
If it asks for a password (to unlock the SSH key)
and you enter the correct password,
but you get an error message,
then try adding --ssh-key-file ~/.ssh/<name>
to the above command (i.e. the path to the private key).
Note
About kubectl contexts. You might manage several Kubernetes clusters. To make it easy to switch from one to another, kubectl has a notion of “contexts,” e.g. the context for cluster 1 or the context for cluster 2. To find out the current context, do:
$ kubectl config view
and then look for the current-context
in the output.
The output also lists all clusters, contexts and users.
(You might have only one of each.)
You can switch to a different context using:
$ kubectl config use-context <new-context-name>
You can also switch to a different context for just one command
by inserting --context <context-name>
into any kubectl command.
For example:
$ kubectl --context k8s-bdb-test-cluster-0 get pods
will get a list of the pods in the Kubernetes cluster associated with the context named k8s-bdb-test-cluster-0.
Step 2: Connect to Your Cluster’s Web UI (Optional)¶
You can connect to your cluster’s Kubernetes Dashboard (also called the Web UI) using:
$ kubectl proxy -p 8001
or
$ az acs kubernetes browse -g [Resource Group] -n [Container service instance name] --ssh-key-file /path/to/privateKey
or, if you prefer to be explicit about the context (explained above):
$ kubectl --context k8s-bdb-test-cluster-0 proxy -p 8001
The output should be something like Starting to serve on 127.0.0.1:8001.
That means you can visit the dashboard in your web browser at http://127.0.0.1:8001/ui.
Step 3: Configure Your BigchainDB Node¶
See the page titled How to Configure a BigchainDB Node.
Step 4: Start the NGINX Service¶
- This will give us a public IP for the cluster.
- Once you complete this step, you might need to wait up to 10 minutes for the public IP to be assigned.
- You have the option to use vanilla NGINX without HTTPS support, or NGINX with HTTPS support.
Step 4.1: Vanilla NGINX¶
- This configuration is located in the file nginx-http/nginx-http-svc.yaml.
- Set the metadata.name and metadata.labels.name to the value set in ngx-instance-name in the ConfigMap above.
- Set the spec.selector.app to the value set in ngx-instance-name in the ConfigMap, followed by -dep. For example, if the value set in ngx-instance-name is ngx-http-instance-0, set the spec.selector.app to ngx-http-instance-0-dep.
- Set ports[0].port and ports[0].targetPort to the value set in cluster-frontend-port in the ConfigMap above. This is the public-cluster-port in the file, which is the ingress into the cluster.
- Start the Kubernetes Service:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-svc.yaml
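For concreteness, here is a sketch of what nginx-http/nginx-http-svc.yaml ends up looking like after the edits above, assuming the example instance name ngx-http-instance-0; the port number 80 is illustrative and should match cluster-frontend-port in your ConfigMap:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ngx-http-instance-0
  labels:
    name: ngx-http-instance-0
spec:
  selector:
    app: ngx-http-instance-0-dep
  ports:
  - port: 80            # cluster-frontend-port from the ConfigMap (illustrative)
    targetPort: 80
    name: public-cluster-port
    protocol: TCP
  type: LoadBalancer    # requests a public IP from the cloud provider
```

The type: LoadBalancer is what causes a public IP to be assigned, as described at the start of this step.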
Step 4.2: NGINX with HTTPS¶
You have to enable HTTPS for this one, and you will need an HTTPS certificate for your domain.
- You should have already created the necessary Kubernetes Secrets in the previous step (i.e. https-certs).
- This configuration is located in the file nginx-https/nginx-https-svc.yaml.
- Set the metadata.name and metadata.labels.name to the value set in ngx-instance-name in the ConfigMap above.
- Set the spec.selector.app to the value set in ngx-instance-name in the ConfigMap, followed by -dep. For example, if the value set in ngx-instance-name is ngx-https-instance-0, set the spec.selector.app to ngx-https-instance-0-dep.
- Set ports[0].port and ports[0].targetPort to the value set in cluster-frontend-port in the ConfigMap above. This is the public-secure-cluster-port in the file, which is the ingress into the cluster.
- Set ports[1].port and ports[1].targetPort to the value set in mongodb-frontend-port in the ConfigMap above. This is the public-mdb-port in the file, which specifies where MongoDB is available.
- Start the Kubernetes Service:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-svc.yaml
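As a sketch, the HTTPS variant of the Service adds a second port entry for MongoDB. The instance name follows the example above; the port numbers 443 and 27017 are illustrative and should match cluster-frontend-port and mongodb-frontend-port in your ConfigMap:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ngx-https-instance-0
  labels:
    name: ngx-https-instance-0
spec:
  selector:
    app: ngx-https-instance-0-dep
  ports:
  - port: 443           # cluster-frontend-port (illustrative)
    targetPort: 443
    name: public-secure-cluster-port
    protocol: TCP
  - port: 27017         # mongodb-frontend-port (illustrative)
    targetPort: 27017
    name: public-mdb-port
    protocol: TCP
  type: LoadBalancer
```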
Step 5: Assign DNS Name to the NGINX Public IP¶
This step is required only if you are planning to set up multiple BigchainDB nodes or are using HTTPS certificates tied to a domain.
The following command can help you find out if the NGINX service started above has been assigned a public IP or external IP address:
$ kubectl --context k8s-bdb-test-cluster-0 get svc -w
Once a public IP is assigned, you can map it to a DNS name. We usually assign bdb-test-cluster-0, bdb-test-cluster-1 and so on in our documentation. Let’s assume that we assign the unique name bdb-test-cluster-0 here.
Set up DNS mapping in Azure.
Select the current Azure resource group and look for the Public IP
resource. You should see at least 2 entries there - one for the Kubernetes
master and the other for the NGINX instance. You may have to Refresh
the
Azure web page listing the resources in a resource group for the latest
changes to be reflected.
Select the Public IP resource that is attached to your service (it should
have the Azure DNS prefix name along with a long random string, without the
master-ip string), select Configuration, add the DNS name assigned above
(for example, bdb-test-cluster-0), click Save, and wait for the
changes to be applied.
To verify the DNS setting is operational, you can run nslookup <DNS
name added in Azure configuration>
from your local Linux shell.
This will ensure that when you scale the replica set later, other MongoDB members in the replica set can reach this instance.
Step 6: Start the MongoDB Kubernetes Service¶
- This configuration is located in the file mongodb/mongo-svc.yaml.
- Set the metadata.name and metadata.labels.name to the value set in mdb-instance-name in the ConfigMap above.
- Set the spec.selector.app to the value set in mdb-instance-name in the ConfigMap, followed by -ss. For example, if the value set in mdb-instance-name is mdb-instance-0, set the spec.selector.app to mdb-instance-0-ss.
- Set ports[0].port and ports[0].targetPort to the value set in mongodb-backend-port in the ConfigMap above. This is the mdb-port in the file, which specifies where MongoDB listens for API requests.
- Start the Kubernetes Service:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-svc.yaml
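Putting the values above together, mongodb/mongo-svc.yaml would look roughly like this (a sketch; 27017 is the conventional MongoDB port and should match mongodb-backend-port in your ConfigMap):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mdb-instance-0
  labels:
    name: mdb-instance-0
spec:
  selector:
    app: mdb-instance-0-ss
  ports:
  - port: 27017         # mongodb-backend-port (illustrative)
    targetPort: 27017
    name: mdb-port
    protocol: TCP
```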
Step 7: Start the BigchainDB Kubernetes Service¶
- This configuration is located in the file bigchaindb/bigchaindb-svc.yaml.
- Set the metadata.name and metadata.labels.name to the value set in bdb-instance-name in the ConfigMap above.
- Set the spec.selector.app to the value set in bdb-instance-name in the ConfigMap, followed by -dep. For example, if the value set in bdb-instance-name is bdb-instance-0, set the spec.selector.app to bdb-instance-0-dep.
- Set ports[0].port and ports[0].targetPort to the value set in bigchaindb-api-port in the ConfigMap above. This is the bdb-api-port in the file, which specifies where BigchainDB listens for HTTP API requests.
- Set ports[1].port and ports[1].targetPort to the value set in bigchaindb-ws-port in the ConfigMap above. This is the bdb-ws-port in the file, which specifies where BigchainDB listens for WebSocket connections.
Start the Kubernetes Service:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-svc.yaml
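As a sketch, bigchaindb/bigchaindb-svc.yaml with the example instance name would resemble the following; 9984 and 9985 are the port values used elsewhere in this guide for the HTTP API and WebSocket API:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bdb-instance-0
  labels:
    name: bdb-instance-0
spec:
  selector:
    app: bdb-instance-0-dep
  ports:
  - port: 9984          # bigchaindb-api-port
    targetPort: 9984
    name: bdb-api-port
    protocol: TCP
  - port: 9985          # bigchaindb-ws-port
    targetPort: 9985
    name: bdb-ws-port
    protocol: TCP
```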
Step 8: Start the OpenResty Kubernetes Service¶
- This configuration is located in the file nginx-openresty/nginx-openresty-svc.yaml.
- Set the metadata.name and metadata.labels.name to the value set in openresty-instance-name in the ConfigMap above.
- Set the spec.selector.app to the value set in openresty-instance-name in the ConfigMap, followed by -dep. For example, if the value set in openresty-instance-name is openresty-instance-0, set the spec.selector.app to openresty-instance-0-dep.
- Start the Kubernetes Service:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-svc.yaml
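A sketch of nginx-openresty/nginx-openresty-svc.yaml with the example instance name; the port number 80 is illustrative and should match openresty-backend-port in your ConfigMap:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openresty-instance-0
  labels:
    name: openresty-instance-0
spec:
  selector:
    app: openresty-instance-0-dep
  ports:
  - port: 80            # openresty-backend-port (illustrative)
    targetPort: 80
    name: openresty-svc-port
    protocol: TCP
```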
Step 9: Start the NGINX Kubernetes Deployment¶
- NGINX is used as a proxy to the OpenResty, BigchainDB and MongoDB instances in the node. It proxies HTTP/HTTPS requests on the cluster-frontend-port to the corresponding OpenResty or BigchainDB backend, and TCP connections on the mongodb-frontend-port to the MongoDB backend.
- As in Step 4, you have the option to use vanilla NGINX without HTTPS, or NGINX with HTTPS support.
Step 9.1: Vanilla NGINX¶
- This configuration is located in the file nginx-http/nginx-http-dep.yaml.
- Set the metadata.name and spec.template.metadata.labels.app to the value set in ngx-instance-name in the ConfigMap, followed by -dep. For example, if the value set in ngx-instance-name is ngx-http-instance-0, set the fields to ngx-http-instance-0-dep.
- Set the ports to be exposed from the pod in the spec.containers[0].ports section. We currently expose 3 ports: mongodb-frontend-port, cluster-frontend-port and cluster-health-check-port. Set them to the values specified in the ConfigMap.
- The configuration uses the following values set in the ConfigMap:
cluster-frontend-port
cluster-health-check-port
cluster-dns-server-ip
mongodb-frontend-port
ngx-mdb-instance-name
mongodb-backend-port
ngx-bdb-instance-name
bigchaindb-api-port
bigchaindb-ws-port
Start the Kubernetes Deployment:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-http/nginx-http-dep.yaml
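As a sketch, the three exposed ports described above would appear in the NGINX container spec roughly like this (the numeric values are illustrative; use the values from your ConfigMap):

```yaml
# Excerpt of spec.template.spec.containers[0] in nginx-http/nginx-http-dep.yaml (sketch)
ports:
- containerPort: 27017    # mongodb-frontend-port (illustrative)
  protocol: TCP
- containerPort: 80       # cluster-frontend-port (illustrative)
  protocol: TCP
- containerPort: 8888     # cluster-health-check-port (illustrative)
  protocol: TCP
```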
Step 9.2: NGINX with HTTPS¶
- This configuration is located in the file nginx-https/nginx-https-dep.yaml.
- Set the metadata.name and spec.template.metadata.labels.app to the value set in ngx-instance-name in the ConfigMap, followed by -dep. For example, if the value set in ngx-instance-name is ngx-https-instance-0, set the fields to ngx-https-instance-0-dep.
- Set the ports to be exposed from the pod in the spec.containers[0].ports section. We currently expose 3 ports: mongodb-frontend-port, cluster-frontend-port and cluster-health-check-port. Set them to the values specified in the ConfigMap.
- The configuration uses the following values set in the ConfigMap:
cluster-frontend-port
cluster-health-check-port
cluster-fqdn
cluster-dns-server-ip
mongodb-frontend-port
ngx-mdb-instance-name
mongodb-backend-port
openresty-backend-port
ngx-openresty-instance-name
ngx-bdb-instance-name
bigchaindb-api-port
bigchaindb-ws-port
- The configuration uses the following values set in the Secret:
https-certs
Start the Kubernetes Deployment:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-https/nginx-https-dep.yaml
Step 10: Create Kubernetes Storage Classes for MongoDB¶
MongoDB needs somewhere to store its data persistently, outside the container where MongoDB is running. Our MongoDB Docker container (based on the official MongoDB Docker container) exports two volume mounts with correct permissions from inside the container:
- The directory where the mongod instance stores its data: /data/db. There’s more explanation in the MongoDB docs about storage.dbpath.
- The directory where the mongod instance stores the metadata for a sharded cluster: /data/configdb/. There’s more explanation in the MongoDB docs about sharding.configDB.
Explaining how Kubernetes handles persistent volumes, and the associated terminology, is beyond the scope of this documentation; see the Kubernetes docs about persistent volumes.
The first thing to do is create the Kubernetes storage classes.
Set up Storage Classes in Azure.
First, you need an Azure storage account.
If you deployed your Kubernetes cluster on Azure
using the Azure CLI 2.0
(as per our template),
then the az acs create command already created a
storage account in the same location and resource group
as your Kubernetes cluster.
Both should have the same “storage account SKU”: Standard_LRS.
Standard storage is lower-cost and lower-performance.
It uses hard disk drives (HDD).
LRS means locally-redundant storage: three replicas
in the same data center.
Premium storage is higher-cost and higher-performance.
It uses solid state drives (SSD).
You can create a storage account
for Premium storage and associate it with your Azure resource group.
For future reference, the command to create a storage account is
az storage account create.
Note
Please refer to Azure documentation for the list of VMs that are supported by Premium Storage.
The Kubernetes template for the configuration of the Storage Class is located in the file mongodb/mongo-sc.yaml.
You may have to update the parameters.location field in the file to specify the Azure location you are using.
If you want to use a custom storage account with the Storage Class, you can also update parameters.storageAccount and provide the Azure storage account name.
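As a sketch, a storage class in mongodb/mongo-sc.yaml would look roughly like the following. The metadata.name and location values are illustrative, and the apiVersion may differ depending on your Kubernetes version; parameters.storageAccount is the optional field mentioned above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow-db                          # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Standard_LRS                  # or Premium_LRS for Premium storage
  location: westeurope                   # update to your Azure location
  # storageAccount: <your storage account name>   # optional, see above
```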
Create the required storage classes using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-sc.yaml
You can check if it worked using kubectl get storageclasses.
Step 11: Create Kubernetes Persistent Volume Claims¶
Next, you will create two PersistentVolumeClaim objects, mongo-db-claim and mongo-configdb-claim.
This configuration is located in the file mongodb/mongo-pvc.yaml.
Note how there’s no explicit mention of Azure, AWS or whatever.
ReadWriteOnce (RWO) means the volume can be mounted as read-write by a single Kubernetes node. (ReadWriteOnce is the only access mode supported by AzureDisk.)
storage: 20Gi means the volume has a size of 20 gibibytes.
You may want to update the spec.resources.requests.storage field in both claims to specify a different disk size.
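A sketch of one of the two claims (mongo-configdb-claim is analogous); older clusters referenced the storage class via the volume.beta.kubernetes.io/storage-class annotation shown here, while newer ones use the spec.storageClassName field. The class name is illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-db-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow-db   # illustrative class name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```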
Create the required Persistent Volume Claims using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-pvc.yaml
You can check its status using: kubectl get pvc -w
Initially, the status of persistent volume claims might be “Pending” but it should become “Bound” fairly quickly.
Note
The default Reclaim Policy for dynamically created persistent volumes is Delete, which means the PV and its associated Azure storage resource will be automatically deleted when the PVC or PV is deleted. To prevent this from happening, change the default reclaim policy of dynamically created PVs from Delete to Retain:
- Run the following command to list existing PVs:
$ kubectl --context k8s-bdb-test-cluster-0 get pv
- Run the following command to update a PV’s reclaim policy to Retain:
$ kubectl --context k8s-bdb-test-cluster-0 patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
For notes on recreating a persistent volume from a released Azure disk resource, consult the page about cluster troubleshooting.
Step 12: Start a Kubernetes StatefulSet for MongoDB¶
- This configuration is located in the file mongodb/mongo-ss.yaml.
- Set the spec.serviceName to the value set in mdb-instance-name in the ConfigMap. For example, if the value set in mdb-instance-name is mdb-instance-0, set the field to mdb-instance-0.
- Set metadata.name, spec.template.metadata.name and spec.template.metadata.labels.app to the value set in mdb-instance-name in the ConfigMap, followed by -ss. For example, if the value set in mdb-instance-name is mdb-instance-0, set the fields to the value mdb-instance-0-ss.
- Note how the MongoDB container uses the mongo-db-claim and the mongo-configdb-claim PersistentVolumeClaims for its /data/db and /data/configdb directories (mount paths).
- Note also that we use the pod’s securityContext.capabilities.add specification to add the FOWNER capability to the container. That is because the MongoDB container has the user mongodb, with uid 999, and the group mongodb, with gid 999. When this container runs on a host with a mounted disk, writes fail when there is no user with uid 999. To avoid this, we use the Docker feature --cap-add=FOWNER, which bypasses the uid and gid permission checks during writes and allows data to be persisted to disk. Refer to the Docker docs for details.
- As we gain more experience running MongoDB in testing and production, we will tweak the resources.limits.cpu and resources.limits.memory.
- Set the ports to be exposed from the pod in the spec.containers[0].ports section. We currently only expose the MongoDB backend port. Set it to the value specified for mongodb-backend-port in the ConfigMap.
- The configuration uses the following values set in the ConfigMap:
mdb-instance-name
mongodb-replicaset-name
mongodb-backend-port
- The configuration uses the following values set in the Secret:
mdb-certs
ca-auth
- Optional: You can change the value for STORAGE_ENGINE_CACHE_SIZE via the ConfigMap key storage-engine-cache-size. For more information regarding this configuration, please consult the MongoDB official documentation.
- Optional: If you are not using the Standard_D2_v2 virtual machines for Kubernetes agents as per the guide, please update the resources for mongo-ss. We suggest allocating memory for a MongoDB StatefulSet using the following scheme:
memory = (Total_Memory_Agent_VM_GB - 2GB)
STORAGE_ENGINE_CACHE_SIZE = memory / 2
- Create the MongoDB StatefulSet using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb/mongo-ss.yaml
It might take up to 10 minutes for the disks, specified in the Persistent Volume Claims above, to be created and attached to the pod. The UI might show that the pod has errored with the message “timeout expired waiting for volumes to attach/mount”. In that case, use the CLI below to check the status of the pod, instead of the UI. This happens due to a bug in Azure ACS.
$ kubectl --context k8s-bdb-test-cluster-0 get pods -w
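The FOWNER capability and the PersistentVolumeClaim mounts described in this step appear in the container spec roughly as follows (a sketch; the volume names and the port value are illustrative):

```yaml
# Excerpt of spec.template.spec.containers[0] in mongodb/mongo-ss.yaml (sketch)
- name: mongodb
  securityContext:
    capabilities:
      add:
      - FOWNER            # bypass uid/gid checks so mongodb (uid 999) can write
  ports:
  - containerPort: 27017  # mongodb-backend-port (illustrative)
  volumeMounts:
  - name: mdb-db          # bound to the mongo-db-claim PVC
    mountPath: /data/db
  - name: mdb-configdb    # bound to the mongo-configdb-claim PVC
    mountPath: /data/configdb
```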
Step 13: Configure Users and Access Control for MongoDB¶
In this step, you will create a user on MongoDB with authorization to create more users and assign roles to them. Note: You need to do this only when setting up the first MongoDB node of the cluster.
- Find out the name of your MongoDB pod by reading the output of the kubectl ... get pods command at the end of the last step. It should be something like mdb-instance-0-ss-0.
- Log in to the MongoDB pod using:
$ kubectl --context k8s-bdb-test-cluster-0 exec -it <name of your MongoDB pod> bash
- Open a mongo shell using the certificates already present at /etc/mongod/ssl/:
$ mongo --host localhost --port 27017 --verbose --ssl \
  --sslCAFile /etc/mongod/ca/ca.pem \
  --sslPEMKeyFile /etc/mongod/ssl/mdb-instance.pem
- Initialize the replica set using:
> rs.initiate( { _id : "bigchain-rs", members: [ { _id : 0, host : "<hostname>:27017" } ] } )
The hostname in this case will be the value set in mdb-instance-name in the ConfigMap. For example, if the value set in mdb-instance-name is mdb-instance-0, set the hostname above to the value mdb-instance-0.
- The instance should be voted the PRIMARY in the replica set (since this is the only instance in the replica set so far). This can be observed from the mongo shell prompt, which will read PRIMARY>.
- Create a user adminUser on the admin database with the authorization to create other users. This will only work the first time you log in to the mongo shell. For further details, see the localhost exception in MongoDB.
PRIMARY> use admin
PRIMARY> db.createUser( { user: "adminUser", pwd: "superstrongpassword", roles: [ { role: "userAdminAnyDatabase", db: "admin" }, { role: "clusterManager", db: "admin" } ] } )
- Exit and restart the mongo shell using the above command. Authenticate as the adminUser we created earlier:
PRIMARY> use admin
PRIMARY> db.auth("adminUser", "superstrongpassword")
db.auth() returns 0 when authentication is not successful, and 1 when successful.
- We need to specify the user name as seen in the certificate issued to the BigchainDB instance in order to authenticate correctly. Use the following openssl command to extract the user name from the certificate:
$ openssl x509 -in <path to the bigchaindb certificate> \
  -inform PEM -subject -nameopt RFC2253
You should see an output line that resembles:
subject= emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE
- The subject line states the complete user name we need to use for creating the user in the mongo shell, as follows:
PRIMARY> db.getSiblingDB("$external").runCommand( { createUser: 'emailAddress=dev@bigchaindb.com,CN=test-bdb-ssl,OU=BigchainDB-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE', writeConcern: { w: 'majority' , wtimeout: 5000 }, roles: [ { role: 'clusterAdmin', db: 'admin' }, { role: 'readWriteAnyDatabase', db: 'admin' } ] } )
- You can similarly create users for the MongoDB Monitoring Agent and the MongoDB Backup Agent. For example:
PRIMARY> db.getSiblingDB("$external").runCommand( { createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-mon-ssl,OU=MongoDB-Mon-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE', writeConcern: { w: 'majority' , wtimeout: 5000 }, roles: [ { role: 'clusterMonitor', db: 'admin' } ] } )
PRIMARY> db.getSiblingDB("$external").runCommand( { createUser: 'emailAddress=dev@bigchaindb.com,CN=test-mdb-bak-ssl,OU=MongoDB-Bak-Instance,O=BigchainDB GmbH,L=Berlin,ST=Berlin,C=DE', writeConcern: { w: 'majority' , wtimeout: 5000 }, roles: [ { role: 'backup', db: 'admin' } ] } )
Step 14: Start a Kubernetes Deployment for MongoDB Monitoring Agent¶
- This configuration is located in the file mongodb-monitoring-agent/mongo-mon-dep.yaml.
- Set metadata.name, spec.template.metadata.name and spec.template.metadata.labels.app to the value set in mdb-mon-instance-name in the ConfigMap, followed by -dep. For example, if the value set in mdb-mon-instance-name is mdb-mon-instance-0, set the fields to the value mdb-mon-instance-0-dep.
- The configuration uses the following values set in the Secret:
mdb-mon-certs
ca-auth
cloud-manager-credentials
- Start the Kubernetes Deployment using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-monitoring-agent/mongo-mon-dep.yaml
Step 15: Start a Kubernetes Deployment for MongoDB Backup Agent¶
- This configuration is located in the file mongodb-backup-agent/mongo-backup-dep.yaml.
- Set metadata.name, spec.template.metadata.name and spec.template.metadata.labels.app to the value set in mdb-bak-instance-name in the ConfigMap, followed by -dep. For example, if the value set in mdb-bak-instance-name is mdb-bak-instance-0, set the fields to the value mdb-bak-instance-0-dep.
- The configuration uses the following values set in the Secret:
mdb-bak-certs
ca-auth
cloud-manager-credentials
- Start the Kubernetes Deployment using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f mongodb-backup-agent/mongo-backup-dep.yaml
Step 16: Start a Kubernetes Deployment for BigchainDB¶
- This configuration is located in the file bigchaindb/bigchaindb-dep.yaml.
- Set metadata.name and spec.template.metadata.labels.app to the value set in bdb-instance-name in the ConfigMap, followed by -dep. For example, if the value set in bdb-instance-name is bdb-instance-0, set the fields to the value bdb-instance-0-dep.
- Set the value of BIGCHAINDB_KEYPAIR_PRIVATE (not base64-encoded). (In the future, we’d like to pull the BigchainDB private key from the Secret named bdb-private-key, but a Secret can only be mounted as a file, so BigchainDB Server would have to be modified to look for it in a file.)
- As we gain more experience running BigchainDB in testing and production, we will tweak the resources.limits values for CPU and memory, and as richer monitoring and probing becomes available in BigchainDB, we will tweak the livenessProbe and readinessProbe parameters.
- Set the ports to be exposed from the pod in the spec.containers[0].ports section. We currently expose 2 ports: bigchaindb-api-port and bigchaindb-ws-port. Set them to the values specified in the ConfigMap.
- The configuration uses the following values set in the ConfigMap:
mdb-instance-name
mongodb-backend-port
mongodb-replicaset-name
bigchaindb-database-name
bigchaindb-server-bind
bigchaindb-ws-interface
cluster-fqdn
bigchaindb-ws-port
cluster-frontend-port
bigchaindb-wsserver-advertised-scheme
bdb-public-key
bigchaindb-backlog-reassign-delay
bigchaindb-database-maxtries
bigchaindb-database-connection-timeout
bigchaindb-log-level
bdb-user
- The configuration uses the following values set in the Secret:
bdb-certs
ca-auth
- Create the BigchainDB Deployment using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f bigchaindb/bigchaindb-dep.yaml
You can check its status using the command kubectl get deployments -w
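The BIGCHAINDB_KEYPAIR_PRIVATE setting from this step is a plain environment variable on the container, roughly as sketched below; the key placeholder must be filled in by you, and the port numbers follow the API and WebSocket ports used elsewhere in this guide:

```yaml
# Excerpt of spec.template.spec.containers[0] in bigchaindb/bigchaindb-dep.yaml (sketch)
env:
- name: BIGCHAINDB_KEYPAIR_PRIVATE
  value: "<the BigchainDB private key, not base64-encoded>"
ports:
- containerPort: 9984    # bigchaindb-api-port
  protocol: TCP
- containerPort: 9985    # bigchaindb-ws-port
  protocol: TCP
```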
Step 17: Start a Kubernetes Deployment for OpenResty¶
- This configuration is located in the file nginx-openresty/nginx-openresty-dep.yaml.
- Set metadata.name and spec.template.metadata.labels.app to the value set in openresty-instance-name in the ConfigMap, followed by -dep. For example, if the value set in openresty-instance-name is openresty-instance-0, set the fields to the value openresty-instance-0-dep.
- Set the port to be exposed from the pod in the spec.containers[0].ports section. We currently expose the port at which OpenResty is listening for requests, openresty-backend-port in the above ConfigMap.
- The configuration uses the following values set in the Secret:
threescale-credentials
- The configuration uses the following values set in the ConfigMap:
cluster-dns-server-ip
openresty-backend-port
ngx-bdb-instance-name
bigchaindb-api-port
- Create the OpenResty Deployment using:
$ kubectl --context k8s-bdb-test-cluster-0 apply -f nginx-openresty/nginx-openresty-dep.yaml
You can check its status using the command kubectl get deployments -w
Step 18: Configure the MongoDB Cloud Manager¶
Refer to the documentation for details on how to configure the MongoDB Cloud Manager to enable monitoring and backup.
Step 19: Verify the BigchainDB Node Setup¶
Step 19.1: Testing Internally¶
To test the setup of your BigchainDB node, you could use a Docker container
that provides utilities like nslookup
, curl
and dig
.
For example, you could use a container based on our
bigchaindb/toolbox image.
(The corresponding
Dockerfile
is in the bigchaindb/bigchaindb
repository on GitHub.)
You can use it as below to get started immediately:
$ kubectl --context k8s-bdb-test-cluster-0 \
run -it toolbox \
--image bigchaindb/toolbox \
--image-pull-policy=Always \
--restart=Never --rm
It will drop you to the shell prompt.
To test the MongoDB instance:
$ nslookup mdb-instance-0
$ dig +noall +answer _mdb-port._tcp.mdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://mdb-instance-0:27017
The nslookup
command should output the configured IP address of the service
(in the cluster).
The dig
command should return the configured port numbers.
The curl
command tests the availability of the service.
To test the BigchainDB instance:
$ nslookup bdb-instance-0
$ dig +noall +answer _bdb-api-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _bdb-ws-port._tcp.bdb-instance-0.default.svc.cluster.local SRV
$ curl -X GET http://bdb-instance-0:9984
$ wsc -er ws://bdb-instance-0:9985/api/v1/streams/valid_transactions
To test the OpenResty instance:
$ nslookup openresty-instance-0
$ dig +noall +answer _openresty-svc-port._tcp.openresty-instance-0.default.svc.cluster.local SRV
To verify that the OpenResty instance forwards requests properly, send a POST
transaction to OpenResty at port 80 and check the response from the backend
BigchainDB instance.
To test the vanilla NGINX instance:
$ nslookup ngx-http-instance-0
$ dig +noall +answer _public-cluster-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-health-check-port._tcp.ngx-http-instance-0.default.svc.cluster.local SRV
$ wsc -er ws://ngx-http-instance-0/api/v1/streams/valid_transactions
$ curl -X GET http://ngx-http-instance-0:27017
The above curl command should result in the response
It looks like you are trying to access MongoDB over HTTP on the native driver port.
To test the NGINX instance with HTTPS and 3scale integration:
$ nslookup ngx-instance-0
$ dig +noall +answer _public-secure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-mdb-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ dig +noall +answer _public-insecure-cluster-port._tcp.ngx-instance-0.default.svc.cluster.local SRV
$ wsc -er wss://<cluster-fqdn>/api/v1/streams/valid_transactions
$ curl -X GET http://<cluster-fqdn>:27017
The above curl command should result in the response
It looks like you are trying to access MongoDB over HTTP on the native driver port.
Step 19.2: Testing Externally¶
Check the MongoDB monitoring and backup agent on the MongoDB Cloud Manager portal to verify they are working fine.
If you are using NGINX with HTTP support, accessing the URL
http://<DNS/IP of your exposed BigchainDB service endpoint>:cluster-frontend-port
in your browser should result in a JSON response that shows the BigchainDB
server version, among other things.
If you are using NGINX with HTTPS support, use https instead of http above.
Use the Python Driver to send some transactions to the BigchainDB node and verify that your node or cluster works as expected.