In this last part of the series, I’m going to use Kubernetes to orchestrate the same solution as in part 4. Thanks to Kubernetes, it is easy to scale, and we get automatic restarts.
Unfortunately, I was not able to make SQL Server run with the Kubernetes implementation in Docker Desktop for Windows; something is weird with its implementation of persistent volumes. Therefore, I used minikube instead. The downside of minikube is that it is considerably slower than Docker. I have also found that it requires a command prompt with elevated privileges.
Start a command shell as administrator and install minikube:
# choco install minikube -y
Then start it. By default, it assumes it is running with VirtualBox, so we have to use a parameter to tell it to use Hyper-V:
# minikube start --vm-driver=hyperv
It takes some time, but you should eventually see something like this:
* minikube v1.4.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing hyperv VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"
We begin by defining a persistent volume claim (PVC), which claims storage from a persistent volume, so that SQL Server can store its databases. The PVC will later be referenced from the template of the SQL deployment. Create a text file called e.g. pvc.yml with the following contents:
# Claim persistent storage from a persistent volume with matching storage class, access mode and space.
# Here, I leave out the storage class, which means the default one will be used.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: contosouniversity-sql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Although this file is YAML just like the Docker Compose file in the previous post, Kubernetes has a different API, a different resource model and different command-line tools. Every Kubernetes manifest file has a version (apiVersion), a kind (stating the kind of resource to create), metadata (including a name) and a specification (spec). Here we want to create a PVC called contosouniversity-sql-pvc with 10 GiB of storage, and we want to both read and write. (“Once” means that only one node can mount the volume for reading and writing.) Go ahead and create this resource with:
# kubectl apply -f pvc.yml
If we now ask Kubernetes about persistent volume claims, we get the following:
# kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
contosouniversity-sql-pvc   Bound    pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a   10Gi       RWO            standard       78s
We can see that the PVC is bound to a persistent volume called pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a. This volume was created for us, but we could have created one ourselves first and given it a better name.
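For illustration, a manually created persistent volume could look something like the sketch below. The name, the hostPath and the Retain policy are made-up example values, and hostPath volumes really only make sense on a single-node cluster like minikube; the storage class "standard" is minikube's default:

```yaml
# A hand-made persistent volume (sketch; name and path are examples only).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: contosouniversity-sql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  # Retain keeps the data when the claim is deleted, unlike the
  # Delete policy that the dynamic provisioner uses.
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/contosouniversity-sql
```

A PVC with a matching storage class, access mode and size could then bind to this volume instead of triggering dynamic provisioning.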
We can get an existing resource in YAML format with the -o yaml option:
# kubectl get pv/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    hostPathProvisionerIdentity: 436643eb-e06a-11e9-ab7f-00155d05000e
    pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
  creationTimestamp: "2019-09-26T15:00:15Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a
  resourceVersion: "41010"
  selfLink: /api/v1/persistentvolumes/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a
  uid: 9b2c2470-a747-412e-b6aa-cecc2913b77e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: contosouniversity-sql-pvc
    namespace: default
    resourceVersion: "41003"
    uid: b9bef32b-b09f-4347-9d8f-e79ec353f04a
  hostPath:
    path: /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a
    type: ""
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound
One thing to note is hostPath.path, which tells us that the volume is stored on the host at /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a. A nice thing about minikube is that we can connect to the host with SSH:
# minikube ssh "ls -a /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a"
. ..
As you can see, it is empty at the moment.
Now it is time to create a deployment. A deployment instructs Kubernetes how to create and update instances of your application, and lets you specify how many instances you want. Each instance is a pod that consists of one (or sometimes more) Docker containers. If you have multiple pods, they can be distributed over multiple nodes. Create a file called e.g. sql.yml with the following contents:
# The deployment encapsulates a pod with SQL Server 2017
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contosouniversity-sql
  labels:
    app: contosouniversity
    tier: sql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contosouniversity
      tier: sql
  template:
    metadata:
      labels:
        app: contosouniversity
        tier: sql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - image: mcr.microsoft.com/mssql/server:2017-latest
        name: contosouniversity-sql
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        ports:
        - containerPort: 1433
          name: sql
        volumeMounts:
        - name: sql-persistent-storage
          mountPath: /var/opt/mssql
      volumes:
      - name: sql-persistent-storage
        persistentVolumeClaim:
          claimName: contosouniversity-sql-pvc
Some important notes on this file:
- replicas: 1 means we want just one instance of this pod.
- selector specifies what belongs to this deployment; in this case, anything that matches the labels app: contosouniversity and tier: sql.
- template.spec.containers.image: we use a Microsoft SQL Server container image for the pods of this deployment.
- env: we set a few environment variables. Two are simple name/value pairs, and one gets its value from a secret; we will come back to how to set this.
- volumeMounts and volumes: we use our previously defined persistent volume claim and mount it at /var/opt/mssql, which is where SQL Server stores its data (and logs) inside the container.
Before we apply this file, we must create the secret. I will use an example password here. Then we apply the SQL deployment.
# kubectl create secret generic mssql --from-literal=SA_PASSWORD="Passw0rd!"
secret/mssql created
# kubectl apply -f sql.yml
deployment.apps/contosouniversity-sql created
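As a side note, the mssql secret could also have been created declaratively instead of with kubectl create secret. A sketch of an equivalent manifest, using the stringData field so we don’t have to base64-encode the value ourselves:

```yaml
# Declarative equivalent of the "kubectl create secret generic mssql" command (sketch).
# Checking a plain-text password into source control like this is of course
# not something you would do outside a demo.
apiVersion: v1
kind: Secret
metadata:
  name: mssql
type: Opaque
stringData:
  SA_PASSWORD: "Passw0rd!"
```

Kubernetes converts stringData entries to base64-encoded data entries on the server side, so the result is the same as with the imperative command.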
We can now check how our deployment is doing. When it is ready, there should be one pod up and running:
# kubectl get deployments
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
contosouniversity-sql   1/1     1            1           108s
# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
contosouniversity-sql-6dc9cf4676-v6hqz   1/1     Running   0          117s
We can now once again get a shell on the host and list the files:
# minikube ssh "ls -l /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a"
total 12
drwxr-xr-x 2 root root 4096 Sep 27 14:35 data
drwxr-xr-x 2 root root 4096 Sep 27 14:41 log
drwxr-xr-x 2 root root 4096 Sep 27 14:34 secrets
# minikube ssh "ls -l /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a/data"
total 53120
-rw-r----- 1 root root 4194304 Sep 27 14:43 master.mdf
-rw-r----- 1 root root 2097152 Sep 27 14:43 mastlog.ldf
-rw-r----- 1 root root 8388608 Sep 27 14:43 model.mdf
-rw-r----- 1 root root 8388608 Sep 27 14:43 modellog.ldf
-rw-r----- 1 root root 14024704 Sep 27 14:43 msdbdata.mdf
-rw-r----- 1 root root 524288 Sep 27 14:43 msdblog.ldf
-rw-r----- 1 root root 8388608 Sep 27 14:43 tempdb.mdf
-rw-r----- 1 root root 8388608 Sep 27 14:43 templog.ldf
To be able to use this deployment from our web application, we must create a service, which is a way to expose an application running on a set of pods as a network service. Create a file called sql-service.yml with the following contents:
apiVersion: v1
kind: Service
metadata:
  name: sql1
  labels:
    app: contosouniversity
spec:
  type: LoadBalancer
  ports:
  - port: 1433
    targetPort: 1433
  selector:
    app: contosouniversity
    tier: sql
Note that the name of the service (sql1) must match what we have in the web application’s connection string.
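The actual connection string lives in the web application from part 4; purely as an illustration of why the name matters, it would be something along these lines (the database name and exact format here are assumptions, only the server name sql1 is given):

```
Server=sql1;Database=ContosoUniversity;User Id=sa;Password=Passw0rd!
```

Inside the cluster, Kubernetes DNS resolves the service name sql1 to the service’s cluster IP, so the web pods can reach SQL Server by that name.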
# kubectl apply -f sql-service.yml
service/sql1 created
# kubectl get services
NAME   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
sql1   LoadBalancer   10.111.43.48   <pending>     1433:31710/TCP   35s
There is no point in waiting for an external IP – with minikube it is always going to be <pending>. Instead, we use a minikube command to get the address, which we can use as the server name for sqlcmd to query the database:
# minikube service sql1 --url
* http://192.168.238.61:31710
# sqlcmd -S 192.168.238.61,31710 -U sa -P Passw0rd! -Q "SELECT create_date, getdate() as now FROM sys.server_principals WHERE sid = 0x010100000000000512000000"
create_date now
----------------------- -----------------------
2019-09-27 14:35:01.817 2019-09-27 14:56:19.200
(1 rows affected)
Note that sqlcmd expects a comma between the IP address and the port, not a colon.
Now it is time to create the web deployment and service so that we can reach the application from the outside world. This time, we don’t pull a ready-made image from a registry; instead, we want to use our own web application image. Since we have switched context from Docker Desktop to minikube, we must build the image again, but before that we have to set a few environment variables. The syntax for that depends on which shell you’re using, but if you just run minikube docker-env you will get instructions:
# minikube docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.238.61:2376
SET DOCKER_CERT_PATH=C:\Users\henrik.olsson\.minikube\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i
# @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i
# docker build --tag contosouniversity-web .
Sending build context to Docker daemon 1.149MB
...
---> bdc51ca2306a
Successfully built bdc51ca2306a
Successfully tagged contosouniversity-web:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
This will take a really long time. (Didn’t I say that minikube is slow?) While waiting, create a file called web.yml with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contosouniversity-web
  labels:
    app: contosouniversity
    tier: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: contosouniversity
      tier: web
  template:
    metadata:
      labels:
        app: contosouniversity
        tier: web
    spec:
      containers:
      - name: contosouniversity-web
        image: contosouniversity-web:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contosouniversity-web
spec:
  type: LoadBalancer
  selector:
    app: contosouniversity
    tier: web
  ports:
  - name: http
    port: 5000
    targetPort: 80
Here, the definitions of both the deployment and the service are in the same file, with three dashes (---) separating them. This time, we specify that we want two replicas running. We can now apply this and get the service address:
# kubectl apply -f web.yml
deployment.apps/contosouniversity-web created
service/contosouniversity-web created
# minikube service contosouniversity-web --url
* http://192.168.238.61:32500
# start http://192.168.238.61:32500
The last command will hopefully display the web application in your default browser. Click on a menu item to convince yourself that it works.
But what happens now if one of the instances goes away? Let’s try.
# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
contosouniversity-sql-6dc9cf4676-v6hqz   1/1     Running   0          49m
contosouniversity-web-5c89b5ff6b-qlb44   1/1     Running   0          5m10s
contosouniversity-web-5c89b5ff6b-v7qnx   1/1     Running   0          5m10s
# kubectl delete pod contosouniversity-web-5c89b5ff6b-qlb44
pod "contosouniversity-web-5c89b5ff6b-qlb44" deleted
# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
contosouniversity-sql-6dc9cf4676-v6hqz   1/1     Running   0          50m
contosouniversity-web-5c89b5ff6b-54bsk   1/1     Running   0          15s
contosouniversity-web-5c89b5ff6b-v7qnx   1/1     Running   0          6m10s
As you can see, when I deleted the contosouniversity-web-5c89b5ff6b-qlb44 pod, Kubernetes immediately started a new one (contosouniversity-web-5c89b5ff6b-54bsk).
If you got this far in the post, it is time to clean up and remove the resources we have created.
# kubectl delete -f web.yml
deployment.apps "contosouniversity-web" deleted
service "contosouniversity-web" deleted
# kubectl delete -f sql-service.yml
service "sql1" deleted
# kubectl delete -f sql.yml
deployment.apps "contosouniversity-sql" deleted
# kubectl delete -f pvc.yml
persistentvolumeclaim "contosouniversity-sql-pvc" deleted
Stopping minikube is kind of special. I’ve found that minikube stop doesn’t work that well. Instead, SSH into the host and do a shutdown:
# minikube ssh "sudo shutdown 0"
Shutdown scheduled for Fri 2019-09-27 15:33:40 UTC, use 'shutdown -c' to cancel.