Configuring Nginx for SPA

When running a single-page application (SPA) served by a web server, you sometimes want to let the SPA use routing based on the path part of the URI rather than a fragment. (In Vue.js: createWebHistory vs. createWebHashHistory.) If that is the case, you have to configure the web server to serve the application regardless of the requested path, or at least in all cases where the requested path does not map to an existing file. I explored this using Nginx (more specifically the nginxinc/nginx-unprivileged:alpine Docker image), and here is what I found.
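
For reference, a history-mode router in Vue Router 4 looks roughly like this (a minimal sketch, not taken from any particular project; the route list is just an illustration):

import { createRouter, createWebHistory } from 'vue-router'

// createWebHistory gives clean path-based URLs (e.g. /students/5), which is
// exactly what requires the web server configuration described below.
// createWebHashHistory would instead keep the route in the fragment
// (/#/students/5) and needs no server-side support.
const router = createRouter({
  history: createWebHistory(),
  routes: [
    { path: '/', component: () => import('./views/Home.vue') },
  ],
})

export default router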

The original configuration file (/etc/nginx/nginx.conf) looked like this:

worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /tmp/nginx.pid;

events {
    worker_connections  1024;
}

http {
    proxy_temp_path /tmp/proxy_temp;
    client_body_temp_path /tmp/client_temp;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

On the last line, it includes all configuration files from /etc/nginx/conf.d, and in this case there was only one, default.conf:

server {
    listen       8080;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Most lines are commented out here, but there is a location / block, and that is the one we want to modify. Therefore, create a new configuration file without the include directive and with a modified location block, like this:

# Custom nginx configuration file.

worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /tmp/nginx.pid;

events {
    worker_connections  1024;
}

http {
    proxy_temp_path /tmp/proxy_temp;
    client_body_temp_path /tmp/client_temp;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    # Cannot include since we want our own server/location block.
    #include /etc/nginx/conf.d/*.conf;

    server {
        listen       8080;
        server_name  localhost;

        #access_log  /var/log/nginx/host.access.log  main;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            # if $uri file or $uri/ directory is not found, redirect internally to /index.html:
            try_files $uri $uri/ /index.html;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
}

The important line is try_files $uri $uri/ /index.html;, which makes requests for files or directories that are not found redirect internally to /index.html.
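
Once the finished image is running (see the Dockerfile step below) with port 8080 published to localhost, a quick way to verify the behavior is to request a path that does not exist on disk:

curl -i http://localhost:8080/some/client-side/route
# Expect HTTP 200 and the contents of index.html, not 404 Not Found.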

The last step is to modify the Dockerfile to copy the custom configuration file and replace the original one:

...
# Copy to runtime image
FROM nginxinc/nginx-unprivileged:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build dist /usr/share/nginx/html
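
For completeness, here is a sketch of what the full multi-stage Dockerfile could look like. The Node-based build stage is an assumption (it is elided above), and the paths and npm scripts may differ in your project:

# Build stage (assumed): build the SPA with Node and produce /src/dist.
FROM node:lts-alpine AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: replace the default nginx.conf and copy the built assets.
FROM nginxinc/nginx-unprivileged:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /src/dist /usr/share/nginx/html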

Using Private Azure DevOps NuGet Feeds in Docker Build

This was a tough one, which required a combination of answers on StackOverflow. When building in an Azure DevOps pipeline, you don’t have to worry about authentication for consuming or pushing to private NuGet feeds in the same Azure DevOps instance. But if you want to build inside a Docker container, it becomes an issue. You have to use a (personal) access token (PAT) and update the NuGet source in the Dockerfile:

ARG PAT
RUN dotnet nuget update source your-nuget-source-name --username "your-nuget-source-name" --password "$PAT" --valid-authentication-types basic --store-password-in-clear-text
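
For context, here is a sketch of how this could fit into a multi-stage build, with the source updated before dotnet restore (the image tag, project file and source name are placeholders, not from the original setup):

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
ARG PAT
WORKDIR /src
# NuGet.config declares the private feed; the update command injects the credentials.
COPY NuGet.config .
COPY YourProject.csproj .
RUN dotnet nuget update source your-nuget-source-name --username "your-nuget-source-name" --password "$PAT" --valid-authentication-types basic --store-password-in-clear-text
RUN dotnet restore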

Both --valid-authentication-types basic and --store-password-in-clear-text are necessary. I also had to modify NuGet.config. My feed has upstream sources enabled, so I had removed the nuget.org feed:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear/>
    <add key="my-nuget-source-name" value="https://..." />
  </packageSources>
</configuration>

But this didn’t work, so I had to change it to:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" /> 
    <add key="my-nuget-source-name" value="https://..." />
  </packageSources>
</configuration>

You can obtain a PAT by clicking your profile image in Azure DevOps and selecting Security. The token needs the Packaging scope with Read (or Read & write) permission. You can then pass the PAT argument to docker build:

docker build -t somename --build-arg PAT="your generated token" .

In an Azure DevOps build step, you can use $(System.AccessToken) instead of your personal one.
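
For example, a pipeline step along these lines does the trick (a sketch; the image name is a placeholder, and depending on your setup the token may need to be explicitly mapped):

steps:
- script: docker build -t somename --build-arg PAT="$(System.AccessToken)" .
  displayName: Build Docker image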

Docker for ASP.NET Core Step-by-step Part 5 – Kubernetes

In this last part of the series, I’m going to use Kubernetes to orchestrate the same solution as in part 4. Thanks to Kubernetes, it is easy to scale, and we get automatic restarts.

Unfortunately, I was not able to make SQL Server run with the Kubernetes implementation in Docker Desktop for Windows; something is off with its implementation of persistent volumes. Therefore, I used minikube instead. The downside of minikube is that it is considerably slower than Docker Desktop. I have also found that it requires a command prompt with elevated privileges.

Start a command shell as administrator and install minikube:

# choco install minikube -y

Then start it. By default, it assumes it is running with VirtualBox, so we have to use a parameter to tell it to use Hyper-V:

# minikube start --vm-driver=hyperv

It takes some time, but you should eventually see something like this:

* minikube v1.4.0 on Microsoft Windows 10 Enterprise 10.0.18362 Build 18362
* Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
* Starting existing hyperv VM for "minikube" ...
* Waiting for the host to be provisioned ...
* Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
* Relaunching Kubernetes using kubeadm ...
* Waiting for: apiserver proxy etcd scheduler controller dns
* Done! kubectl is now configured to use "minikube"

We begin by defining a persistent volume claim (PVC), which claims storage from a persistent volume, so that SQL Server can store its databases. The PVC will later be referenced from the pod template of the SQL deployment. Create a text file called e.g. pvc.yml with the following contents:

# Claim persistent storage from a persistent volume with matching storage class, access mode and space.
# Here, I leave out the storage class, which means the default one will be used.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: contosouniversity-sql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Although this file is in YAML just like the docker-compose file in the previous post, Kubernetes has a different API, a different resource model and different command-line tools. Every Kubernetes manifest file has a version, a kind (stating the resource kind to create), metadata (including a name) and a specification (spec). Here we want to create a PVC called contosouniversity-sql-pvc with 10 GiB of storage, and we want to both read and write. (“Once” means that only a single node can mount the volume for reading and writing.) Go ahead and create this resource with:

# kubectl apply -f pvc.yml

If we now ask Kubernetes about persistent volume claims, we get the following:

# kubectl get pvc
NAME                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
contosouniversity-sql-pvc   Bound    pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a   10Gi       RWO            standard       78s

We can see that the PVC is bound to a persistent volume called pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a. This was created for us, but we could have created one ourselves first and given it a better name.
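
A manually created, pre-named persistent volume could look roughly like this (a sketch; the name and host path are made up, and standard is minikube's default storage class as seen above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: contosouniversity-sql-pv
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/contosouniversity-sql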

We can get an existing resource in YAML format with the -o yaml option:

# kubectl get pv/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    hostPathProvisionerIdentity: 436643eb-e06a-11e9-ab7f-00155d05000e
    pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
  creationTimestamp: "2019-09-26T15:00:15Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a
  resourceVersion: "41010"
  selfLink: /api/v1/persistentvolumes/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a
  uid: 9b2c2470-a747-412e-b6aa-cecc2913b77e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: contosouniversity-sql-pvc
    namespace: default
    resourceVersion: "41003"
    uid: b9bef32b-b09f-4347-9d8f-e79ec353f04a
  hostPath:
    path: /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a
    type: ""
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Bound

One thing to note is hostPath.path which means that the volume is stored on the host at /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a. A good thing with minikube is that we can connect to the host with ssh:

# minikube ssh "ls -a /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a"
.  ..

As you see, it is empty at the time.

Now it is time to create a deployment. The Deployment instructs Kubernetes how to create and update instances of your applications. You can specify how many instances of each application you want to have. Each instance is a pod that consists of one (or sometimes more) docker container(s). If you have multiple pods, they can be distributed over multiple nodes. Create a file called e.g. sql.yml with the following contents:

# The deployment encapsulates a pod with SQL Server 2017
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contosouniversity-sql
  labels:
    app: contosouniversity
    tier: sql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contosouniversity
      tier: sql
  template:
    metadata:
      labels:
        app: contosouniversity
        tier: sql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - image: mcr.microsoft.com/mssql/server:2017-latest
        name: contosouniversity-sql
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        ports:
        - containerPort: 1433
          name: sql
        volumeMounts:
        - name: sql-persistent-storage
          mountPath: /var/opt/mssql
      volumes:
      - name: sql-persistent-storage
        persistentVolumeClaim:
          claimName: contosouniversity-sql-pvc

Some important notes on this file:

  • replicas: 1 means we want just one instance of this pod.
  • selector specifies what should go into this deployment, in this case stuff that matches labels app: contosouniversity and tier: sql.
  • template.spec.containers.image: We’re using a container image of Microsoft SQL Server as the template for this deployment.
  • env: We set a few environment variables. Two are simple name/value pairs, and the third (SA_PASSWORD) gets its value from a secret. We will come back to how to create this secret.
  • volumeMounts and volumes: Use our previously defined persistent volume claim and mount it at /var/opt/mssql, which is where SQL Server stores its data (and logs), inside the container.

Before we apply this file, we must create the secret. I will use an example password here. Then we apply the SQL deployment.

# kubectl create secret generic mssql --from-literal=SA_PASSWORD="Passw0rd!"
secret/mssql created

# kubectl apply -f sql.yml
deployment.apps/contosouniversity-sql created

We can now check how our deployment is doing. When it is ready, there should be one pod up and running:

# kubectl get deployments
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
contosouniversity-sql   1/1     1            1           108s

# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
contosouniversity-sql-6dc9cf4676-v6hqz   1/1     Running   0          117s

We can now once again get a shell on the host and list the files:

# minikube ssh "ls -l /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a"
total 12
drwxr-xr-x 2 root root 4096 Sep 27 14:35 data
drwxr-xr-x 2 root root 4096 Sep 27 14:41 log
drwxr-xr-x 2 root root 4096 Sep 27 14:34 secrets

# minikube ssh "ls -l /tmp/hostpath-provisioner/pvc-b9bef32b-b09f-4347-9d8f-e79ec353f04a/data"
total 53120
-rw-r----- 1 root root  4194304 Sep 27 14:43 master.mdf
-rw-r----- 1 root root  2097152 Sep 27 14:43 mastlog.ldf
-rw-r----- 1 root root  8388608 Sep 27 14:43 model.mdf
-rw-r----- 1 root root  8388608 Sep 27 14:43 modellog.ldf
-rw-r----- 1 root root 14024704 Sep 27 14:43 msdbdata.mdf
-rw-r----- 1 root root   524288 Sep 27 14:43 msdblog.ldf
-rw-r----- 1 root root  8388608 Sep 27 14:43 tempdb.mdf
-rw-r----- 1 root root  8388608 Sep 27 14:43 templog.ldf

To be able to use this deployment from our web application, we must create a service, which is a way to expose an application running on a set of pods as a network service. Create a file called sql-service.yml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: sql1
  labels:
     app: contosouniversity
spec:
  type: LoadBalancer
  ports:
  - port: 1433
    targetPort: 1433
  selector:
    app: contosouniversity
    tier: sql

Note that the name of the service (sql1) must match what we have in the web application’s connection string.
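
For reference, this is the connection string the web application uses in part 4 of this series:

Server=sql1;Database=CU-1;User Id=SA;Password=Passw0rd!;MultipleActiveResultSets=true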

# kubectl apply -f sql-service.yml
service/sql1 created

# kubectl get services
NAME   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
sql1   LoadBalancer   10.111.43.48   <pending>     1433:31710/TCP   35s

There is no point in waiting for an external IP – with minikube it is always going to be <pending>. Instead, we use a minikube command to get the address, which we can use as the server name for sqlcmd to query the database:

# minikube service sql1 --url
* http://192.168.238.61:31710

# sqlcmd -S 192.168.238.61,31710 -U sa -P Passw0rd! -Q "SELECT create_date, getdate() as now FROM sys.server_principals WHERE sid = 0x010100000000000512000000"
create_date             now
----------------------- -----------------------
2019-09-27 14:35:01.817 2019-09-27 14:56:19.200

(1 rows affected)

Note that sqlcmd expects a comma between IP address and port, not a colon.

Now it is time to create the web deployment and service so that we can reach it from the outside world. But this time, we don’t pull a ready-made image from a registry. Instead, we want to use our own web application image. Since we have switched context from Docker Desktop to minikube, we must build the image again, but before that we have to define a few environment variables. Depending on which shell you’re using, the syntax will be different, but if you just run minikube docker-env you will get instructions:

# minikube docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.238.61:2376
SET DOCKER_CERT_PATH=C:\Users\henrik.olsson\.minikube\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i

# @FOR /f "tokens=*" %i IN ('minikube docker-env') DO @%i

# docker build --tag contosouniversity-web .
Sending build context to Docker daemon  1.149MB
...
 ---> bdc51ca2306a
Successfully built bdc51ca2306a
Successfully tagged contosouniversity-web:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.

This will take a really long time. (Didn’t I say that minikube is slow?) While waiting, create a file called web.yml with the following contents:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: contosouniversity-web
  labels:
    app: contosouniversity
    tier: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: contosouniversity
      tier: web
  template:
    metadata:
      labels:
        app: contosouniversity
        tier: web
    spec:
      containers:
      - name: contosouniversity-web
        image: contosouniversity-web:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contosouniversity-web
spec:
  type: LoadBalancer
  selector:
    app: contosouniversity
    tier: web
  ports:
  - name: http
    port: 5000
    targetPort: 80

Here, the definitions of both the deployment and the service are in the same file, with three dashes (---) separating them. This time, we specify that we want to have two replicas running. We can now apply this and get the service address:

# kubectl apply -f web.yml                     
deployment.apps/contosouniversity-web created  
service/contosouniversity-web created          
                                               
# minikube service contosouniversity-web --url 
* http://192.168.238.61:32500                  
                                               
# start http://192.168.238.61:32500            

The last command will hopefully display the web application in your default browser. Click on a menu item to convince yourself that it works.

But what happens now if one of the instances goes away? Let’s try.

# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
contosouniversity-sql-6dc9cf4676-v6hqz   1/1     Running   0          49m
contosouniversity-web-5c89b5ff6b-qlb44   1/1     Running   0          5m10s
contosouniversity-web-5c89b5ff6b-v7qnx   1/1     Running   0          5m10s

# kubectl delete pod contosouniversity-web-5c89b5ff6b-qlb44
pod "contosouniversity-web-5c89b5ff6b-qlb44" deleted

# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
contosouniversity-sql-6dc9cf4676-v6hqz   1/1     Running   0          50m
contosouniversity-web-5c89b5ff6b-54bsk   1/1     Running   0          15s
contosouniversity-web-5c89b5ff6b-v7qnx   1/1     Running   0          6m10s

As you can see, when I deleted the contosouniversity-web-5c89b5ff6b-qlb44 pod, Kubernetes immediately started another one (contosouniversity-web-5c89b5ff6b-54bsk).
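
Scaling works just as smoothly. To go from two to three web replicas, for example, either edit replicas in web.yml and re-apply it, or run:

# kubectl scale deployment contosouniversity-web --replicas=3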

If you got this far in the post, it is time to clean up, removing stuff we have created.

# kubectl delete -f web.yml
deployment.apps "contosouniversity-web" deleted
service "contosouniversity-web" deleted

# kubectl delete -f sql-service.yml
service "sql1" deleted

# kubectl delete -f sql.yml
deployment.apps "contosouniversity-sql" deleted

# kubectl delete -f pvc.yml
persistentvolumeclaim "contosouniversity-sql-pvc" deleted

Stopping minikube is kind of special. I’ve found that minikube stop doesn’t work that well. Instead, ssh into the host and do a shutdown:

# minikube ssh "sudo shutdown 0"
Shutdown scheduled for Fri 2019-09-27 15:33:40 UTC, use 'shutdown -c' to cancel.

Docker for ASP.NET Core Step-by-step Part 4 – Docker Compose

In the previous parts of the series, I used just one Docker container running a basic web application. But in real life, we probably want to have a database to store data. In this part, I’m going to build a solution with two Docker containers, one with an ASP.NET Core web application and one with a SQL Server instance.

The first step is to get and test an SQL Server instance:

docker pull mcr.microsoft.com/mssql/server:2017-latest
docker run --env 'ACCEPT_EULA=Y' --env 'SA_PASSWORD=Passw0rd!' -p 1433:1433 --name sql1 --rm mcr.microsoft.com/mssql/server:2017-latest

Explanation of these commands:

  • docker pull downloads an image from Docker Hub, in this case the latest version of SQL Server 2017 for Ubuntu.
  • The docker run command runs this image in a container. The --env parameters set environment variables that the image needs when starting SQL Server for the first time. -p 1433:1433 maps port 1433 on the host to 1433 in the container. --name sql1 sets the name of the container to sql1 to make it simpler to refer to it later. --rm means remove the container when it is stopped.

To gain access to the terminal again, press Ctrl+C. (You can use the --detach parameter to avoid this.) Now try to access the server. If you have SQL Server tools installed locally, you can do:

sqlcmd -S localhost,1433 -U SA -Q "select @@servername, @@version"

This will print something similar to this:

3d29a94a8a7f                                                                                                                     Microsoft SQL Server 2017 (RTM-CU15) (KB4498951) - 14.0.3162.1 (X64)
        May 15 2019 19:14:30
        Copyright (C) 2017 Microsoft Corporation
        Developer Edition (64-bit) on Linux (Ubuntu 16.04.6 LTS)

As you can see, the server name is just a random hexadecimal string. If you do docker ps, you will see that it is actually equal to the container ID. To get a different server name, you could use:

docker stop sql1
docker run --env 'ACCEPT_EULA=Y' --env 'SA_PASSWORD=Passw0rd!' -p 1433:1433 --name sql1 --hostname sql1 --rm --detach mcr.microsoft.com/mssql/server:2017-latest

The next step is to have a web application that needs a database. As an example, we can take the ContosoUniversity sample from the ASP.NET Core data access tutorial.

This time, I’m going to cheat and let Visual Studio generate the Dockerfile. Just right-click the project and select Add and then Docker Support. This generated the following file:

FROM mcr.microsoft.com/dotnet/core/aspnet:2.1-stretch-slim AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:2.1-stretch AS build
WORKDIR /src
COPY ["ContosoUniversity.csproj", ""]
RUN dotnet restore "ContosoUniversity.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "ContosoUniversity.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "ContosoUniversity.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "ContosoUniversity.dll"]

Change the connection string in appsettings.json to Server=sql1;Database=CU-1;User Id=SA;Password=Passw0rd!;MultipleActiveResultSets=true.

Now, build the image with docker build --tag contosouniversity . and run it with docker run -it --rm -p 5000:80 contosouniversity:latest.

Unfortunately, that won’t work – you will get:

An error occurred creating the DB.
System.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)

That is because the containers have no network in common. We could fix this by creating a network and attaching both containers to it with the --network option:

docker stop sql1
docker network create my-net
docker run --env 'ACCEPT_EULA=Y' --env 'SA_PASSWORD=Passw0rd!' -p 1433:1433 --name sql1 --hostname sql1 --rm --detach --network=my-net mcr.microsoft.com/mssql/server:2017-latest
docker run -it --rm -p 5000:80 --network=my-net contosouniversity:latest

This should work!

But needing to type all these commands is a bit tiresome. This is where Docker Compose comes to the rescue. To use it, we create a YAML file that specifies what networks and services we want to have. Here is an example:

version: '3'
networks:
  container-net:
services:
  sql:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Passw0rd!
    hostname: sql1
    networks:
      container-net:
        aliases:
          - sql1
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:80"
    networks:
      container-net:

This instructs Docker to create a network called container-net and two services, sql, which is our SQL Server, and web, which is our web application. The parameters are fairly self-explanatory if you read the explanations of the docker run parameters above. There are a couple of interesting points to mention:

  • The SQL container is created from a ready-made image, while the web container is built (build:) with context . and Dockerfile as the docker file.
  • The network is needed for the containers to communicate. Only these two containers can communicate over this network (unless another container is attached with a docker network connect command, as shown below).
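
For example, attaching some other, already running container to the composed network (the prefixed network name is taken from the docker-compose output further down; the container name is a placeholder) would look like this:

docker network connect contosouniversity_container-net some-other-container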

To run all this, you can use the following command:

docker-compose -p ContosoUniversity up

This will build the web image, create the containers and network and start the containers. The -p parameter sets the project name, which is used to prefix images, containers and networks. If it is not specified, the current folder name is used as the project name (cu21 in this case). When you press Ctrl+C the containers will stop. You can detach here as well: docker-compose -p ContosoUniversity up --detach. If you issue another docker-compose up command, only what is needed to bring the system to the desired state (specified by the YAML file) is performed. In this case, that means starting the containers. You can test this by e.g. stopping one container and then upping again:

> docker stop contosouniversity_web_1
contosouniversity_web_1
> docker ps
CONTAINER ID        IMAGE                                        COMMAND                  CREATED             STATUS              PORTS                    NAMES
1300de122c58        mcr.microsoft.com/mssql/server:2017-latest   "/opt/mssql/bin/sqls…"   13 minutes ago      Up 3 minutes        0.0.0.0:1433->1433/tcp   contosouniversity_sql_1
> docker-compose -p ContosoUniversity up --detach
contosouniversity_sql_1 is up-to-date
Starting contosouniversity_web_1 ... done                                

You can stop and start all containers like this:

> docker-compose -p ContosoUniversity stop
Stopping contosouniversity_web_1 ... done
Stopping contosouniversity_sql_1 ... done

> docker-compose -p ContosoUniversity start
Starting sql ... done
Starting web ... done

To stop and remove everything, use docker-compose -p ContosoUniversity down.

However, there is still one problem. If we create a new student in the test application (http://localhost:5000/Students?pageIndex=3) and then use docker-compose down followed by docker-compose up, the record is gone. That is because the data is stored inside the container, and we just removed that container and re-created it. The solution is to use a Docker volume, which persists independently of any container. The only thing we have to do is modify the composition definition, adding a named volume at the top level and referencing it in the SQL service definition, like this:

version: '3.2'
networks:
  container-net:
services:
  sql:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Passw0rd!
    hostname: sql1
    networks:
      container-net:
        aliases:
          - sql1
    volumes:
      - type: volume
        source: mssql
        target: /var/opt/mssql
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:80"
    networks:
      container-net:
volumes:
    mssql:

Now, use these commands to prove that the volume is persistent:

> docker-compose -p ContosoUniversity up --detach
Creating network "contosouniversity_container-net" with the default driver
Creating volume "contosouniversity_mssql" with default driver
Creating contosouniversity_web_1 ... done
Creating contosouniversity_sql_1 ... done

> docker-compose -p ContosoUniversity down
Stopping contosouniversity_sql_1 ... done
Stopping contosouniversity_web_1 ... done
Removing contosouniversity_sql_1 ... done
Removing contosouniversity_web_1 ... done
Removing network contosouniversity_container-net
> docker volume ls
DRIVER              VOLUME NAME
local               contosouniversity_mssql
> docker run --rm --mount 'type=volume,src=contosouniversity_mssql,dst=/mssql' alpine ls -l /mssql
total 12
drwxr-xr-x    2 root     root          4096 Jun 19 15:27 data
drwxr-xr-x    2 root     root          4096 Jun 19 15:27 log
drwxr-xr-x    2 root     root          4096 Jun 19 15:27 secrets
> docker run --rm --mount 'type=volume,src=contosouniversity_mssql,dst=/mssql' alpine ls -l /mssql/log
total 1232
-rw-r-----    1 root     root         77824 Jun 19 15:27 HkEngineEventFile_0_132054316292630000.xel
-rw-r-----    1 root     root         11918 Jun 19 15:27 errorlog
-rw-r-----    1 root     root             0 Jun 19 15:27 errorlog.1
-rw-r-----    1 root     root       1048576 Jun 19 15:27 log.trc
-rw-r-----    1 root     root           156 Jun 19 15:27 sqlagentstartup.log
-rw-r-----    1 root     root        118784 Jun 19 15:27 system_health_0_132054316300250000.xel

The last two commands start an Alpine Linux container with the volume mounted and run ls -l in it.

Docker for ASP.NET Core Step-by-step Part 3

In part 3 of this series, we’re going to build the test application inside a Docker container (rather than in our development environment, as before) and try to reduce the size of the resulting image.

I’m using two tools here: the Mono linker (ILLink.Tasks) and Microsoft’s trimming tool (Microsoft.Packaging.Tools.Trimming). You can try reducing the size on your development machine first:

Set-Location $appname
Remove-Item published -Recurse -Force
# Latest versions:
# nuget list ILLink.Tasks -Source https://dotnet.myget.org/F/dotnet-core/api/v3/index.json -PreRelease
# nuget list Microsoft.Packaging.Tools.Trimming -PreRelease
dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
dotnet add package Microsoft.Packaging.Tools.Trimming -v 1.1.0-preview1-26619-01
dotnet publish -c Release --self-contained --runtime win10-x64 -o published /p:TrimUnusedDependencies=true /p:ShowLinkerSizeComparison=true

If you get the error NETSDK1016: Unable to find resolved path for 'coreclr', you have hit a bug that can be fixed by adding an element to your project file:

  <PropertyGroup>
    <TargetFramework>netcoreapp2.2</TargetFramework>
    <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
    <!-- Workaround for https://github.com/mono/linker/issues/314: -->
    <CrossGenDuringPublish>false</CrossGenDuringPublish>
  </PropertyGroup>

Try running the app. It will probably not start, because the Mono linker was too aggressive in removing stuff. You can fix this by inserting another entry in the project file:

<ItemGroup>
    <LinkerRootAssemblies Include="Microsoft.AspNetCore.Mvc.Razor.Extensions;Microsoft.Extensions.FileProviders.Composite;Microsoft.Extensions.Primitives;Microsoft.AspNetCore.Diagnostics.Abstractions" />
</ItemGroup> 

What has to be included here must be determined by trial and error.

For my sample web application, these two methods reduced the size from 100 MB to 50 MB. Note that adding the Mono linker adds considerable build time. Taking that and the above issues into consideration, you might want to skip that step, just keeping the Microsoft trimming tool. Note that this will be included automatically in .NET Core 3.0.
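
For reference, in .NET Core 3.0 the built-in trimming is enabled with the PublishTrimmed property in the project file (not used in this post, just a pointer):

  <PropertyGroup>
    <PublishTrimmed>true</PublishTrimmed>
  </PropertyGroup>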

Remove the two packages again from the project (but keep the workaround and the LinkerRootAssemblies element). Then, save the following docker file:

# We're building inside a Docker container, and that must have the SDK:
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
ARG appname
WORKDIR /workdir

# Restore in an intermediate layer.
# This layer only changes if we change the csproj, which means faster build in the normal case.
# Copy just the solution and project files.
COPY *.sln .
COPY $appname/*.csproj ./$appname/
# If we have a custom nuget.config, copy it as well.
# COPY nuget.config . 
RUN dotnet restore

# Copy everything else.
COPY $appname ./$appname/
WORKDIR $appname
# Add IL Linker package and Microsoft trimming tools. Latest versions:
# nuget list ILLink.Tasks -Source https://dotnet.myget.org/F/dotnet-core/api/v3/index.json -PreRelease
# nuget list Microsoft.Packaging.Tools.Trimming -PreRelease
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet add package Microsoft.Packaging.Tools.Trimming -v 1.1.0-preview1-26619-01
RUN dotnet publish -c Release --self-contained --runtime win10-x64 -o published /p:TrimUnusedDependencies=true /p:ShowLinkerSizeComparison=true

# Now, build the runtime container:
FROM mcr.microsoft.com/windows/nanoserver:1903 AS runtime
ARG appname
WORKDIR /app
COPY --from=build /workdir/$appname/published ./
ENV ASPNETCORE_URLS=http://+:80
ENTRYPOINT WebApplication1.exe

This time, we want to transfer the source code folder to the Docker engine. To avoid sending stuff like binaries, create a .dockerignore file with the following contents:

# directories
**/bin/
**/obj/
**/out/
**/published/

# files
Dockerfile*
**/*.trx
**/*.md
**/*.ps1
**/*.cmd
**/*.sh

Then use this command to build a new image:

docker build --file Dockerfile --tag $appname.ToLower() --build-arg appname=$appname .

Now, docker images reveals the size of the image to be 310 MB (using Windows containers). Nano server is 256 MB, so our application added just 54 MB, not so bad for a web application.

As before, run with

docker run -it --rm --name $appname $appname.ToLower()

And to test, open another terminal and use

docker exec WebApplication1 ipconfig

to find out the IP address to use for testing.

To use Linux instead, switch to Linux containers and change these lines in the docker file:

RUN dotnet publish -c Release --self-contained --runtime linux-musl-x64 -o published /p:TrimUnusedDependencies=true /p:ShowLinkerSizeComparison=true
...
FROM mcr.microsoft.com/dotnet/core/runtime-deps:2.2-alpine as runtime
...
ENTRYPOINT ["/app/WebApplication1"]

The resulting image is just 72 MB! As before, run with

docker run -it --rm -p 5000:80 --name $appname $appname.ToLower()

Docker for ASP.NET Core Step-by-step Part 2

In the previous part, I showed a very basic ASP.NET Core Docker example using a base image with the runtime installed. Here, I modify this to use a base image without the .NET Core runtime, instead packaging the necessary runtime components with the app.

First, make the application self-contained:

$appname="WebApplication1"
dotnet publish -c Release -o published --self-contained --runtime win10-x64 $appname\$appname.csproj

Modify the docker file to this (I saved it as Dockerfile.runtime.selfcontained):

FROM mcr.microsoft.com/windows/nanoserver:1903 AS runtime
WORKDIR /app
COPY * ./
ENV ASPNETCORE_URLS=http://+:80
ENTRYPOINT WebApplication1.exe

  • First, we don’t need a base image that contains the ASP.NET Core/.NET Core runtime, so we use “nanoserver” instead.
  • I don’t fully understand why we have to define the environment variable that tells ASP.NET to use port 80 in this case.
  • ENTRYPOINT has been changed to the built executable instead of dotnet.exe WebApplication1.dll.

Build the image:

docker build --file Dockerfile.runtime.selfcontained --tag $appname.ToLower() $appname\published

A docker images command reveals the size: 361 MB, not so much better than the previous 406 MB. But we will fix that in part 3 of this series.

You run it the same way as in part 1.

Docker for ASP.NET Core Step-by-step Part 1

Microsoft has a nice tutorial and samples for running ASP.NET Core in Docker, but they are a little too much “pre-baked” for my taste. Here, I try to show step-by-step how to build and run a docker image from scratch.

First, we must have a sample application. Use one you have or create a new one from the template. I will assume the latter, and that the project is in a sub-folder below the solution file. I will also assume that PowerShell is used as the terminal.
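
If you go the template route, something like this creates a matching layout (the webapp template gives a Razor Pages application; adjust to taste):

dotnet new sln
dotnet new webapp -o WebApplication1
dotnet sln add WebApplication1\WebApplication1.csproj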

Build and publish the project:

$appname="WebApplication1"
dotnet publish -c Release -o published $appname\$appname.csproj

Then, test that it works:

dotnet $appname\published\$appname.dll

Start a browser and go to http://localhost:5000. This should display a welcome page.

Now it is time to build a docker image that can be run inside a container. Add a dockerfile that instructs Docker how to build the image. Here is a sample:

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
WORKDIR /app
COPY * ./
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Explanation of these lines (copied largely from the Docker documentation):

  • The FROM instruction initializes a new build stage and sets the
    Base Image for subsequent instructions. As such, a valid
    Dockerfile must start with a FROM instruction. We’re using Microsoft’s base image with ASP.NET Core 2.2 runtime pre-installed.
  • The WORKDIR instruction sets the working directory for any
    RUN, CMD, ENTRYPOINT, COPY
    and ADD instructions that follow it in the Dockerfile.
    If the WORKDIR doesn’t exist, it will be created even if it’s not
    used in any subsequent Dockerfile instruction.
  • The COPY instruction copies new files or directories from
    <src> and adds them to the filesystem of the container at the
    path <dest>. Here we copy everything from the context (more on that later) to the working folder. The trailing “/” signals that the destination is a folder rather than a file.
  • An ENTRYPOINT allows you to configure a container that will run
    as an executable. We start dotnet with the web application DLL as argument. The reason we cannot use the appname variable here is that arguments do not work in ENTRYPOINT parameters. We could have used the following work-around:
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS runtime
ARG appname
ENV appname=$appname
WORKDIR /app
COPY * ./
ENTRYPOINT dotnet $appname.dll

But this works only with Linux containers. With Windows containers, the last line would be:

ENTRYPOINT dotnet %appname%.dll

Save this file e.g. as Dockerfile.runtime. Now, we can build an image with this command:

docker build --file Dockerfile.runtime --tag $appname.ToLower() $appname\published

This builds a docker image with a tag equal to our sample application name and the context set to the published folder. All files in the context are transferred to the docker engine that runs the command, so don’t use “/” as the context, as that would transfer your entire hard drive. (https://docs.docker.com/engine/reference/commandline/build/)

If using the longer docker file above, we have to send the application name as a parameter to build:

docker build --file Dockerfile.runtime --tag $appname.ToLower() --build-arg appname=$appname $appname\published

If the build succeeded, we should have a new image. Issue the following command to check:

docker images

This results in something similar to this (I’m using Windows containers):

REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
webapplication1                        latest              0f5f1a58f7fa        5 seconds ago       406MB
mcr.microsoft.com/dotnet/core/aspnet   2.2                 cd16942656bb        2 weeks ago         401MB

Now it is time to run. With Linux containers, you can just do this:

docker run -it --rm -p 5000:80 --name $appname $appname.ToLower()

The parameters mean:

  • -it: Allocate a pseudo-TTY and keep it open even if not attached. (Same effect as --interactive --tty.)
  • --rm: Automatically remove the container when it exits.
  • -p: Map port 5000 on the local machine to port 80 in the container.
  • --name: Name the container.
  • The last parameter is the image tag.

For Windows containers, you cannot use localhost and port mappings. To run, use this:

docker run -it --rm --name $appname $appname.ToLower()

To test, you need the IP of the container. The steps needed are (copied from https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/docker/building-net-docker-images?view=aspnetcore-2.2):

  • Open up another command prompt.
  • Run docker ps to see the running containers. Verify that the container is there.
  • Run docker exec WebApplication1 ipconfig to display the IP address of the container.
  • Paste the IP address into the browser address bar, e.g. http://172.24.50.120.

That’s it for part 1 of this series. In part 2, we will use a self-contained application, which means we don’t have to use a base image with the .NET Core runtime.