Configuring Nginx for SPA

When running a single page application (SPA) served by a web server, you sometimes want the SPA to use routing based on the path part of the URI rather than a fragment. (In Vue.js: createWebHistory vs. createWebHashHistory.) If that is the case, you have to configure the web server to serve the application regardless of the requested path, or at least in all cases where the requested path does not map to a file. I explored this using Nginx (more specifically the nginxinc/nginx-unprivileged:alpine Docker image), and here is what I found.

The original configuration file (/etc/nginx/nginx.conf) looked like this:

worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /tmp/nginx.pid;

events {
    worker_connections  1024;
}

http {
    proxy_temp_path /tmp/proxy_temp;
    client_body_temp_path /tmp/client_temp;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

The include directive at the end of the http block pulls in all configuration files from /etc/nginx/conf.d, and in this case there was only one, default.conf:

server {
    listen       8080;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Most lines here are commented out, but the location / block is active, and that is the one we want to modify. Therefore, create a new configuration file without the include directive and add a try_files directive to the location block like this:

# Custom nginx configuration file.

worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /tmp/nginx.pid;

events {
    worker_connections  1024;
}

http {
    proxy_temp_path /tmp/proxy_temp;
    client_body_temp_path /tmp/client_temp;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    # Cannot include since we want our own server/location block.
    #include /etc/nginx/conf.d/*.conf;

    server {
        listen       8080;
        server_name  localhost;

        #access_log  /var/log/nginx/host.access.log  main;

        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
            # if $uri file or $uri/ directory is not found, redirect internally to /index.html:
            try_files $uri $uri/ /index.html;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
}

The important line is try_files $uri $uri/ /index.html;, which makes Nginx redirect internally to /index.html whenever the requested file or directory is not found.

The last step is to modify the Dockerfile to copy the custom configuration file and replace the original one:

...
# Copy to runtime image
FROM nginxinc/nginx-unprivileged:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build dist /usr/share/nginx/html
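For completeness, a full Dockerfile could look something like the following sketch, assuming a Node-based build stage (the stage name, base image and paths are hypothetical and need to be adjusted to your toolchain):

```dockerfile
# Build stage (hypothetical; adjust to your build toolchain)
FROM node:lts-alpine AS build
WORKDIR /build
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Copy to runtime image
FROM nginxinc/nginx-unprivileged:alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /build/dist /usr/share/nginx/html
```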

RabbitMQ with MassTransit and multiple consumers

MassTransit seems to be an excellent .NET library for using queues and message buses. The same library can be used with Azure Service Bus, RabbitMQ and Amazon SQS, which simplifies working with multiple products or switching from one to another. Getting started with MassTransit and RabbitMQ is quite easy using their tutorial. But the tutorial demonstrates a single consumer, while in the real world you probably have multiple. We need to be able to group these so that each group receives a copy of every message, but only one instance within each group processes it. An example of groups could be feature/pull request branches of an application, where multiple Kubernetes pods are running for each branch. To accomplish that, configure a unique queue name using ReceiveEndpoint for each group. Example:

services.AddMassTransit(x =>
{
    x.AddConsumer<MessageConsumer>();

    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host(...);
        cfg.ReceiveEndpoint(queueName, e =>
        {
            e.ConfigureConsumer<MessageConsumer>(context);
        });
        cfg.ConfigureEndpoints(context);
    });
});

Export a List of Apps from an Android Device

This method assumes a fair bit of computer knowledge, as it involves command-line tools aimed at developers.

  • Download Android SDK platform tools from the download page and unpack them, or use your package manager.
  • Enable developer settings on your device, specifically USB debugging.
  • Connect your device to your computer using USB.
  • Type these commands from a terminal (assuming your current directory is where you unpacked the download) to list the connected device (to check that the connection works) and list (in this case) enabled packages:
.\adb.exe devices -l
.\adb.exe shell pm list packages -e
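Each line of the package list comes back with a package: prefix. A small shell sketch (for macOS/Linux; the output file name is arbitrary) for turning that into a clean, sorted list:

```shell
# Strip the "package:" prefix that `pm list packages` adds, and sort the result.
strip_package_prefix() {
  sed 's/^package://' | sort
}

# With a device connected:
#   ./adb shell pm list packages -e | strip_package_prefix > packages.txt
# Demonstrated here on sample output:
printf 'package:org.example.two\npackage:org.example.one\n' | strip_package_prefix
# prints org.example.one, then org.example.two
```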

Entity Framework Core and Nullable

There is one thing that has been bothering me for some time: how to combine an Entity Framework Core database model with enabled nullable reference type checks. If you set up the standard EF tutorial model like this:

using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations.Schema;

namespace ModelTest;

[Table("Blog")]
public class Blog
{
    public string Id { get; set; }
    public string Name { get; set; }
    public virtual Uri SiteUri { get; set; }

    public ICollection<Post> Posts { get; }
}

[Table("Post")]
public class Post
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTimeOffset PublishedOn { get; set; }
    public bool Archived { get; set; }

    public string BlogId { get; set; }
    public Blog Blog { get; set; }
}

internal class MyContext : DbContext
{
    public MyContext(DbContextOptions<MyContext> options) : base(options)
    { }

    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Blog>()
            .HasMany(blog => blog.Posts)
            .WithOne(post => post.Blog)
            .HasForeignKey(post => post.BlogId)
            .HasPrincipalKey(blog => blog.Id);
    }
}

And a simple ASP.NET Core Web API like this:

using Microsoft.EntityFrameworkCore;
using ModelTest;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddDbContext<MyContext>(opt => opt.UseSqlServer("Data Source=(localdb)\\MSSQLLocalDB;Initial Catalog=ModelTest;Integrated Security=True;"));

var app = builder.Build();

// Configure the HTTP request pipeline.
app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/blogs", async (MyContext dbContext) =>
    await dbContext.Blogs.ToListAsync());
app.MapGet("/blogs/{id}", async (MyContext dbContext, string id) =>
    await dbContext.Blogs.SingleOrDefaultAsync(blog => blog.Id == id));
app.MapGet("/blogs/{id}/posts", async (MyContext dbContext, string id) =>
    await dbContext.Posts.Where(post => post.BlogId == id).ToListAsync());
app.MapGet("/posts/{id}", async (MyContext dbContext, string id) =>
    await dbContext.Posts.SingleOrDefaultAsync(post => post.Id == id));

app.Run();

There will be no fewer than eleven warnings (CS8618: “Non-nullable property xxx must contain a non-null value when exiting constructor. Consider declaring the property as nullable”). One way would be to simply suppress this in the model classes using

#pragma warning disable CS8618
...
#pragma warning restore CS8618

But that is kind of unsatisfactory and can lead to bugs. If we take Post as an example, no properties will actually be null except Blog, as can be seen by making a request to /blogs/{id}/posts:

[
  {
    "id": "1",
    "title": "Post 1",
    "content": "Hello",
    "publishedOn": "2023-04-03T00:00:00+02:00",
    "archived": false,
    "blogId": "1",
    "blog": null
  },
  {
    "id": "2",
    "title": "Post 2",
    "content": "Another post",
    "publishedOn": "2023-04-03T00:00:00+02:00",
    "archived": false,
    "blogId": "1",
    "blog": null
  }
]

Sidetrack: To make the blog property appear in each post, we must include the Blog property in our query (and at the same time prevent cycles):

builder.Services.Configure<Microsoft.AspNetCore.Http.Json.JsonOptions>(options =>
{
    options.SerializerOptions.WriteIndented = true;
    options.SerializerOptions.ReferenceHandler = System.Text.Json.Serialization.ReferenceHandler.IgnoreCycles;
});

app.MapGet("/blogs/{id}/posts", async (MyContext dbContext, string id) =>
    await dbContext.Posts.Where(post => post.BlogId == id).Include(post => post.Blog).ToListAsync());

OK, other than disabling the warning, could we do something better? Maybe do as the warning says?

public class Post
{
    public string? Id { get; set; }
    public string? Title { get; set; }
    public string? Content { get; set; }
    public DateTimeOffset PublishedOn { get; set; }
    public bool Archived { get; set; }

    public string? BlogId { get; set; }
    public Blog? Blog { get; set; }
}

The problem with this is that Entity Framework now wants to alter the columns to be nullable, which we didn’t want since we didn’t define the properties as nullable in the first place. This can be seen by creating a test migration:

 dotnet ef migrations add Test
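In that migration, the Up method will contain AlterColumn operations along these lines (a sketch; the exact output depends on the database provider and model):

```csharp
protected override void Up(MigrationBuilder migrationBuilder)
{
    // EF Core now considers the string properties optional,
    // so it wants to alter the columns to allow NULL.
    migrationBuilder.AlterColumn<string>(
        name: "Title",
        table: "Post",
        type: "nvarchar(max)",
        nullable: true,
        oldClrType: typeof(string),
        oldType: "nvarchar(max)");
    // ...and similarly for the other formerly non-nullable columns.
}
```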

What if we instead create a constructor that takes all property values as parameters?

public class Blog
{
    public Blog(string id, string name, Uri siteUri, ICollection<Post> posts)
    {
        Id = id;
        Name = name;
        SiteUri = siteUri;
        Posts = posts;
    }
...
}

public class Post
{
    public Post(string id, string title, string content, DateTimeOffset publishedOn, bool archived, string blogId, Blog blog)
    {
        Id = id;
        Title = title;
        Content = content;
        PublishedOn = publishedOn;
        Archived = archived;
        BlogId = blogId;
        Blog = blog;
    }
...
}

Doesn’t work:

System.InvalidOperationException: No suitable constructor was found for entity type 'Blog'. The following constructors had parameters that could not be bound to properties of the entity type:
    Cannot bind 'posts' in 'Blog(string id, string name, Uri siteUri, ICollection<Post> posts)'
Note that only mapped properties can be bound to constructor parameters. Navigations to related entities, including references to owned types, cannot be bound.

What we can do is exclude the navigation property from the constructor:

public class Blog
{
    public Blog(string id, string name, Uri siteUri)
    {
        Id = id;
        Name = name;
        SiteUri = siteUri;
    }
...
}

public class Post
{
    public Post(string id, string title, string content, DateTimeOffset publishedOn, bool archived, string blogId)
    {
        Id = id;
        Title = title;
        Content = content;
        PublishedOn = publishedOn;
        Archived = archived;
        BlogId = blogId;
    }
...
}

This means we only have two warnings left:

Warning	CS8618	Non-nullable property 'Posts' must contain a non-null value when exiting constructor. Consider declaring the property as nullable.
Warning	CS8618	Non-nullable property 'Blog' must contain a non-null value when exiting constructor. Consider declaring the property as nullable.

This is actually quite in order – if we don’t include those properties in the LINQ query, they are going to be null, so it can be argued that the most sensible thing is to declare them as nullable:

[Table("Blog")]
public class Blog
{
    public Blog(string id, string name, Uri siteUri)
    {
        Id = id;
        Name = name;
        SiteUri = siteUri;
    }

    public string Id { get; set; }
    public string Name { get; set; }
    public Uri SiteUri { get; set; }
    public ICollection<Post>? Posts { get; }
}

[Table("Post")]
public class Post
{
    public Post(string id, string title, string content, DateTimeOffset publishedOn, bool archived, string blogId)
    {
        Id = id;
        Title = title;
        Content = content;
        PublishedOn = publishedOn;
        Archived = archived;
        BlogId = blogId;
    }

    public string Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTimeOffset PublishedOn { get; set; }
    public bool Archived { get; set; }
    public string BlogId { get; set; }
    public Blog? Blog { get; set; }
}

Could we take this one step further, eliminating the setters? No, unfortunately not, but we can change them to init:

[Table("Blog")]
public class Blog
{
    public Blog(string id, string name, Uri siteUri)
    {
        Id = id;
        Name = name;
        SiteUri = siteUri;
    }

    public string Id { get; init; }
    public string Name { get; init; }
    public Uri SiteUri { get; init; }
    public ICollection<Post>? Posts { get; init; }
}

[Table("Post")]
public class Post
{
    public Post(string id, string title, string content, DateTimeOffset publishedOn, bool archived, string blogId)
    {
        Id = id;
        Title = title;
        Content = content;
        PublishedOn = publishedOn;
        Archived = archived;
        BlogId = blogId;
    }

    public string Id { get; init; }
    public string Title { get; init; }
    public string Content { get; init; }
    public DateTimeOffset PublishedOn { get; init; }
    public bool Archived { get; init; }
    public string BlogId { get; init; }
    public Blog? Blog { get; init; }
}

In fact, we can take it one step further and use records instead. Navigation properties must still be standard properties with get and init:

[Table("Blog")]
public record class Blog(string Id, string Name, Uri SiteUri)
{
    public ICollection<Post>? Posts { get; init; }
}

[Table("Post")]
public record class Post(string Id, string Title, string Content, DateTimeOffset PublishedOn, bool Archived, string BlogId)
{
    public Blog? Blog { get; init; }
}

Serving Static Content from ASP.NET Core/Kestrel

This is really simple. Using a terminal, go to the root folder of the static content and issue the following command:

dotnet new web

Then, change Program.cs to this:

using Microsoft.Extensions.FileProviders;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.UseDefaultFiles(); 
app.UseFileServer(new FileServerOptions
{
    FileProvider = new PhysicalFileProvider(builder.Environment.ContentRootPath),
    RequestPath = "/app"
});
app.Run();

Now, suppose you have an index.html, you can access it from the browser using https://localhost:{port}/app.

Adding Authentication and Authorization to an ASP.NET Core Web App – the Minimalist Approach

I like to do stuff with as little code as possible. Faced with the issue of forcing authentication (using a Microsoft account), I first looked at some tutorials/guidelines, but they involved an identity database accessed using Entity Framework, which I was not interested in. I just wanted to force authentication; I didn’t want users to register. So here is the minimal approach I ended up with. The web app uses Razor Pages and .NET 6, by the way.

  • Add Microsoft’s identity package: install-package Microsoft.Identity.Web
  • In Program.cs, add services to the container and configure the HTTP request pipeline:
builder.Services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(builder.Configuration.GetSection("AzureAd"));
builder.Services.AddAuthorization(options =>
    options.FallbackPolicy = new AuthorizationPolicyBuilder().RequireUserName("user.email@outlook.com").Build());
...
app.UseAuthentication();
app.UseAuthorization();

In the lambda for AddAuthorization, I create a fallback policy, which means that pages lacking [Authorize] or [AllowAnonymous] attributes will get that policy. This particular policy only allows a single, hard-coded user.
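If you instead want to allow any signed-in user rather than a specific one, the fallback policy could be built like this (a sketch using the same builder type):

```csharp
// Fallback policy that only requires the user to be authenticated,
// without restricting to a particular user name.
builder.Services.AddAuthorization(options =>
    options.FallbackPolicy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build());
```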

  • In appsettings.json, add an AzureAd section:
"AzureAd": {
    "ClientId": "...",
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "consumers",
    "CallbackPath": "/signin-oidc"
  }

For generating a client ID, go to Azure Active Directory App Registrations. Under Redirect URIs, add signin-oidc and the root for all environments, e.g. https://localhost:7235/signin-oidc and https://localhost:7235/.

Using Private Azure DevOps NuGet Feeds in Docker Build

This was a tough one, which required a combination of answers from Stack Overflow. When building in an Azure DevOps pipeline, you don’t have to worry about authentication for consuming or pushing to private NuGet feeds in the same Azure DevOps instance. But if you want to build inside a Docker container, it becomes an issue. You have to use a (personal) access token (PAT) and update the NuGet source in the Dockerfile:

ARG PAT
RUN dotnet nuget update source your-nuget-source-name --username "your-nuget-source-name" --password "$PAT" --valid-authentication-types basic --store-password-in-clear-text

Both of the trailing options are necessary. Also, I had to modify NuGet.config. My feed has upstream sources enabled, so I had removed the nuget.org feed:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear/>
    <add key="my-nuget-source-name" value="https://..." />
  </packageSources>
</configuration>

But this didn’t work, so I had to change it to:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" /> 
    <add key="my-nuget-source-name" value="https://..." />
  </packageSources>
</configuration>

You can obtain a PAT by clicking your profile image in Azure DevOps and selecting Security. The token needs the Packaging Read (or Read & write) scope. You can then pass the PAT argument to docker build:

docker build -t somename --build-arg PAT="your generated token" .

In an Azure DevOps build step, you can use $(System.AccessToken) instead of your personal one.
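In pipeline YAML, such a build step could look like this sketch (the step layout and display name are examples):

```yaml
steps:
  - script: docker build -t somename --build-arg PAT="$(System.AccessToken)" .
    displayName: Build Docker image using the pipeline access token
```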

Integration/Stack Testing with ASP.NET Core 6

I have written some posts about this subject before, and it is now time to update that information to the current long-term release, ASP.NET Core with .NET 6. This post uses a web API as its example, but most of it should be applicable to web applications as well.

First some words about terminology. With a unit test, we mean a test of some aspect of a single component (class) with all its dependencies mocked. An integration test usually means a test with components integrated into a whole, where dependencies are not mocked. I work quite a lot with Web APIs (HTTP + JSON) and find it very useful to integration test each endpoint (or operation in a controller), but since they often ultimately depend on a database of some sort, the tests also depend on the right data being present. One way to solve that is to use an in-memory database, that is prepared with suitable data before the tests and discarded after. I tend to use a simpler method, where just the infrastructure component that talks to the database (often called repository) is replaced with a mocked one. So the whole stack is tested, except the database integration. Someone invented the term stack testing for this.

I almost always use xUnit as the testing framework and Moq for mocking, and that is assumed in this blog post.

Test Fixture

Many aspects are well described in the official documentation. Read it! The description uses a class fixture, but I often use a test collection fixture instead. If the application uses the new simplified startup with an implicit Program and no Startup class, the fixture can look like this:

using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.DependencyInjection;
using Moq;
using System;
using System.Linq;
using System.Net.Http;
using Xunit;

namespace TestProject;

[CollectionDefinition(Name)]
public class TestCollection : ICollectionFixture<TestHostFixture>
{
    // This class has no code, and is never created. Its purpose is simply
    // to be the place to apply [CollectionDefinition] and all the
    // ICollectionFixture<> interfaces.

    public const string Name = "Test Collection";
}

public class TestHostFixture
{
    public readonly HttpClient httpClient;
    private readonly IServiceProvider serviceProvider;

    public TestHostFixture()
    {
        var webAppFactory = new WebApplicationFactory<Program>()
            .WithWebHostBuilder(builder =>
            {
                builder.ConfigureServices(services =>
                {
                    services.Replace(Mock.Of<IDataStore>());
                });
            });
        httpClient = webAppFactory.CreateClient();
        serviceProvider = webAppFactory.Services;
    }

    public Mock<TService> GetMock<TService>() where TService : class
        => Mock.Get(serviceProvider.GetRequiredService<TService>());
}

The line starting with services.Replace is using the following extension method to replace a container registration of IDataStore with a mocked one:

public static class ServiceCollectionExtensions
{
    /// <summary>
    /// Replace a DI registration by a singleton mocked instance used in tests.
    /// </summary>
    public static void Replace<TService>(this IServiceCollection services, TService instance) where TService : class
    {
        var descriptor = services.Single(d => d.ServiceType == typeof(TService));
        services.Remove(descriptor);
        services.AddSingleton(instance);
    }
}

If using the classic Startup class with a ConfigureServices method, we can instead use ConfigureTestServices:

        var webAppFactory = new WebApplicationFactory<Startup>()
            .WithWebHostBuilder(builder =>
            {
                builder.ConfigureTestServices(services =>
                {

                    services.Add...
                });
            });

In a test, we can setup the mocked service and test like this:

using Moq;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using WebApiTest;
using Xunit;

namespace TestProject;

[Collection(TestCollection.Name)]
public class UnitTest1
{
    private readonly HttpClient httpClient;

    public UnitTest1(TestHostFixture testHostFixture)
    {
        testHostFixture.GetMock<IDataStore>()
            .Setup(store => store.SomeOperationAsync())
            .ReturnsAsync(new { /* ... */ });
        httpClient = testHostFixture.httpClient;
    }

    [Fact]
    public async Task Test1()
    {
        var forecasts = await httpClient.GetFromJsonAsync<IEnumerable<WeatherForecast>>("WeatherForecast");
        Assert.NotNull(forecasts);
        Assert.NotEmpty(forecasts);
    }
}

Of course, if more test classes use the same service, we can define an abstract base test class, put the [Collection(TestCollection.Name)] attribute on that, and move the constructor code above into it.
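Such a base class could look like this (a sketch based on the fixture above; the class name is arbitrary):

```csharp
// Shared base class so the collection attribute and common setup
// do not have to be repeated in every test class.
[Collection(TestCollection.Name)]
public abstract class TestBase
{
    protected readonly HttpClient httpClient;

    protected TestBase(TestHostFixture testHostFixture)
    {
        // Mock setup shared by all derived test classes can go here.
        httpClient = testHostFixture.httpClient;
    }
}
```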

Custom WebApplicationFactory

If the above is not sufficient, you can derive from WebApplicationFactory:

public class CustomWebApplicationFactory : WebApplicationFactory<Program>
{
}

// or

public class CustomWebApplicationFactory<TEntryPoint>
   : WebApplicationFactory<TEntryPoint> where TEntryPoint : class
{
}

Here is a list of methods you can override:

ConfigureClient(HttpClient): Configures HttpClient instances created by this WebApplicationFactory<TEntryPoint>.
ConfigureWebHost(IWebHostBuilder): Gives a fixture an opportunity to configure the application before it gets built.
CreateHost(IHostBuilder): Creates the IHost with the bootstrapped application in builder. This is only called for applications using IHostBuilder. Applications based on IWebHostBuilder will use CreateServer(IWebHostBuilder) instead.
CreateHostBuilder(): Creates an IHostBuilder used to set up TestServer.
CreateServer(IWebHostBuilder): Creates the TestServer with the bootstrapped application in builder. This is only called for applications using IWebHostBuilder. Applications based on IHostBuilder will use CreateHost(IHostBuilder) instead.
CreateWebHostBuilder(): Creates an IWebHostBuilder used to set up TestServer.
DisposeAsync(): Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources asynchronously.
GetTestAssemblies(): Gets the assemblies containing the functional tests. The WebApplicationFactoryContentRootAttribute applied to these assemblies defines the content root to use for the given TEntryPoint.

Overriding Configuration

This can be a tricky part. Fortunately, Mark Vincze describes several methods in a blog post:

  • Override the property of an Options type
  • Add an additional in-memory collection
  • Set the config values via environment variables
  • Add a dedicated appsettings.json to our test project

.NET 6, JWTs, Authentication and Authorization

Introduction

This post is about how to deal with OAuth2 and the usage of JWT (JSON Web Tokens) in a non-trivial system consisting of rich clients and web APIs built using Microsoft .NET 6.

Suppose we have the following architecture:

                            --> Web API 2
                           /
Client app --> Web API 1 --
                           \
                            --> Web API 3

So we have a “front-end” or “composition” API that is dependent on other APIs. The challenge here is that all (or some) of these APIs require a user token, so we must enable API 1 to call API 2 and API 3 on behalf of a user.

My first take on this was to create a token in the client app and let API 1 validate the token and then forward it to other APIs. But that doesn’t work as a general solution if different APIs require different scopes since you cannot mix scopes for different resources in the same token. (Well, you can, but the validation will fail.)

It actually turned out that, using the Microsoft Identity libraries, it is not that hard to have the front API create new tokens on behalf of users and cache them. I will describe how to develop and configure such a solution in this post. An important prerequisite is that Azure Active Directory is used for identity.

Back-end API

Application Registration

To protect the back-end API (API 2 and 3 above) we must first add an app registration in Azure Active Directory and configure it accordingly:

Authentication Section

Redirect URIs: (none)

Front-channel logout URL: (none)

Implicit grant and hybrid flows:

  • Access tokens: false
  • ID tokens: false

Allow public client flows: No

Expose an API Section

Scopes defined by this API

Scope: api://{application/client ID}/access_as_user
Who can consent: Admins and users
Admin consent display name: Access the API on behalf of a user
User consent display name: Access the API on your behalf
State: Enabled

Authorized client applications: Add the applications that are authorized to access this API, in our case API 1.

Code

Codewise, the template code is what we need. In order to use that, we need the following NuGet packages:

Microsoft.AspNetCore.Authentication.JwtBearer
Microsoft.AspNetCore.Authentication.OpenIdConnect
Microsoft.Identity.Web

During start-up, we need to add some service registrations:

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"));

We also need to configure the request pipeline:

// Must be before app.MapControllers():
app.UseAuthentication();
app.UseAuthorization();

In appsettings.json, we have:

"AzureAd": {
  "Instance": "https://login.microsoftonline.com/",
  "TenantId": "(tenant GUID or domain, e.g. yourcompany.onmicrosoft.com)",
  "ClientId": "(client ID GUID of this API in Azure Active Directory app registration)"
},
"RequiredScope": "access_as_user"

With this, no authentication or authorization is actually required yet. We can put the [Authorize] attribute on the appropriate endpoints or controllers, but to reduce the risk of forgetting it somewhere, I prefer to require authorization by default. We can do that during startup:

services.AddAuthorization(options =>
    options.FallbackPolicy = new AuthorizationPolicyBuilder()
        .RequireScope(Configuration.GetValue<string>("RequiredScope"))
        .Build());
    // The above means that endpoints without [Authorize] attribute etc get this policy.

Then, use the [AllowAnonymous] attribute where authentication and authorization is not required.
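For example, a health check endpoint can opt out of the fallback policy like this (a sketch; the /health route is hypothetical):

```csharp
// Minimal API endpoint that opts out of the fallback authorization policy.
app.MapGet("/health", () => Results.Ok("Healthy")).AllowAnonymous();
```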

Front-End API

This is the tricky one. What is needed is fairly well documented once you find it: Scenario: A web API that calls web APIs.

Application Registration

We need to create one app registration for the front-end API:

Authentication Section

Redirect URIs: (none)

Front-channel logout URL: (none)

Implicit grant and hybrid flows:

  • Access tokens: false
  • ID tokens: false

Allow public client flows: No

Expose an API Section

Scopes defined by this API

Scope: api://{application/client ID}/access_as_user
Who can consent: Admins and users
Admin consent display name: Access the API on behalf of a user
User consent display name: Access the API on your behalf
State: Enabled

Authorized client applications: Add the applications that are authorized to access this API, in our case the client app.

Code

We need to enable token acquisition to call downstream APIs (API 2 and 3) and we do that during service registration:

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(builder.Configuration.GetSection("AzureAd"))
    .EnableTokenAcquisitionToCallDownstreamApi()
    .AddInMemoryTokenCaches();

For this to work, we must configure a client secret or certificate:

"AzureAd": {
  "Instance": "https://login.microsoftonline.com/",
  "TenantId": "(tenant GUID or domain, e.g. yourcompany.onmicrosoft.com)",
  "ClientId": "(client ID GUID of this API)"

  // To call an API use either ClientSecret or ClientCertificates:
  "ClientSecret": "[Copy the client secret added to the app from the Azure portal]",
  "ClientCertificates": [

    {
      "SourceType": "KeyVault",
      "KeyVaultUrl": "https://msidentitywebsamples.vault.azure.net",
      "KeyVaultCertificateName": "MicrosoftIdentitySamplesCert"
    }
  ]
}

In each endpoint that calls the downstream API, we can inject a Microsoft.Identity.Web.ITokenAcquisition instance and use that for getting a token:

var token = await tokenAcquisition.GetAccessTokenForUserAsync(new[] { $"api://{downstream API client ID}/{downstream API scope}" });
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
// or
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

There are even more clever techniques that might work for you, documented in A web API that calls web APIs: Call an API.

Client App

The client app in my example is a rich client (WPF).

Application Registration

Authentication Section

Mobile and desktop applications

Redirect URIs

(Notice that the Application (client) ID is part of the last two.)

Allow public client flows: No

Code

Here, we reference the Microsoft.Identity.Client NuGet library and set target framework to net6.0-windows10.0.17763.0 or higher.

In App.xaml.cs, add a method that creates and returns a Microsoft.Identity.Client.IPublicClientApplication that can be registered in the dependency injection container:

private static Microsoft.Identity.Client.IPublicClientApplication CreateIdentityClient()
{
    var builder = Microsoft.Identity.Client.PublicClientApplicationBuilder.Create("{client ID}")
        .WithAuthority(Microsoft.Identity.Client.AzureCloudInstance.AzurePublic, "{tenant, e.g. mycompany.onmicrosoft.com}")
        .WithDefaultRedirectUri()
        .WithBroker();
    return builder.Build();
}

To create a token, use something along the lines of:

AuthenticationResult authenticationResult;
var scopes = new[] { $"api://{API client ID}/{API scope}" };
var account = PublicClientApplication.OperatingSystemAccount;
try
{
    authenticationResult = await publicClientApplication
        .AcquireTokenSilent(scopes, account)
        .ExecuteAsync();
}
catch (MsalUiRequiredException ex)
{
    logger.LogError(ex, "Could not acquire token silently.");
    // A MsalUiRequiredException happened on AcquireTokenSilent. 
    // This indicates you need to call AcquireTokenInteractive to acquire a token
    authenticationResult = await publicClientApplication.AcquireTokenInteractive(scopes)
        .WithAccount(account)
        .WithPrompt(Prompt.SelectAccount)
        .ExecuteAsync();
}