Categories
autofac c#

Autofac 6.0 Released

I’m super excited today, along with the rest of the Autofac team, to be able to announce the release of Autofac 6.0!

This version has got some major new features in it, and general improvements throughout the library, including an overhaul of the Autofac internals to use a customisable Resolve Pipeline when resolving services, built-in diagnostic support, support for the composite pattern, and more!

I’d like to thank everyone on the Autofac team for the load of effort that has gone into this release; I’m pretty thrilled to be able to unleash it into the world.

There’s a couple of breaking changes you should be aware of, and then I’ll go through an overview of some of the great new features at your disposal!

Breaking Changes

Despite the pretty big internal changes, the number of breaking changes is pretty low; we’ve managed to avoid any real behavioural changes.

You can see the complete set of breaking code changes between 5.x and 6.0 in our documentation. I’ll list some of the more pertinent ones here.

Framework Version Targeting Changes

Starting with Autofac 6.0, we now only target netstandard2.0 and netstandard2.1; we have removed the explicit target for net461.

The impact to you is that, while Autofac will still work on .NET Framework 4.6.1 as it did before, we strongly encourage you to upgrade to .NET Framework 4.7.2 or higher, as per the .NET Standard Documentation, to avoid any of the known dependency issues when using .NET Standard packages in .NET Framework 4.6.1.

Custom Registration Sources

If you have implemented a custom registration source you will need to update the IRegistrationSource.RegistrationsFor method.

// 5.x
IEnumerable<IComponentRegistration> RegistrationsFor(Service service, Func<Service, IEnumerable<IComponentRegistration>> registrationAccessor);

// 6.x
IEnumerable<IComponentRegistration> RegistrationsFor(Service service, Func<Service, IEnumerable<ServiceRegistration>> registrationAccessor);

The registrationAccessor parameter is a callback that, given a service, will return the set of registrations available for that service.

In 6.x, the return type of this callback was changed from IEnumerable&lt;IComponentRegistration&gt; to IEnumerable&lt;ServiceRegistration&gt;.

A ServiceRegistration encapsulates the registration (via the Registration property of the type), but also exposes the resolve pipeline Autofac needs in order to resolve a registration.
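
In practice, updating most sources is mechanical. Here’s a minimal sketch of what the updated method body might look like (the surrounding source class is whatever you already have, and the body shown here is purely illustrative):

public IEnumerable<IComponentRegistration> RegistrationsFor(
    Service service,
    Func<Service, IEnumerable<ServiceRegistration>> registrationAccessor)
{
    // The accessor now hands back ServiceRegistration values; the underlying
    // IComponentRegistration is still available via the Registration property.
    foreach (var serviceRegistration in registrationAccessor(service))
    {
        IComponentRegistration existingRegistration = serviceRegistration.Registration;

        // ... use 'existingRegistration' exactly as your 5.x code did ...
    }

    // Yield any additional registrations your source provides (omitted here).
    yield break;
}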

Custom Constructor Selectors

If you have implemented a custom IConstructorSelector to pass to the UsingConstructor registration method, you will need to update your implementation to use BoundConstructor instead of ConstructorParameterBinding.

The new BoundConstructor type exposes similar properties (including the TargetConstructor):

// v5.x
ConstructorParameterBinding SelectConstructorBinding(ConstructorParameterBinding[] constructorBindings, IEnumerable<Parameter> parameters);

// v6.x
BoundConstructor SelectConstructorBinding(BoundConstructor[] constructorBindings, IEnumerable<Parameter> parameters);
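
If you don’t need anything fancy, the update is usually just the signature change. Here’s a minimal sketch of an updated selector that simply picks the constructor with the most parameters (the class name is illustrative):

public class MostParametersConstructorSelector : IConstructorSelector
{
    public BoundConstructor SelectConstructorBinding(BoundConstructor[] constructorBindings, IEnumerable<Parameter> parameters)
    {
        // TargetConstructor still exposes the underlying ConstructorInfo.
        return constructorBindings
            .OrderByDescending(binding => binding.TargetConstructor.GetParameters().Length)
            .First();
    }
}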

New Features/Improvements

There are a tonne of new features in Autofac 6.0; I’ll hit some of the highlights here.

Pipelines

The internals of Autofac have been through a major overhaul, so that the work of actually resolving an instance of a registration is implemented as a pipeline, consisting of middleware that handles each part of the process.

The existing ways you configure Autofac haven’t changed, but we have added some powerful new extensibility points you can use for advanced scenarios.

For example, you can add pipeline middleware that runs on every resolve of a service, before any built-in Autofac code:

var builder = new ContainerBuilder();

// Run some middleware at the very start of the pipeline, before any core Autofac behaviour.
builder.RegisterServiceMiddleware<IMyService>(PipelinePhase.ResolveRequestStart, (context, next) =>
{
    Console.WriteLine("Requesting Service: {0}", context.Service);

    // Continue the pipeline.
    next(context);
});

Anyone familiar with ASP.NET Core middleware may notice some similarities here! We have a context, and a next method to call to continue the pipeline.

You can check out our detailed docs on pipelines for a complete run down on how these work, and how to use them.

A lot of the following new features are only possible because of the pipeline change; it gave us the flexibility to do new and interesting things!

Support for the Composite Pattern

For some time we’ve been working towards adding built-in support for the Composite Pattern, going back to 2016.

Well, it’s finally here, and gives you the new RegisterComposite method on the ContainerBuilder!

Here’s an example from our documentation, where we have multiple log sinks that we want to wrap in a CompositeLogSink:

var builder = new ContainerBuilder();

// Here are our normal implementations.
builder.RegisterType<FileLogSink>().As<ILogSink>();
builder.RegisterType<DbLogSink>().As<ILogSink>();

// We're going to register a class to act as a Composite wrapper for ILogSink
builder.RegisterComposite<CompositeLogSink, ILogSink>();

var container = builder.Build();

// This will return an instance of `CompositeLogSink`.
var logSink = container.Resolve<ILogSink>();

logSink.WriteLog("log message");

// ...

// Here's our composite class; it's just a regular class that injects a
// collection of the same service.
public class CompositeLogSink : ILogSink
{
    private readonly IEnumerable<ILogSink> _implementations;

    public CompositeLogSink(IEnumerable<ILogSink> implementations)
    {
        // implementations will contain all the 'actual' registrations.
        _implementations = implementations;
    }

    public void WriteLog(string log)
    {
        foreach (var sink in _implementations)
        {
            sink.WriteLog(log);
        }
    }
}

Thanks to @johneking for his input and feedback on the design of the composites implementation.

There’s more guidance around how to use composites (including how to register open-generic composites, and use relationships like Lazy and Func) in our documentation on composites.
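
As a quick taste, here’s a minimal sketch of registering an open-generic composite, assuming the RegisterGenericComposite method described in those docs and some illustrative validator types:

var builder = new ContainerBuilder();

// Normal open-generic implementations.
builder.RegisterGeneric(typeof(DataAnnotationsValidator<>)).As(typeof(IValidator<>));
builder.RegisterGeneric(typeof(FluentValidator<>)).As(typeof(IValidator<>));

// An open-generic composite that wraps all the IValidator<T> implementations.
builder.RegisterGenericComposite(typeof(CompositeValidator<>), typeof(IValidator<>));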

Diagnostic Tracing

One thing that has always been a bit challenging with Autofac (and Dependency Injection in general, really) is figuring out why something isn’t working, and in particular which of the services in your really complex object graph is causing the problem!

Happily, in Autofac 6.0, we have added built-in support for the .NET DiagnosticSource class, and we generate diagnostic events while we are resolving a service.

The easiest way to get started with our diagnostics is using the Autofac DefaultDiagnosticTracer, which will generate a tree-like view of each resolve, with dependencies, showing you exactly where things go wrong.

var builder = new ContainerBuilder();

// A depends on B1 and B2, but B2 is going to fail.
builder.RegisterType<A>();
builder.RegisterType<B1>();
builder.Register<B2>(ctx => throw new InvalidOperationException("No thanks."));

var container = builder.Build();

// Let's add a tracer.
var tracer = new DefaultDiagnosticTracer();
tracer.OperationCompleted += (sender, args) =>
{
    // TraceContent contains the output.
    Trace.WriteLine(args.TraceContent);
};

container.SubscribeToDiagnostics(tracer);

// Resolve A - will fail.
container.Resolve<A>();

When that Resolve<A>() call completes, our tracer’s event handler will fire, and TraceContent contains your verbose trace:

Resolve Operation Starting
{
  Resolve Request Starting
  {
    Service: AutofacDotGraph.A
    Component: AutofacDotGraph.A

    Pipeline:
    -> CircularDependencyDetectorMiddleware
      -> ScopeSelectionMiddleware
        -> SharingMiddleware
          -> RegistrationPipelineInvokeMiddleware
            -> ActivatorErrorHandlingMiddleware
              -> DisposalTrackingMiddleware
                -> A (ReflectionActivator)
                  Resolve Request Starting
                  {
                    Service: AutofacDotGraph.B1
                    Component: AutofacDotGraph.B1

                    Pipeline:
                    -> CircularDependencyDetectorMiddleware
                      -> ScopeSelectionMiddleware
                        -> SharingMiddleware
                          -> RegistrationPipelineInvokeMiddleware
                            -> ActivatorErrorHandlingMiddleware
                              -> DisposalTrackingMiddleware
                                -> B1 (ReflectionActivator)
                                <- B1 (ReflectionActivator)
                              <- DisposalTrackingMiddleware
                            <- ActivatorErrorHandlingMiddleware
                          <- RegistrationPipelineInvokeMiddleware
                        <- SharingMiddleware
                      <- ScopeSelectionMiddleware
                    <- CircularDependencyDetectorMiddleware
                  }
                  Resolve Request Succeeded; result instance was AutofacDotGraph.B1
                  Resolve Request Starting
                  {
                    Service: AutofacDotGraph.B2
                    Component: λ:AutofacDotGraph.B2

                    Pipeline:
                    -> CircularDependencyDetectorMiddleware
                      -> ScopeSelectionMiddleware
                        -> SharingMiddleware
                          -> RegistrationPipelineInvokeMiddleware
                            -> ActivatorErrorHandlingMiddleware
                              -> DisposalTrackingMiddleware
                                -> λ:AutofacDotGraph.B2
                                X- λ:AutofacDotGraph.B2
                              X- DisposalTrackingMiddleware
                            X- ActivatorErrorHandlingMiddleware
                          X- RegistrationPipelineInvokeMiddleware
                        X- SharingMiddleware
                      X- ScopeSelectionMiddleware
                    X- CircularDependencyDetectorMiddleware
                  }
                  Resolve Request FAILED
                    System.InvalidOperationException: No thanks.
                       at AutofacExamples.<>c.<ErrorExample>b__0_0(IComponentContext ctx) in D:\Experiments\Autofac\Examples.cs:line 24
                       at Autofac.RegistrationExtensions.<>c__DisplayClass39_0`1.<Register>b__0(IComponentContext c, IEnumerable`1 p)
                       at Autofac.Builder.RegistrationBuilder.<>c__DisplayClass0_0`1.<ForDelegate>b__0(IComponentContext c, IEnumerable`1 p)
                       at Autofac.Core.Activators.Delegate.DelegateActivator.ActivateInstance(IComponentContext context, IEnumerable`1 parameters)
                       ...
                X- A (ReflectionActivator)
              X- DisposalTrackingMiddleware
            X- ActivatorErrorHandlingMiddleware
          X- RegistrationPipelineInvokeMiddleware
        X- SharingMiddleware
      X- ScopeSelectionMiddleware
    X- CircularDependencyDetectorMiddleware
  }
  Resolve Request FAILED: Nested Resolve Failed
}
Operation FAILED

There’s a lot there, but you can see the start and end of the request for each of the child dependencies, including content telling you exactly which registration failed and every pipeline middleware visited during the operation.

We’re hoping this will help people investigate problems in their container, and make it easier to support you!

We’ve got some detailed documentation on diagnostics, including how to set up your own tracers; go check it out for more info.

Visualising your Services

Building on top of the diagnostics support I just mentioned, we’ve also added support for outputting graphs (in DOT format) representing your resolve operation, which can then be rendered to an image, using the Graphviz tools (or anything that can render the DOT format).

This feature is available in the new NuGet package, Autofac.Diagnostics.DotGraph.

var builder = new ContainerBuilder();

// Here's my complicated(ish) dependency graph.
builder.RegisterType<A>();
builder.RegisterType<B1>();
builder.RegisterType<B2>();
builder.RegisterType<C1>();
builder.RegisterType<C2>().SingleInstance();

var container = builder.Build();

// Using the new DOT tracer here.
var tracer = new DotDiagnosticTracer();
tracer.OperationCompleted += (sender, args) =>
{
    // Writing to file in-line may not be ideal, this is just an example.
    File.WriteAllText("graphContent.dot", args.TraceContent);
};

container.SubscribeToDiagnostics(tracer);

container.Resolve<A>();

Once I convert this to a visual graph (there’s a useful VSCode Extension that will quickly preview the graph for you), I get this:

If you’ve got a big dependency graph, hopefully this will help you understand the chain of dependencies more readily!

There’s more information on the DOT Graph support in our documentation.

Pooled Instances

A new Autofac package, Autofac.Pooling, is now available that provides the functionality to maintain a pool of object instances within your Autofac container.

The idea is that, for certain resources (like connections to external components), rather than getting a new instance for every lifetime scope and disposing it at the end of the scope, you can retrieve an instance from a container-shared pool and return it to the pool when the scope ends.

You can do this by configuring a registration with the PooledInstancePerLifetimeScope or PooledInstancePerMatchingLifetimeScope methods:

var builder = new ContainerBuilder();

// Configure my pooled registration.
builder.RegisterType<MyCustomConnection>()
        .As<ICustomConnection>()
        .PooledInstancePerLifetimeScope();

var container = builder.Build();

using (var scope = container.BeginLifetimeScope())
{
    // Creates a new instance of MyCustomConnection
    var instance = scope.Resolve<ICustomConnection>();

    instance.DoSomething();
}

// When the scope ends, the instance of MyCustomConnection
// is returned to the pool, rather than being disposed.

using (var scope2 = container.BeginLifetimeScope())
{
    // Does **not** create a new instance, but instead gets the
    // previous instance from the pool.
    var instance = scope2.Resolve<ICustomConnection>();

    instance.DoSomething();
}

// Instance gets returned back to the pool again at the
// end of the lifetime scope.

You can resolve these pooled services like any normal service, but you’ll be getting an instance from the pool when you do!

Check out the documentation on pooled instances for details on how to control pool capacity, implement custom behaviour when instances are retrieved/returned to the pool, and even how to implement custom pool policies to do interesting things like throttle your application based on the capacity of the pool!

Generic Delegate Registrations

Autofac has had the concept of open generic registrations for some time, where you can specify an open-generic type to provide an open-generic service.

var builder = new ContainerBuilder();
// Register a generic that will provide closed types of IService<>
builder.RegisterGeneric(typeof(Implementation<>)).As(typeof(IService<>));

In Autofac 6.0, we’ve added the ability to register a delegate to provide the generic, instead of a type, for advanced scenarios where you need to make custom decisions about the resulting closed type.

var builder = new ContainerBuilder();

builder.RegisterGeneric((ctxt, types, parameters) =>
{
    // Make decisions about what closed type to use.
    if (types.Contains(typeof(string)))
    {
        return new StringSpecializedImplementation();
    }

    return Activator.CreateInstance(typeof(GeneralImplementation<>).MakeGenericType(types));
}).As(typeof(IService<>));

Concurrency Performance Improvements

A lot of work has gone into performance with this release of Autofac, particularly around highly-concurrent scenarios, like web servers.

We’ve removed a load of locking from the core of Autofac, to the point that once a service has been resolved once from a lifetime scope, subsequent resolves of that service are lock-free.

In some highly-concurrent scenarios, we’ve seen a 4x reduction in the time it takes to resolve objects through Autofac!

Thanks @alsami for the work on automating our benchmark execution, @twsouthwick for work on caching generated delegate types, and @DamirAinullin for varied performance tweaks.

Other Changes

  • Support async handlers for OnPreparing, OnActivating, OnActivated and OnRelease (PR#1172).
  • Circular Dependency depth changes to allow extremely deep dependency graphs that have no circular references (PR#1148).
  • ContainerBuilder is now sealed (Issue#1120).
  • Explicitly injected properties can now be declared using an expression (PR#1123, thanks @mashbrno).

Still Todo

We’re working hard to get all of the ~25 integration packages pushed to NuGet as quickly as we can, so please bear with us while we get these sorted.

Some of this is sitting in branches ready to go; other things need to be done now that we have this core package out there.

If your favorite integration isn’t ready yet, we’re doing our best. Rather than filing "When will this be ready?" issues, consider pull requests with the required updates.

Thank You!

I’d like to personally thank all the contributors who contributed to the 6.0 release since we shipped 5.0.

Hopefully the GitHub Contributors page hasn’t let me down; I wouldn’t want to miss anyone!

Categories
c# Uncategorized

Making Users Re-Enter their Password: ASP.NET Core & IdentityServer4

It’s often good security behaviour to require users to re-enter their password when they want to change some secure property of their account, like generate personal access tokens, or change their Multi-factor Authentication (MFA) settings.

You may have seen the Github ‘sudo’ mode, which asks you to re-enter your password when you try to change something sensitive.

The GitHub sudo mode prompt.

Most of the time a user’s session is long-lived, so when they want to do something sensitive, it’s best to check they still are who they say they are.

I’ve been working on the implementation of IdentityServer4 at Enclave for the past week or so, and had this requirement to require password confirmation before users can modify their MFA settings.

I thought I’d write up how I did this for posterity, because it took a little figuring out.

The Layout

In our application, we have two components, both running on ASP.NET Core 3.1:

  • The Accounts app that holds all the user data; this is where Identity Server runs; we use ASP.NET Core Identity to do the actual user management.
  • The Portal app that holds the UI. This is a straightforward MVC app right now, no JS or SPA to worry about.

To make changes to a user’s account settings, the Profile Controller in the Portal app makes API calls to the Accounts app.

The Portal calls APIs in the Accounts app

All the API calls to the Accounts app are already secured using the Access Token from when the user logged in; we have an ASP.NET Core Policy in place for our additional API (as per the IdentityServer docs) to protect it.

The Goal

The desired outcome here is that specific sensitive API endpoints within the Accounts app require the calling user to have undergone a second verification, where they must have re-entered their password recently in order to use the API.

What we want to do is:

  • Allow the Portal app to request a ‘step-up’ access token from the Accounts app.
  • Limit the step-up access token to a short lifetime (say 15 minutes), with no refresh tokens.
  • Call a sensitive API on the Accounts App, and have the Accounts App validate the step-up token.

Issuing the Step-Up Token

First up, we need to generate a suitable access token when asked. I’m going to add a new controller, StepUpApiController, in the Accounts app.

This controller is going to have a single endpoint, which requires a regular access token before you can call it.

We’re going to use the provided IdentityServerTools class, which we can inject into our controller, to do the actual token generation.

Without further ado, let’s look at the code for the controller:

[Route("api/stepup")]
[ApiController]
[Authorize(ApiScopePolicy.WriteUser)]
public class StepUpApiController : ControllerBase
{
    private static readonly TimeSpan ValidPeriod = TimeSpan.FromMinutes(15);

    private readonly UserManager<ApplicationUser> _userManager;
    private readonly IdentityServerTools _idTools;

    public StepUpApiController(UserManager<ApplicationUser> userManager,
                               IdentityServerTools idTools)
    {
        _userManager = userManager;
        _idTools = idTools;
    }

    [HttpPost]
    public async Task<StepUpApiResponse> StepUp(StepUpApiModel model)
    {
        var user = await _userManager.GetUserAsync(User);

        // Verify the provided password.
        if (await _userManager.CheckPasswordAsync(user, model.Password))
        {
            var clientId = User.FindFirstValue(JwtClaimTypes.ClientId);

            var claims = new Claim[]
            {
                new Claim(JwtClaimTypes.Subject, User.FindFirstValue(JwtClaimTypes.Subject)),
            };

            // Create a token that:
            //  - Is associated to the User's client.
            //  - Is only valid for our configured period (15 minutes)
            //  - Has a single scope, indicating that the token can only be used for stepping up.
            //  - Has the same subject as the user.
            var token = await _idTools.IssueClientJwtAsync(
                clientId,
                (int)ValidPeriod.TotalSeconds,
                new[] { "account-stepup" },
                additionalClaims: claims);

            return new StepUpApiResponse { Token = token, ValidUntil = DateTime.UtcNow.Add(ValidPeriod) };
        }

        Response.StatusCode = StatusCodes.Status401Unauthorized;

        return null;
    }
}

A couple of important points here:

  • In order to even access this API, the normal access token being passed in the request must conform to our own WriteUser scope policy, which requires a particular scope to be in the access token to get to this API.
  • This generated access token is really basic; it has a single scope, “account-stepup”, and only a single additional claim containing the subject.
  • We associate the step-up token to the same client ID as the normal access token, so only the requesting client can use that token.
  • We explicitly state a relatively short lifetime on the token (15 minutes here).
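
The StepUpApiModel and StepUpApiResponse types used above are just simple DTOs; here’s a minimal sketch of what they might look like (the property names are inferred from how they’re used in the controller):

public class StepUpApiModel
{
    public string Password { get; set; }
}

public class StepUpApiResponse
{
    public string Token { get; set; }

    public DateTime ValidUntil { get; set; }
}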

Sending the Token

This is the easy bit; once you have the token, you can store it somewhere in the client, and send it in a subsequent request.

Before sending the step-up token, you’ll want to check the expiry on it, and if you need a new one, then prompt the user for their credentials and start the process again.

For any request to the sensitive API, we need to include both the normal access token from the user’s session, plus the new step-up token.

I set this up when I create the HttpClient:

private async Task<HttpClient> GetClient(string? stepUpToken = null)
{
    var client = new HttpClient();

    // Set the base address to the URL of our Accounts app.
    client.BaseAddress = _accountUrl;

    // Get the regular user access token in the session and add that as the normal
    // Authorization Bearer token.
    // _contextAccessor is an instance of IHttpContextAccessor.
    var accessToken = await _contextAccessor.HttpContext.GetUserAccessTokenAsync();
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    if (stepUpToken is object)
    {
        // We have a step-up token; include it as an additional header (without the Bearer indicator).
        client.DefaultRequestHeaders.Add("X-Authorization-StepUp", stepUpToken);
    }

    return client;
}

That X-Authorization-StepUp header is where we’re going to look when checking for the token in the Accounts app.
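
For completeness, obtaining the step-up token in the first place is just a normal call to the endpoint we defined earlier. Here’s a rough sketch of what that might look like in the Portal, assuming the System.Net.Http.Json extensions and the GetClient helper above (error handling is up to you):

private async Task<string?> RequestStepUpToken(string password)
{
    // Use the regular access token to call the step-up endpoint.
    var client = await GetClient();

    var response = await client.PostAsJsonAsync("api/stepup", new { Password = password });

    if (!response.IsSuccessStatusCode)
    {
        // Wrong password (or some other failure); the user will need to try again.
        return null;
    }

    var result = await response.Content.ReadFromJsonAsync<StepUpApiResponse>();

    return result?.Token;
}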

Validating the Step-Up Token

To validate a provided step-up token in the Accounts app, I’m going to define a custom ASP.NET Core Policy that requires the API call to provide a step-up token.

If there are terms in here that don’t seem immediately obvious, check out the docs on Policy-based authorization in ASP.NET Core. It’s a complex topic, but the docs do a pretty good job of breaking it down.

Let’s take a look at an API call endpoint that requires step-up:

[ApiController]
[Route("api/user")]
[Authorize(ApiScopePolicy.WriteUser)]
public class UserApiController : Controller
{
    [HttpPost("totp-enable")]
    [Authorize("require-stepup")]
    public async Task<IActionResult> EnableTotp(TotpApiEnableModel model)
    {
        // ... do stuff ...
    }
}

That Authorize attribute I placed on the action method specifies that we want to enforce a require-stepup policy on this action. Authorize attributes are additive, so a request to EnableTotp requires both our normal WriteUser policy and our step-up policy.

Defining our Policy

To define our require-stepup policy, let’s jump over to our Startup class; specifically, in ConfigureServices, where we set up Authorization using the AddAuthorization method:

services.AddAuthorization(options =>
{
    // Other policies omitted...

    options.AddPolicy("require-stepup", policy =>
    { 
        policy.AddAuthenticationSchemes("local-api-scheme");
        policy.RequireAuthenticatedUser();
        
        // Add a new requirement to the policy (for step-up).
        policy.AddRequirements(new StepUpRequirement());
    });
});

The ‘local-api-scheme’ is the built-in scheme provided by IdentityServer for protecting local API calls.

That requirement class, StepUpRequirement, is just a simple marker class for indicating to the policy that we need step-up. It’s also how we wire up a handler to check that requirement:

public class StepUpRequirement : IAuthorizationRequirement
{
}

Defining our Authorization Handler

We now need an Authorization Handler that lets us check incoming requests meet our new step-up requirement.

So, let’s create one:

public class StepUpAuthorisationHandler : AuthorizationHandler<StepUpRequirement>
{
    private const string StepUpTokenHeader = "X-Authorization-StepUp";

    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly ITokenValidator _tokenValidator;

    public StepUpAuthorisationHandler(
        IHttpContextAccessor httpContextAccessor,
        ITokenValidator tokenValidator)
    {
        _httpContextAccessor = httpContextAccessor;
        _tokenValidator = tokenValidator;
    }

    /// <summary>
    /// Called by the framework when we need to check a request.
    /// </summary>
    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        StepUpRequirement requirement)
    {
        // Only interested in authenticated users.
        if (!context.User.IsAuthenticated())
        {
            return;
        }

        var httpContext = _httpContextAccessor.HttpContext;

        // Look for our special request header.
        if (httpContext.Request.Headers.TryGetValue(StepUpTokenHeader, out var stepUpHeader))
        {
            var headerValue = stepUpHeader.FirstOrDefault();

            if (!string.IsNullOrEmpty(headerValue))
            {
                // Call our method to check the token.
                var validated = await ValidateStepUp(context.User, headerValue);

                // Token was valid, so succeed.
                // We don't explicitly have to fail, because that is the default.
                if (validated)
                {
                    context.Succeed(requirement);
                }
            }
        }
    }

    private async Task<bool> ValidateStepUp(ClaimsPrincipal user, string header)
    {
        // Use the normal token validator to check the access token is valid, and contains our
        // special expected scope.
        var validated = await _tokenValidator.ValidateAccessTokenAsync(header, "account-stepup");

        if (validated.IsError)
        {
            // Bad token.
            return false;
        }

        // Validate that the step-up token is for the same client as the access token.
        var clientIdClaim = validated.Claims.FirstOrDefault(x => x.Type == JwtClaimTypes.ClientId);

        if (clientIdClaim is null || clientIdClaim.Value != user.FindFirstValue(JwtClaimTypes.ClientId))
        {
            return false;
        }

        // Confirm a subject is supplied.
        var subjectClaim = validated.Claims.FirstOrDefault(x => x.Type == JwtClaimTypes.Subject);

        if (subjectClaim is null)
        {
            return false;
        }

        // Confirm that the subject of the stepup and the current user are the same.
        return subjectClaim.Value == user.FindFirstValue(JwtClaimTypes.Subject);
    }
}

Again, let’s take a look at the important bits of the class:

  • The handler derives from AuthorizationHandler<StepUpRequirement>, indicating to ASP.NET that we are a handler for our custom requirement.
  • We stop early if there is no authenticated user; that’s because the step-up token is only valid for a user who is already logged in.
  • We inject and use IdentityServer’s ITokenValidator interface to let us validate the token using ValidateAccessTokenAsync; we specify the scope we require.
  • We check that the client ID of the step-up token is the same as the regular access token used to authenticate with the API.
  • We check that the subjects match (i.e. this step-up token is for the same user).

The final hurdle is to register our authorization handler in our Startup class:

services.AddSingleton<IAuthorizationHandler, StepUpAuthorisationHandler>();
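
Since the handler takes a dependency on IHttpContextAccessor, you’ll also want to make sure that’s registered (it may already be, depending on your setup):

services.AddHttpContextAccessor();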

Wrapping Up

There you go, we’ve now got a secondary access token being issued to indicate step-up has been done, and we’ve got a custom authorization handler to check our new token.

Categories
c#

Adding the Username to the Logs for every ASP.NET Core Request with NLog

I’m currently investigating how we port a large ASP.NET application to ASP.NET Core, and one of the things I had to figure out this morning was how to include information about the logged-in user in our logs.

The most important value to collect is the username, but I also need to be able to collect other information from the session at some point.

In the original ASP.NET app we did some slightly questionable HttpContext.Current access inside the logging providers, which I didn’t love.

In ASP.NET Core however, I can combine the use of the middleware system with NLog to add this information to my logs in a much better/easier way.

Adding NLog to my App

To add NLog to my ASP.NET Core app, I just followed the basic instructions at https://github.com/NLog/NLog/wiki/Getting-started-with-ASP.NET-Core-2 to get up and going (that guide tells you what you need to put in your Program and Startup classes to wire everything up).

I then updated the default HomeController to write a log message in the index page:

public class HomeController : Controller
{
    public ILogger<HomeController> Logger { get; }

    public HomeController(ILogger<HomeController> logger)
    {
        Logger = logger;
    }

    public IActionResult Index()
    {
        Logger.LogInformation("User loaded the home page");

        return View();
    }

    // and the rest..
}

So when I launch my app, I get my log messages out (this is just the basic ASP.NET Core site template):

Adding our Username to Log Events

First up, I’m just going to add an extremely basic action to my HomeController that will sign me in with a test user (I’ve already set up the Startup class configuration to add cookie authentication):

public async Task<IActionResult> SignMeIn()
{
    // Just do a real basic login.
    var claims = new List<Claim>
    {
        new Claim(ClaimTypes.Name, "ajevans"),
        new Claim("FullName", "Alistair Evans"),
        new Claim(ClaimTypes.Role, "Administrator"),
    };

    var claimsIdentity = new ClaimsIdentity(claims, "AuthCookie");

    await HttpContext.SignInAsync(
        "AuthCookie",
        new ClaimsPrincipal(claimsIdentity));

    Logger.LogInformation("Signed in {user}", claimsIdentity.Name);

    return RedirectToAction(nameof(Index));
}

Now we can do the middleware changes (this is the important bit). In the Startup class’ Configure method, we have an additional Use method:

public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles();
    app.UseCookiePolicy();
    app.UseAuthentication();

    app.Use(async (ctxt, next) =>
    {
        if (ctxt.User == null)
        {
            // Not logged in, so nothing to do.
            await next();
        }
        else
        {
            // Set a scoped value in the NLog context, then call the next
            // middleware.
            var userName = ctxt.User.Identity.Name;

            using (MappedDiagnosticsLogicalContext.SetScoped("userName", userName))
            {
                await next();
            }
        }
    });

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

The MappedDiagnosticsLogicalContext class is an NLog class that lets you provide values that are scoped to the current async context. These values are attached to every log event raised inside the using block. The call to next() inside our using means that the entirety of the middleware pipeline (from that point onwards) has the userName property attached to it.

Displaying the Username in the Log

The last part of this is to update our nlog.config to display the username.

To do this, we use the MDLC Layout Renderer to pull the userName property out of the log event, by adding ${mdlc:userName} inside our layout:

<!-- the targets to write to -->
<targets>
  <!-- write logs to file -->
  <target xsi:type="File" name="allfile" fileName="c:\temp\nlog-all-${shortdate}.log"
          layout="${longdate} | ${uppercase:${level:padding=5}} | ${mdlc:userName} | ${logger} | ${message} ${exception:format=tostring}" />

  <!-- another file log, only own logs. Uses some ASP.NET core renderers -->
  <target xsi:type="File" name="ownFile-web" fileName="c:\temp\nlog-own-${shortdate}.log"
          layout="${longdate} | ${uppercase:${level:padding=5}} | ${mdlc:userName} | ${logger} | ${message} ${exception:format=tostring} | url: ${aspnet-request-url} | action: ${aspnet-mvc-action}" />
</targets>

Now, if we start the application, and log in, we get our username in each log event!

2019-09-07 10:40:15.4822 | DEBUG | | Main.Program | Starting Application | url: | action:
2019-09-07 10:40:18.5081 | INFO | | Main.Controllers.HomeController | Home page | url: https://localhost/ | action: Index
2019-09-07 10:40:40.3932 | INFO | | Main.Controllers.HomeController | Signed in ajevans | url: https://localhost/Home/SignMeIn | action: SignMeIn
2019-09-07 10:40:40.4028 | INFO | ajevans | Main.Controllers.HomeController | Home page | url: https://localhost/ | action: Index
2019-09-07 10:40:58.8799 | INFO | ajevans | Main.Controllers.HomeController | Home page | url: https://localhost/ | action: Index
2019-09-07 10:41:05.0707 | INFO | ajevans | Main.Controllers.HomeController | Home page | url: https://localhost/ | action: Index

The real bonus of assigning the username inside the middleware though is that the extra detail gets added to the internal Microsoft.* logs as well:

2019-09-07 10:41:05.0707 | INFO | | Microsoft.AspNetCore.Hosting.Internal.WebHost | Request starting HTTP/1.1 GET https://localhost:5001/
2019-09-07 10:41:05.0707 | INFO | ajevans | Microsoft.AspNetCore.Routing.EndpointMiddleware | Executing endpoint 'Main.Controllers.HomeController.Index (Main)'
2019-09-07 10:41:05.0707 | INFO | ajevans | Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker | Route matched with {action = "Index", controller = "Home"}. Executing controller action with signature Microsoft.AspNetCore.Mvc.IActionResult Index() on controller Main.Controllers.HomeController (Main).
2019-09-07 10:41:05.0707 | INFO | ajevans | Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker | Executing action method Main.Controllers.HomeController.Index (Main) - Validation state: Valid
2019-09-07 10:41:05.0707 | INFO | ajevans | Main.Controllers.HomeController | Home page
2019-09-07 10:41:05.0779 | INFO | ajevans | Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker | Executed action method Main.Controllers.HomeController.Index (Main), returned result Microsoft.AspNetCore.Mvc.ViewResult in 5.3381ms.
2019-09-07 10:41:05.0779 | INFO | ajevans | Microsoft.AspNetCore.Mvc.ViewFeatures.ViewResultExecutor | Executing ViewResult, running view Index.
2019-09-07 10:41:05.0779 | INFO | ajevans | Microsoft.AspNetCore.Mvc.ViewFeatures.ViewResultExecutor | Executed ViewResult - view Index executed in 1.978ms.
2019-09-07 10:41:05.0779 | INFO | ajevans | Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker | Executed action Main.Controllers.HomeController.Index (Main) in 8.4146ms
2019-09-07 10:41:05.0779 | INFO | ajevans | Microsoft.AspNetCore.Routing.EndpointMiddleware | Executed endpoint 'Main.Controllers.HomeController.Index (Main)'
2019-09-07 10:41:05.0779 | INFO | | Microsoft.AspNetCore.Hosting.Internal.WebHost | Request finished in 10.42ms 200 text/html; charset=utf-8
2019-09-07 10:41:05.1352 | INFO | | Microsoft.AspNetCore.Hosting.Internal.WebHost | Request starting HTTP/1.1 GET https://localhost:5001/favicon.ico
2019-09-07 10:41:05.1679 | INFO | | Microsoft.AspNetCore.StaticFiles.StaticFileMiddleware | Sending file. Request path: '/favicon.ico'. Physical path: 'D:\AzureDevOps\ISA-Prototyping\Main\wwwroot\favicon.ico'
2019-09-07 10:41:05.1685 | INFO | | Microsoft.AspNetCore.Hosting.Internal.WebHost | Request finished in 33.3657ms 200 image/x-icon

You’ll notice that not all the log messages have a username value. Why is that?

The reason is that those log messages come from middleware that occurs earlier in the pipeline than our middleware that assigns the username, so those log events won’t contain that property.

In conclusion…

So, you’ve seen here how to use the NLog MappedDiagnosticsLogicalContext class and ASP.NET Core middleware to add extra information to all the log messages for each request. Don’t forget that you can add as much information to the log as you need, for example you could pull some user session state value out of the Session and add that too.
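
For example, here’s a sketch of pulling an extra value out of the session in the same middleware (assuming session state is configured in your app; the "tenant" key is purely illustrative):

app.Use(async (ctxt, next) =>
{
    var userName = ctxt.User?.Identity?.Name;
    var tenant = ctxt.Session.GetString("tenant");

    using (MappedDiagnosticsLogicalContext.SetScoped("userName", userName))
    using (MappedDiagnosticsLogicalContext.SetScoped("tenant", tenant))
    {
        await next();
    }
});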

Categories
architecture c#

Managing Big Enterprise Applications in the .NET Ecosystem

I’m going to spend a few minutes here discussing some advice for designing/maintaining large enterprise-grade .NET applications, particularly ones that you sell to others, rather than in-house creations.

Disclaimer: I work largely with big applications used by enterprise customers. I imagine a lot of people reading this do as well, but plenty of people may disagree with some of my thoughts/suggestions. What follows is just based on my experience of designing and deploying user-driven ASP.NET applications.

A Brief Defense of (Deployment) Monoliths

Microservices are all the rage right now, and they are very cool, I will not deny that; small blocks of easy-to-maintain logic that all build, deploy and start quickly are brilliant. They’re great when you are deploying your own software, either onto your own premises or in the cloud; but what if your software has to be deployed onto someone else’s environment, by the owner of that environment?

What if they don’t use containers, or even use virtualisation?
What if they have no DevOps pipeline at all, and everything must be done manually?
What if those global customers have disparate regulatory and internal governance concerns that govern how and where data is stored, and how your application is managed?

In these situations, deployment simplicity is one of the most important considerations we have, and microservices deployment is by no means simple.

What I need is to keep the number of deployable components to a minimum. My goal is a one-click installer, followed by minimal configuration.

I asked a panel at a recent Microsoft Azure conference what solutions/plans they had for taking a complex microservices architecture and deploying it in someone else’s infrastructure as a simple-to-install component. If your customers use Azure as well, then you might be in luck in the near future, but other than that I didn’t get any answers that gave me hope for distributable microservice packages.

Managing Monoliths

In the modern development ecosystem, some people think ‘monolith’ is a dirty word. They’re seen as inevitable blobs of spaghetti code, horrible bloat and painful development experiences. But that doesn’t have to be true.

I’m going to write up a couple of blog posts that go into specific tips for maintaining enterprise ASP.NET monoliths, but I’ll start with some general advice.

All of the following applies to ASP.NET applications on the full .NET Framework, and .NET Core (soon to be known as .NET 5).

Make it Modular, Make it Patchable

The concept of a ‘Modular Monolith’ is not new. If you don’t break your application into multiple libraries (i.e. DLLs), you’re going to get into the world of spaghetti code so fast it will make your source control repository collapse in on itself.

I find that circular reference prevention actually ends up helping to enforce good design patterns, which you do not get if everything is in one big project.

Even if all your code is super tidy, if you’re distributing your software to enterprise customers, at some point you are going to need to patch something, because big customers just don’t upgrade to your latest version very often (once a decade is not that unusual). They certainly aren’t going to just upgrade to the latest build on trunk/master when they find a bug that needs fixing.

If you need to reissue the entire application to patch something, your customer’s internal test teams are going to cry foul, because they can’t predict the impact of your changes, so they’ll say they need to retest the whole thing. They definitely won’t go on trust when you say that all your automated tests pass.

So, to that end, do not build an ASP.NET (v4 or Core) web application that sits in one project (despite what most intro tutorials start off telling you to do). I don’t care what size it is, break it up.

You can add your own Assembly Loading startup process if you need to. The .NET loaders do a great job of loading your references for you, but I find you end up needing a bit more control than you get from the default behaviour. For example, you can explicitly load the libraries of your application based on some manifest file (helpful to control patched DLL versions).
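
As a rough sketch of that manifest-driven approach (the manifest file name and layout here are purely illustrative):

// Load each assembly listed in a simple manifest file, so the set of module
// DLLs (and therefore the patched versions in use) stays under our control.
var appRoot = AppDomain.CurrentDomain.BaseDirectory;

foreach (var entry in File.ReadAllLines(Path.Combine(appRoot, "modules.manifest")))
{
    var assemblyFile = entry.Trim();

    if (assemblyFile.Length == 0)
    {
        continue;
    }

    Assembly.LoadFrom(Path.Combine(appRoot, "modules", assemblyFile));
}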

Micro-kernels are your friend

If you can, then build your application using a micro-kernel architecture. By micro-kernel, I mean that there should be a central core of your application that provides base technical support features (data access, logging, dependency injection, etc) but adds no actual functionality to your application.

Once you’ve got that, you can:

  • Update (and patch) blocks of functionality in your application easily. These change much more often than your core.
  • Create customer-specific features (which happens all the time) without polluting your general application code.
  • Develop and test your functionality blocks in isolation.
  • Scale-out your development to multiple teams by giving them different blocks of functionality to work on.

Does that sound familiar? A lot of those advantages are shared with developing microservices; small blocks of functionality with a specific problem domain, that can be developed in isolation.

In terms of deployment we’ve still got one deployment package; it’s your CI system that should bring the Core and Functionality components together into one installer or other package, based on a list of required components for a given customer or branch.

I will say that defining a micro-kernel architecture is very hard to do properly, especially if you have to add it later on, to an existing application architecture.

Pro tip – define your own internal NuGet packages for your Core components, so they can be distributed easily; you can then easily ‘release’ new Core versions to other teams.

If you output NuGet packages from your CI system, you can even have some teams that need Core functionality in development working off an ‘alpha’ build of Core.

Enforce Layer Separation in your APIs
(or ‘if you use a data context in an MVC controller the compiler will slap you’)

Just because everything may be running in one process doesn’t mean you shouldn’t maintain strict separation of layers.

At a minimum, you should define a Business layer that is allowed to access your database, and a UI/Web Service layer, that is not.

The business layer should never consume a UI service, and the UI layer should never directly access/modify data.

Clients of your application should only ever see that UI or Web Service layer.

Escalating terrors in software design.

You could enforce all of this through code reviews, but I find things can still slip through the gaps, so I like to make the API layout in my Core do the enforcement work for me.

I find a good way to do this in a big .NET application (micro-kernel or otherwise) is to:

  • Define clear base classes that support functionality in each layer.
    For example, create a MyAppBusiness class in your business layer, that all business services must derive from. Similarly, define a MyAppController class that all MVC controllers will derive from (which in turn derives from the normal Controller class).
  • In those classes, expose protected methods to access core services that each layer needs. So your base MyAppBusiness class can expose data access to derived classes, and your MyAppController class can provide localisation/view-rendering support.
  • In your start-up procedure (preferably when you register your Dependency Injection services, if you use it, which you should), only register valid services, that derive from the right base class. Enforce by namespace/assembly if necessary. Throw exceptions if someone has got it wrong.

Where possible, developer mistakes should be detectable/preventable in code. Bake it into your APIs and you can make people follow standards because they can’t do anything if they don’t.
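
As a rough sketch of that registration-time check (all of the names here are illustrative, and the exact mechanics will depend on your DI container):

public static void RegisterBusinessServices(IServiceCollection services, Assembly businessAssembly)
{
    var candidateTypes = businessAssembly.GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract && t.Name.EndsWith("Service"));

    foreach (var type in candidateTypes)
    {
        // Enforce the layering rule: anything registered as a business service
        // must derive from the MyAppBusiness base class.
        if (!typeof(MyAppBusiness).IsAssignableFrom(type))
        {
            throw new InvalidOperationException(
                $"{type.FullName} looks like a business service, but does not derive from MyAppBusiness.");
        }

        services.AddScoped(type);
    }
}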

Next Up

In future posts on these sort of topics, I’ll talk about:

  • Tips for working with Entity Framework in applications with a complex database
  • Automated testing of big applications
  • Using PostSharp to verify developer patterns at compile time

..and any other topics that spring to mind.

Categories
c#

Easily loading lots of data in parallel over HTTP, using Dataflow in .NET Core

I recently had a requirement to load a large amount of data into an application, as fast as possible.

The data in question was about 100,000 transactions, stored line-by-line in a file, which needed to be sent over HTTP to a web application that would process them and load them into a database.

This is actually pretty easy in .NET, and super efficient using async/await:

async static Task Main(string[] args)
{
    var httpClient = new HttpClient();
    httpClient.BaseAddress = new Uri("https://myserver");

    using (var fileSource = new StreamReader(File.OpenRead(@"C:\Data\Sources\myfile.csv")))
    {
        await StreamData(fileSource, httpClient, "/api/send");
    }
}

private static async Task StreamData(StreamReader fileSource, HttpClient httpClient, string path)
{
    string line;

    // Read from the file until it's empty
    while ((line = await fileSource.ReadLineAsync()) != null)
    {
        // Convert a line of data into JSON compatible with the API
        var jsonMsg = GetDataJson(line);

        // Send it to the server
        await httpClient.PostAsync(path, new StringContent(jsonMsg, Encoding.UTF8, "application/json"));
    }
}

Run that through, and I get a time of 133 seconds; this isn’t too bad, right? Around 750 records per second.

But I feel like I can definitely make this better. For one thing, my environment doesn’t exactly look like the diagram above. It’s a scaled production environment, so it looks more like this:

I’ve got lots of resources that I’m not using right now, because I’m only sending one request at a time, so what I want to do is start loading the data in parallel.

Let’s look at a convenient way of doing this, using the System.Threading.Tasks.Dataflow package, which is available for .NET Framework 4.5+ and .NET Core.

The Dataflow components provide various ways of doing asynchronous processing, but here I’m going to use the ActionBlock, which allows me to post messages that are subsequently processed by a Task, in a callback. More importantly, it lets me process messages in parallel.

Let’s look at the code for my new StreamDataInParallel method:

private static async Task StreamDataInParallel(StreamReader fileSource, HttpClient httpClient, string path, int maxParallel)
{
    var block = new ActionBlock<string>(
        async json =>
        {
            await httpClient.PostAsync(path, new StringContent(json, Encoding.UTF8, "application/json"));
        }, new ExecutionDataflowBlockOptions
        {
            // Tells the action block how many we want to run at once.
            MaxDegreeOfParallelism = maxParallel,

            // 'Buffer' the same number of lines as there are parallel requests.
            BoundedCapacity = maxParallel
        });

    string line;

    while ((line = await fileSource.ReadLineAsync()) != null)
    {
        // This will not continue until there is space in the buffer.
        await block.SendAsync(GetDataJson(line));
    }

    // Tell the block we're done sending, then wait for any in-flight
    // requests to finish before returning.
    block.Complete();
    await block.Completion;
}

The great thing about Dataflow is that in only around 20 lines of code, I’ve got parallel processing of data, pushing HTTP requests to a server at a rate of my choice (controlled by the maxParallel parameter).

Also, the combination of the SendAsync method and a BoundedCapacity means I’m only reading from my file when there are slots available in the buffer, so my memory consumption stays low.

I’ve run this a few times, increasing the number of parallel requests each time, and the results are below:

Sadly, I wasn’t able to run the benchmarking tests on the production environment (for what I hope are obvious reasons), so I’m running all this locally; the number of parallel requests I can scale to is way higher in production, but it’s all just a factor of total available cores and database server performance.

Value of maxParallel | Average Records/Second
1                    | 750
2                    | 1293
3                    | 1785
4                    | 2150
5                    | 2500
6                    | 2777
7                    | 2941
8                    | 3125

With 8 parallel requests, we get over 3000 records/second, with a time of 32 seconds to load our 100,000 records.

You’ll notice that the speed does start to plateau (or at least I get diminishing returns); this will happen when we start to hit database contention (typically the main throttling factor, depending on your workload).

I’d suggest that you choose a sensible limit for how many requests you have going so you don’t accidentally denial-of-service your environment; we’ve got to assume that there’s other stuff going on at the same time.

Anyway, in conclusion, Dataflow has got loads of applications, this is just one of them that I took advantage of for my problem. So that’s it, go forth and load data faster!

Categories
arduino c#

Displaying Real-time Sensor Data in the Browser with SignalR and ChartJS

In my previous posts on Modding My Rowing Machine, I wired up an Arduino to my rowing machine, and streamed the speed sensor data to an ASP.NET core application.

In this post, I’m going to show you how to take sensor and counter data, push it to a browser as it arrives, and display it in a real-time chart.

If you want to skip ahead, I’ve uploaded all the code for the Arduino and ASP.NET components to a github repo at https://github.com/alistairjevans/rower-mod.

I’m using Visual Studio 2019 with the ASP.NET Core 3.0 Preview for all the server-side components, but the latest stable release of ASP.NET Core will work just fine; I’m not using any of the new features.

Pushing Data to the Browser

So, you will probably have heard of SignalR, the ASP.NET technology that can be used to push data to the browser from the server, and generally establish a closer relationship between the two.

I’m going to use it to send data to the browser whenever new sensor data arrives, and also to let the browser request that the count be reset.

The overall component layout looks like this:

Setting up SignalR

This bit is pretty easy; first up, head over to the Startup.cs file in your ASP.NET app project, and in the ConfigureServices method, add SignalR:

public void ConfigureServices(IServiceCollection services)
{
    // Define a writer that saves my data to disk
    var folderPath = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                                  "rower");
    services.AddSingleton<ISampleWriter>(svc => new SampleWriter(folderPath, "samples"));

    // Keep my machine state as a singleton
    services.AddSingleton<IMachineState, MachineState>();

    services.AddControllersWithViews()
            .AddNewtonsoftJson();
    services.AddRazorPages();

    // Add signalr services
    services.AddSignalR();
}

Next, create a SignalR Hub. This is effectively the endpoint your clients will connect to, and will contain any methods a client needs to invoke on the server.

public class FeedHub : Hub
{
    private readonly IMachineState machineState;
    private readonly ISampleWriter sampleWriter;

    public FeedHub(IMachineState machineState, ISampleWriter sampleWriter)
    {
        this.machineState = machineState;
        this.sampleWriter = sampleWriter;
    }

    public void ResetCount()
    {
        // Reset the state, and start a new data file
        machineState.ZeroCount();
        sampleWriter.StartNewFile();
    }
}

SignalR Hubs are just classes that derive from the Hub class. I’ve got just the one method in mine at the moment, for resetting my counter.

Before that Hub will work, you need to register it in your Startup class’ Configure method:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    //
    // Omitted standard content for brevity...
    //

    app.UseSignalR(cfg => cfg.MapHub<FeedHub>("/feed"));
}

You’re also going to want to add the necessary SignalR javascript to your project. I did it using the “Manage Client-Side Libraries” feature in Visual Studio; you can find my entire libman.json file (which defines which libraries I’m using) on my github repo.

Sending Data to the Client

In the MVC Controller where the data arrives from the Arduino, I’m going to push the sensor data to all clients connected to the hub.

The way you access the clients of a hub from outside the hub (i.e. an MVC Controller) is by resolving an IHubContext<THubType>, and then accessing the Clients property.

public class DataController : Controller
{
    private readonly IMachineState machineState;
    private readonly ISampleWriter sampleWriter;
    private readonly IHubContext<FeedHub> feedHub;

    public DataController(IMachineState machineState, ISampleWriter sampleWriter, IHubContext<FeedHub> feedHub)
    {
        this.machineState = machineState;
        this.sampleWriter = sampleWriter;
        this.feedHub = feedHub;
    }

    [HttpPost]
    public async Task<ActionResult> ProvideReading(uint milliseconds, double speed, int count)
    {
        // Update our machine state.
        machineState.UpdateMachineState(milliseconds, speed, count);

        // Write the sample to file (our sample writer) and update all clients
        // Wait for them both to finish.
        await Task.WhenAll(
            sampleWriter.ProvideSample(machineState.LastSample, machineState.Speed, machineState.Count),
            feedHub.Clients.All.SendAsync("newData",
                machineState.LastSample.ToString("o"),
                machineState.Speed,
                machineState.Count)
        );

        return StatusCode(200);
    }
}

Pro tip:
Got multiple IO operations to do in a single request, that don’t depend on each other? Don’t just await one, then await the other; use Task.WhenAll, and the operations will run in parallel.

In my example above I’m writing to a file and to SignalR clients at the same time, and only continuing when both are done.

Browser

Ok, so we’ve got the set-up to push data to the browser, but no HTML just yet. I don’t actually need any MVC Controller functionality, so I’m just going to create a Razor Page, which still gives me a Razor template, but without having to write the controller behind it.

If I put an ‘Index.cshtml’ file under a new ‘Pages’ folder in my project, and put the following content in it, that becomes the landing page of my app:

@page
<html>
<head>
</head>
<body>
    <div class="container">
        <div class="lblSpeed text lbl">Speed:</div>
        <div class="valSpeed text" id="currentSpeed"><!-- speed goes here --></div>
        <div class="lblCount text lbl">Count:</div>
        <div class="valCount text" id="currentCount"><!-- stroke count goes here --></div>
        <div class="btnContainer">
            <button id="reset">Reset Count</button>
        </div>
        <div class="chartContainer">
            <!-- I'm going to render my chart in this canvas -->
            <canvas id="chartCanvas"></canvas>
        </div>
    </div>

    <script src="~/lib/signalr/dist/browser/signalr.js"></script>
    <script src="~/js/site.js"></script>
</body>
</html>

In my site.js file, I’m just going to open a connection to the SignalR hub and attach a callback for data being given to me:

"use strict";
// Define my connection (note the /feed address to specify the hub)
var connection = new signalR.HubConnectionBuilder().withUrl("/feed").build();
// Get the elements I need
var speedValue = document.getElementById("currentSpeed");
var countValue = document.getElementById("currentCount");
var resetButton = document.getElementById("reset");
window.onload = function () {
// Start the SignalR connection
connection.start().then(function () {
console.log("Connected");
}).catch(function (err) {
return console.error(err.toString());
});
resetButton.addEventListener("click", function () {
// When someone clicks the reset button, this
// will call the ResetCount method in my FeedHub.
connection.invoke("ResetCount");
});
};
// This callback is going to fire every time I get new data.
connection.on("newData", function (time, speed, count) {
speedValue.innerText = speed;
countValue.innerText = count;
});

That’s actually all we need to get data flowing down to the browser, and displaying the current speed and counter values!

I want something a little more visual though….

Displaying the Chart

I’m going to use the ChartJS library to render a chart, plus a handy plugin for ChartJS that helps with streaming live data and rendering it, the chartjs-plugin-streaming plugin.

First off, add the two libraries to your project (and your HTML file), plus MomentJS, which ChartJS requires to function.
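
With those added, the script includes end up looking something like this (the exact paths depend on where your client-side library manager puts the files, so treat these as illustrative); the order matters, because the streaming plugin needs ChartJS, which in turn needs MomentJS:

<script src="~/lib/moment.js/moment.min.js"></script>
<script src="~/lib/chart.js/Chart.min.js"></script>
<script src="~/lib/chartjs-plugin-streaming/chartjs-plugin-streaming.min.js"></script>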

Next, let’s set up our chart by defining its configuration and attaching it to the 2d context of the canvas object:

window.onload = function () {
    var ctx = document.getElementById('chartCanvas').getContext('2d');
    window.myChart = new Chart(ctx, {
        type: 'line',
        data: {
            datasets: [{
                label: 'Speed',
                data: []
            }]
        },
        options: {
            scales: {
                xAxes: [{
                    type: 'realtime',
                    delay: 0,
                    // 20 seconds of data
                    duration: 20000
                }],
                yAxes: [{
                    ticks: {
                        suggestedMin: 0,
                        suggestedMax: 50
                    }
                }]
            }
        }
    });

    // The other signalr setup is still here...
}

Finally, let’s make our chart display new sensor data as it arrives:

connection.on("newData", function (time, speed, count) {
// This subtract causes the data to be placed
// in the centre of the chart as it arrives,
// which I personally think looks better...
var dateValue = moment(time).subtract(5, 'seconds');
speedValue.innerText = speed;
countValue.innerText = count;
// append the new data to the existing chart data
myChart.data.datasets[0].data.push({
x: dateValue,
y: speed
});
// update chart datasets keeping the current animation
myChart.update({
preservation: true
});
});

With all that together, let’s see what we get!

Awesome, a real-time graph of my rowing!

As an aside, I used the excellent tool by @sarah_edo to generate a CSS grid really quickly, so thanks for that! You can find it at https://cssgrid-generator.netlify.com/

You can check out the entire solution, including all the code for the Arduino and the ASP.NET app, on the github repo at https://github.com/alistairjevans/rower-mod.

Next up for the rowing machine project, I want to put some form of gamification, achievements or progress tracking into the app, but I’m not sure exactly how it will look yet.

Categories
arduino c# networks

Streaming real-time sensor data to an ASP.NET Core app from an Arduino

In my previous posts on Modding my Rowing Machine, I got started with an Arduino, and started collecting speed sensor data. The goal of this post is to connect to the WiFi network and upload sensor data to a server application I’ve got running on my laptop in as close to real-time as I can make it.

Connecting to WiFi

My Arduino Uno WiFi Rev 2 board has got a built-in WiFi module; it was considerably easier than I expected to get everything connected.

I first needed to install the necessary library to support the board: the WiFiNINA library.
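
The easiest way is the Library Manager in the Arduino IDE; alternatively, if you prefer the command line and have the arduino-cli tool set up, it can install the library too:

arduino-cli lib install WiFiNINA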

Then you can just include the necessary header file and connect to the network:

#include <WiFiNINA.h>

#define NETSSID "MYNETWORK"
#define NETPASS "SECRETPASSWORD"

WiFiClient client;

void setup()
{
    Serial.begin(9600);

    // Start connecting...
    WiFi.begin(NETSSID, NETPASS);

    // Give it a moment...
    delay(5000);

    if(WiFi.status() == WL_CONNECTED)
    {
        Serial.println("Connected!");
    }
}

To be honest, that code probably isn’t going to cut it, because WiFi networks don’t work that nicely. You need a retry mechanism with timeouts to keep trying to connect. Let’s take a look at the full example:

#include <WiFiNINA.h>

#define NETSSID "MYNETWORK"
#define NETPASS "SECRETPASSWORD"
#define TOTAL_WAIT_TIME 60000 // 1 minute
#define ATTEMPT_TIME 5000     // 5 seconds between attempts

WiFiClient client;

void setup()
{
    Serial.begin(9600);

    unsigned long startTime = millis();
    unsigned long lastAttemptTime = 0;

    // Start from a known 'not connected' status.
    int wifiStatus = WL_IDLE_STATUS;

    // attempt to connect to Wifi network in a loop,
    // until we connect.
    while (wifiStatus != WL_CONNECTED)
    {
        unsigned long currentTime = millis();

        if(currentTime - startTime > TOTAL_WAIT_TIME)
        {
            // Exceeded the total timeout for trying to connect, so stop.
            Serial.println("Failed to connect");
            while(true);
        }
        else if(currentTime - lastAttemptTime > ATTEMPT_TIME)
        {
            // Exceeded our attempt delay, initiate again.
            Serial.println("Attempting Wifi Connection");
            lastAttemptTime = currentTime;
            wifiStatus = WiFi.begin(NETSSID, NETPASS);
        }
        else
        {
            // wait 500ms before we check the WiFi status.
            delay(500);
        }
    }

    Serial.println("Connected!");
}

The Server

To receive the data from the Arduino, I created a light-weight ASP.NET Core 3.0 web application with a single controller endpoint to handle incoming data, taking a timestamp and the speed:

public class DataController : Controller
{
    private readonly SampleWriter sampleWriter;

    public DataController(SampleWriter sampleWriter)
    {
        this.sampleWriter = sampleWriter;
    }

    [HttpPost]
    public async Task<ActionResult> ProvideReading(uint milliseconds, double speed)
    {
        // sampleWriter is just a singleton dependency with an open file stream,
        // writing each record to a CSV file as it arrives.
        await sampleWriter.ProvideSample(milliseconds, speed);
        return StatusCode(200);
    }
}

Then, in my Arduino, I put the following code in a method to send data to my application:

#define SERVER "MYSERVER"
#define SERVERPORT 5000

WiFiClient client;

void sendData(unsigned long timestamp, double speed)
{
    // Host and port
    if(client.connect(SERVER, SERVERPORT))
    {
        char body[64];

        // Clear the array to zeroes.
        memset(body, 0, 64);

        // Arduino sprintf does not support floats or doubles.
        sprintf(body, "milliseconds=%lu&speed=", timestamp);

        // Use the dtostrf to append the speed.
        dtostrf(speed, 2, 3, &body[strlen(body)]);

        int bodyLength = strlen(body);

        // Specify the endpoint
        client.println("POST /data/providereading HTTP/1.1");

        // Write Host: SERVER:SERVERPORT
        client.print("Host: ");
        client.print(SERVER);
        client.print(":");
        client.println(SERVERPORT);

        // Close the connection after the request
        client.println("Connection: close");

        // Write the amount of body data
        client.print("Content-Length: ");
        client.println(bodyLength);
        client.println("Content-Type: application/x-www-form-urlencoded");
        client.println();
        client.print(body);

        // Wait for the response
        delay(100);

        // Read the response (but we don't care what is in it)
        while(client.read() != -1);
    }
}

I just want to briefly mention one part of the above code, where I’m preparing body data to send.

// Arduino sprintf does not support floats or doubles.
sprintf(body, "milliseconds=%lu&speed=", timestamp);
// Use the dtostrf to append the speed.
dtostrf(speed, 2, 3, &body[strlen(body)]);

The Arduino libraries do not support the %f specifier (for a float) in the sprintf method, so I can’t just add the speed as an argument there. Instead, you have to use the dtostrf method to insert a double into the string, specifying the number of decimal places you want.

Also, if you specify %d (int) instead of %lu (unsigned long) for the timestamp, the sprintf method treats the value as a signed int and you get very strange numbers being sent through for the timestamp.

Once that was uploaded, I started getting requests through!

Performance

We now have HTTP requests from the Arduino to our ASP.NET Core app. But I’m not thrilled with the amount of time it takes to execute a single request.

If we take a look at the WireShark trace (I love WireShark), you can see that each request, from start to finish, is taking on the order of 100ms!

This is loads, and I can’t have my Arduino sitting there for that long.

ASP.NET Core Performance

You can see in the above trace that the web app handling the request is taking 20ms to return the response, which is a lot. I know that ASP.NET Core can do better than that.

It turns out this was actually down to the fact that I had console logging switched on. Because of the synchronisation that takes place when writing to the console, printing all that information-level data can add a lot of time to each request.

Once I turned the logging down from Information to Warning in my appsettings.json file, it got way better.
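
For reference, the relevant section of appsettings.json ends up looking something like this (a sketch; your logging categories may differ):

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft": "Warning"
    }
  }
}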

That’s better!

That actually gives us sub-millisecond response times from the server, which is awesome.

TCP Handshake Overhead

Annoyingly, each request is still taking up to 100ms from start of connection to the end. How come?

If you look at those WireShark traces, we spend a lot of time in the TCP handshaking process. Opening a TCP connection does generally come with lots of network overhead, and that call to client.connect(SERVER, SERVERPORT) in my code blocks until the TCP connection is open; I don’t want to sit there waiting for that every time I want to send a sample.

The simple solution to this is to keep the connection open between samples, so we can repeatedly send data on the same connection and only need to do the handshake once.

Let’s rework our previous sendData code on the Arduino to keep the connection open:

void sendData(unsigned long timestamp, double speed)
{
    char body[64];
    int success = 1;

    // If we're not connected, open the connection.
    if(!client.connected())
    {
        success = client.connect(SERVER, SERVERPORT);
    }

    if(success)
    {
        // Empty the buffer
        // (I still don't really care about the response)
        while(client.read() != -1);

        memset(body, 0, 64);
        sprintf(body, "milliseconds=%lu&speed=", timestamp);
        dtostrf(speed, 2, 3, &body[strlen(body)]);

        int bodyLength = strlen(body);

        client.println("POST /data/providereading HTTP/1.1");
        client.print("Host: ");
        client.print(SERVER);
        client.print(":");
        client.println(SERVERPORT);

        // This tells the server we want to leave the
        // connection open.
        client.println("Connection: keep-alive");

        client.print("Content-Length: ");
        client.println(bodyLength);
        client.println("Content-Type: application/x-www-form-urlencoded");
        client.println();
        client.print(body);
    }
}

In this version, we ask the server to leave the connection open after the request, and only open the connection if it is closed. I’m also not blocking waiting for a response.

This gives us way better behaviour, and we’re now down to about 40ms total:

There’s one more thing that I don’t love about this though…

TCP Packet Fragmentation

So, what’s left to look at?

TCP segments

I’ve got a packet preceding each of my POST requests that seems to hold things up by around 40ms. What’s going on here? Let’s look at the content of that packet:

Wireshark data view

What I can tell from this is that, rather than waiting for the rest of my HTTP request data, the Arduino isn’t buffering for long enough and just sends what it has after the first println call containing POST /data/providereading HTTP/1.1. This packet fragmentation slows everything down, because the Arduino has to wait for an ACK from the server before it continues.

I just wanted to point out that the software in the Arduino libraries doesn’t appear to be responsible for the fragmentation; it looks like all the TCP behaviour is handled by the hardware WiFi module, so that’s what is splitting my packets.

To stop this packet fragmentation, let’s adjust the sending code to prepare the entire request and send it all at once:

void sendData(unsigned long timestamp, double speed)
{
    int success = 1;
    char request[256];
    char body[64];

    if(!client.connected())
    {
        success = client.connect(SERVER, SERVERPORT);
    }

    if(success)
    {
        // Empty the buffer
        // (I still don't really care about the response)
        while(client.read() != -1);

        // Clear the request data
        memset(request, 0, 256);

        // Clear the body data
        memset(body, 0, 64);

        sprintf(body, "milliseconds=%lu&speed=", timestamp);
        dtostrf(speed, 2, 3, &body[strlen(body)]);

        char* currentPos = request;

        // I'm using sprintf for the fixed length strings here
        // to make it easier to read.
        currentPos += sprintf(currentPos, "POST /data/providereading HTTP/1.1\r\n");
        currentPos += sprintf(currentPos, "Host: %s:%d\r\n", SERVER, SERVERPORT);
        currentPos += sprintf(currentPos, "Connection: keep-alive\r\n");
        currentPos += sprintf(currentPos, "Content-Length: %d\r\n", strlen(body));
        currentPos += sprintf(currentPos, "Content-Type: application/x-www-form-urlencoded\r\n");
        currentPos += sprintf(currentPos, "\r\n");
        strcpy(currentPos, body);

        // Send the entire request
        client.print(request);

        // Force the wifi module to send the packet now
        // rather than buffering any more data.
        client.flush();
    }
}

Once uploaded, let’s look at the new WireShark trace:

No TCP Fragmentation

There we go! Sub-millisecond responses from the server, and precisely hitting my desired 50ms window between each sample send.

There are still ACKs going on, obviously, but they aren’t blocking packet issuing, which is the important thing.

Summary

It’s always good to look at the WireShark trace for your requests to see if you’re getting the performance you want, and don’t dismiss the overhead of opening a new TCP connection each time!

Next Steps

Next up in the ‘Modding my Rowing Machine’ series, I’ll be taking this speed data and generating a real-time graph in my browser, that updates continuously! Stay tuned…

Categories
c#

Value Tuples for passing Lists of Key-Value Pairs in C# 7

Prior to C# 7, I have many, many times found myself needing to pass a hard-coded list of key-value pairs to a method, and I’ve had to do it painfully:

void SomeMethod(List<KeyValuePair<string, string>> pairs)
{
    // Use the values
}

void Caller()
{
    // Yuck!
    SomeMethod(new List<KeyValuePair<string, string>>
    {
        new KeyValuePair<string, string>("key1", "val1"),
        new KeyValuePair<string, string>("key2", "val2")
    });
}

We can make this marginally better with a factory function:

// Forgive me for this function name
KeyValuePair<string, string> KVP(string key, string val)
{
    return new KeyValuePair<string, string>(key, val);
}

void Caller()
{
    // Still pretty bad
    SomeMethod(new List<KeyValuePair<string, string>> {
        KVP("key1", "val1"),
        KVP("key2", "val2")
    });
}

Then we can swap the list with an array to improve it a little more:

void SomeMethod(KeyValuePair<string, string>[] pairs)
{
}

void Caller()
{
    // Ok, so I hate myself a little less
    SomeMethod(new[]
    {
        KVP("key1", "val1"),
        KVP("key2", "val2")
    });
}

Finally, we can use ‘params’ on the method so we don’t need to declare an array on the caller:

void SomeMethod(params KeyValuePair<string, string>[] pairs)
{
}

void Caller()
{
    // params are great, but I still don't love it.
    SomeMethod(
        KVP("key1", "val1"),
        KVP("key2", "val2")
    );
}

Okay, so it’s not too bad, but we still have that factory function which I’m not a fan of.

Value Tuples

Briefly, Value Tuples let you do things like this:

(string value1, string value2) SomeFunc()
{
    return ("val1", "val2");
}

void CallerFunction()
{
    var result = SomeFunc();
    Console.WriteLine(result.value1); // val1
    Console.WriteLine(result.value2); // val2
}

You get a small value type containing your two values, which you can declare inline, and you can access the individual values in the result.

You don’t just have to use these in return types; you can use them anywhere a type definition could be used.
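
For example (purely illustrative):

// As a local variable:
(string key, string value) pair = ("key1", "val1");

// As a generic type argument:
var stats = new Dictionary<string, (int count, double total)>();
stats["rowing"] = (10, 42.5);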

I’ve generally been a little reticent to use this new-ish feature too heavily, because I’m worried about people using them where they should actually be defining types.

That being said…

Making Our Parameters Nicer

In C# 7, now we get to pass those key-value lists the nice way:

void SomeMethod(params (string key, string value)[] pairs)
{
}

void Caller()
{
    // Now that's more like it!
    SomeMethod(
        ("key1", "val1"),
        ("key2", "val2")
    );
}

Look at that! No factory function required; we can simply pass in the pairs, just as I’ve wanted to be able to do for years, and it’s super readable.

Just define the array type of your params argument as the tuple you want, and away you go!

I’m still a little wary of Value Tuples generally, but this is definitely a lovely use case for them.