This is the technical documentation for the Cabinet Office’s Central Digital Platform.

Introduction

The Central Digital Platform (CDP) is being developed to support the new procurement regime.

Overview

Context

The system context diagram below presents the Central Digital Platform system in the context of users and other systems.

Figure 1. System Context for Central Digital Platform
Figure 2. Legend

Software Architecture

Container view

The diagram below focuses on high level responsibilities and technology choices within the Central Digital Platform system.

Figure 3. Container View of the Central Digital Platform system
Figure 4. Legend

Development

This section covers development related topics.

Development environment

Requirements

The minimum set of tools required to work on the project includes:

  • An IDE of your choice (e.g. Visual Studio, Rider)

  • Docker

On Windows, it might be handy to set up Git Bash or WSL in order to run tools like Make.

Configuration

The application is mostly configured to start with a fresh repository checkout. The only missing configuration is secrets that are not safe to commit to the repository.

Never commit secrets to the repository.

Secrets are managed with the Secrets Manager and environment variables: the IDE relies on the Secrets Manager, while Docker relies on environment variables.
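
Both mechanisms feed the same ASP.NET Core configuration keys: the : separator used with user secrets corresponds to __ in environment variables. As a minimal sketch, assuming the default configuration providers:

var builder = WebApplication.CreateBuilder(args);

// "OneLogin:ClientId" resolves from user secrets (OneLogin:ClientId)
// or from the environment (OneLogin__ClientId) - both map to the same key.
var clientId = builder.Configuration["OneLogin:ClientId"];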

Environment variables for Docker are managed in the compose.override.yml file, which is not version controlled. Run the following command to create it with defaults:

make compose.override.yml
Organisation App

The OrganisationApp requires the following secrets / environment variables:

  • OneLogin:Authority / OneLogin__Authority

  • OneLogin:ClientId / OneLogin__ClientId

  • OneLogin:PrivateKey / OneLogin__PrivateKey

These can be set as secrets with the following dotnet commands:

dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:Authority "https://oidc.example.com"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:ClientId "client-id"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:PrivateKey "-----BEGIN RSA PRIVATE KEY-----SECRET KEY-----END RSA PRIVATE KEY-----"

For Docker, the following environment variables need to be updated in compose.override.yml for the organisation-app service:

  organisation-app:
    environment:
      OneLogin__Authority: "https://oidc.example.com"
      OneLogin__ClientId: "client-id"
      OneLogin__PrivateKey: "-----BEGIN RSA PRIVATE KEY-----SECRET KEY-----END RSA PRIVATE KEY-----"
Authority API

The Authority API depends on the following secrets / environment variables:

  • OneLogin:Authority / OneLogin__Authority

  • PrivateKey / PrivateKey

To set them in the secrets manager, run:

dotnet user-secrets set --project Services/CO.CDP.Organisation.Authority OneLogin:Authority "https://oidc.example.com"
dotnet user-secrets set --project Services/CO.CDP.Organisation.Authority PrivateKey "-----BEGIN RSA PRIVATE KEY----"
Note
The make generate-authority-keys command generates a private key that can be used with PrivateKey. Make sure to copy the contents of the file, not the path.

For Docker, update the authority service in compose.override.yml:

  authority:
    environment:
      PrivateKey: "-----BEGIN RSA PRIVATE KEY-----"
      OneLogin__Authority: "https://oidc.example.com"

Setting up the IDE

Import the project to your favourite IDE to build and run tests from there.

Figure 5. Running tests in an IDE

Alternatively, use the dotnet command or the following make targets to build and run tests:

make build
make test

Any dotnet tools used by the project are installed locally and will be restored by the above commands.

Setting up Docker

While local development is done within the IDE of your choice, a Docker Compose configuration is provided to conveniently start all (or some) services at once.

First, make sure you have configured the environment variables in compose.override.yml (see Configuration).

Next, build all the Docker containers with the build-docker Make target:

make build-docker

Finally, we can start all Docker services with:

make up

By default, service and application ports are mapped as follows:

Figure 6. All services started in Docker

Later, all services can be stopped and destroyed with:

make down

Make targets

There are a number of Make targets that provide shortcuts during development. Run make help to get an up-to-date list.

Table 1. Make targets

  Target                    Description
  help                      Shows available commands
  build                     Builds the solution
  test                      Runs all tests
  up                        Starts all the Docker containers
  down                      Stops and removes all Docker containers
  stop                      Stops all Docker containers
  ps                        Lists all running Docker containers
  db                        Starts only the database Docker container and runs migrations
  localstack                Starts the localstack Docker container to provide AWS services locally
  generate-authority-keys   Generates the private/public key pair for the authority service

Cookbooks

This section collects recipes that are useful during development.

Using Docker for development

During development, testing, or debugging it’s often useful to run some services with Docker and others with an IDE. Others prefer to run everything in the IDE and only infrastructure dependencies, like the database, with Docker. Both scenarios are supported.

Mixing services started on Docker and IDE

By default, Docker Compose will start all services defined in the Compose configuration.

Figure 7. All services started in Docker

Imagine we’d like to work on the Organisation App. Ideally, we’d work on it within our IDE, but continue running all the other services in Docker to use the application end-to-end.

First, let’s disable the organisation-app service in compose.override.yml by setting replicas to 0.

# ...
  organisation-app:
    deploy:
      replicas: 0
    # ...

This way, when we start the Docker services, the organisation-app won’t be started, since we expect it to be started from the IDE.

Next, point the gateway to where the Organisation App is running (outside of Docker):

# ...
  gateway:
    environment:
      CDP_ORGANISATION_APP_HOST: 'http://host.docker.internal:58090'
#      CDP_AUTHORITY_HOST: 'http://host.docker.internal:5050'
#      CDP_TENANT_HOST: 'http://host.docker.internal:58080'
#      CDP_ORGANISATION_HOST: 'http://host.docker.internal:58082'
#      CDP_PERSON_HOST: 'http://host.docker.internal:58084'
#      CDP_FORMS_HOST: 'http://host.docker.internal:58086'
#      CDP_DATA_SHARING_HOST: 'http://host.docker.internal:58088'
    # ...

We use host.docker.internal to point to the host machine from Docker containers.

Now, we can start all Docker services (except the Organisation App) with make up and run the Organisation App with the IDE (choose the http-for-docker launch profile).

Figure 8. Organisation App running in the IDE

We can pick and choose which services are run by the host or by Docker, so if we need to also run the organisation service in the IDE, we can follow the same pattern. First, edit compose.override.yml:

# ...
  gateway:
    environment:
      CDP_ORGANISATION_APP_HOST: 'http://host.docker.internal:58090'
#      CDP_AUTHORITY_HOST: 'http://host.docker.internal:5050'
#      CDP_TENANT_HOST: 'http://host.docker.internal:58080'
      CDP_ORGANISATION_HOST: 'http://host.docker.internal:58082'
#      CDP_PERSON_HOST: 'http://host.docker.internal:58084'
#      CDP_FORMS_HOST: 'http://host.docker.internal:58086'
#      CDP_DATA_SHARING_HOST: 'http://host.docker.internal:58088'
  organisation:
    deploy:
      replicas: 0
    # ...

Next, start selected services in the IDE, while Docker takes care of the other ones.

Figure 9. Organisation App and Organisation service running in the IDE
Starting everything in the IDE

If you prefer to run all the dotnet services in the IDE, you can disable them all in Docker by setting replicas to 0.

Alternatively, instead of setting replicas to 0, you can run only the db and migrations containers with:

make db

Use the http launch profile in your IDE to start each service. All services can be run together by using a multi-launch profile (which has to be created manually).

Figure 10. Multi-launch profile
IDE profiles
Figure 11. Recent launch profiles

Each service has two launch profiles pre-configured:

  • http-for-docker - the service will be proxied by the gateway service in Docker - good if we want to run most services in Docker and a few selected ones in the IDE.

  • http - the service will be called directly - good if we only want to start the db in Docker and run everything else in the IDE.

Figure 12. Launch profiles

Design Decisions

This is a log of important design decisions that were made during development.

The design decision log accomplishes the following goals:

  • Provide a structured approach to decision-making.

  • Explain to current and future developers why things are the way that they are.

  • Help to revise decisions when the situation changes.

1. Record architecture decisions

Date: 2024-03-04

Status

Accepted

Context

We need to record the architectural decisions made on this project so that future developers understand the context of our solutions.

Decision

We will use Architecture Decision Records, as described by Michael Nygard.

Consequences

All significant decisions will be discussed within the team and documented in the decision log including:

  • the date the decision was made

  • the status of the decision (Accepted, Draft, or Rejected)

  • the context of the decision

  • options that were considered

  • the decision made

  • the consequences of the decision

2. Create web API clients

Date: 2024-03-25

Status

Accepted

Context

We need to create clients for each of our web API services to be used internally (at least initially). Clients must be compliant with the OpenAPI contracts we expose, so that once we expose them externally we’re sure of their correctness.

Microsoft is planning to make OpenAPI support a first-class citizen in ASP.NET Core. Swashbuckle is scheduled to be dropped in .NET 9 (see https://github.com/dotnet/aspnetcore/issues/54599). NSwag is the most feature-rich library in the ecosystem, and its maintainer has already raised concerns that Microsoft will make NSwag redundant.

We have considered a number of options to move forward.

Option 1: Craft clients from scratch

Create a Client interface for each service and implement it with HttpClient. This will require writing a lot of code ourselves based on the OpenAPI contract and keeping it in sync with the changing contract.

Advantages:

  • "It’s just code", so it’s easy to change.

  • It’s more flexible compared to generating code.

Disadvantages:

  • Needs a process to keep the client in sync with the OpenAPI contract.

  • Needs more tests to guarantee the client’s compliance with the OpenAPI contract.

  • Requires effort and time.

Option 2: Generate clients based on OpenAPI contracts (Kiota)

Kiota is a command-line tool for generating API clients based on OpenAPI descriptions. It’s developed by Microsoft and has a nice programming API.

Here’s example code using the generated Kiota client:

var authProvider = new AnonymousAuthenticationProvider();
var httpClient = new HttpClient
{
    BaseAddress = new Uri("http://localhost:5182")
};
var adapter = new HttpClientRequestAdapter(authProvider, httpClient: httpClient);
var client = new TenantClient(adapter);

var tenant = await client.Tenants.PostAsync(new NewTenant
{
    Name = $"Bob {Guid.NewGuid()}",
    ContactInfo = new TenantContactInfo
    {
        Email = "bob@example.com",
        Phone = "07925344234"
    }
});

// Properties are nullable even though they're required in the contract
var tenantId = tenant?.Id ?? Guid.Empty;
var foundTenant = await client.Tenants[tenantId].GetAsync();

Assert.Equivalent(
    new Models.Tenant
    {
        Id = tenantId,
        Name = tenant!.Name,
        ContactInfo = new TenantContactInfo
        {
            Email = "bob@example.com",
            Phone = "07925344234"
        }
    },
    foundTenant
);

Advantages:

  • Generated clients are compliant with the OpenAPI contract.

  • Generated clients can be refreshed when the OpenAPI contract changes.

  • Saves development time.

  • The tool comes from Microsoft and has an opportunity to become the standard in the .NET ecosystem.

Disadvantages:

  • It doesn’t seem to support some parts of the OpenAPI spec yet.

  • Nullable properties make it awkward to use. This won’t even be considered for fixing until Kiota v3 (we’re at Kiota v1 now).

Option 3: Generate clients based on OpenAPI contracts (NSwag)

NSwag is the most feature-rich OpenAPI toolchain in the .NET ecosystem. It supports generating clients as well as OpenAPI contracts.

NSwag is actively maintained, but there’s a risk of it eventually becoming redundant given Microsoft’s plans.

Advantages:

  • Generated clients are compliant with the OpenAPI contract.

  • Generated clients can be refreshed when the OpenAPI contract changes.

  • Saves development time.

  • It’s well maintained.

  • It is mature.

  • It is feature-rich.

Disadvantages:

  • There’s a risk Microsoft will eventually make NSwag redundant.

Example code using the generated client:

var baseUrl = "http://localhost:5182";

ITenantClient client = new TenantClient(baseUrl, new HttpClient());

var tenant = await client.CreateTenantAsync(new NewTenant(
    name: $"Bob {Guid.NewGuid()}",
    contactInfo: new TenantContactInfo(
        email: "bob@example.com",
        phone: "07923234234"
    )
));

var foundTenant = await client.GetTenantAsync(tenant.Id);

Assert.Equal(
    new Tenant
    (
        id: tenant.Id,
        name: tenant.Name,
        contactInfo: new TenantContactInfo(
            email: "bob@example.com",
            phone: "07923234234"
        )
    ),
    foundTenant
);
Decision

We will use NSwag to generate API clients (option 3). The tool is feature-complete and should cause us the least headaches while also saving development time.

On the server side we should leverage OpenAPI.NET as much as possible (avoiding Swashbuckle- or NSwag-specific types).

Consequences

We will ensure that OpenAPI contracts are generated at build-time so that client libraries have access to them. Client libraries will generate the client based on the OpenAPI contract.
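
As an illustration of how a generated client might then be consumed internally, it could be registered with dependency injection along these lines (a hypothetical sketch; the client name is a placeholder and the base URL is taken from the example above, not from real configuration):

var builder = WebApplication.CreateBuilder(args);

// Hypothetical DI wiring for the NSwag-generated TenantClient.
builder.Services.AddHttpClient("TenantApi");
builder.Services.AddTransient<ITenantClient>(provider =>
    new TenantClient(
        "http://localhost:5182", // placeholder base URL
        provider.GetRequiredService<IHttpClientFactory>().CreateClient("TenantApi")));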

3. Applying database migrations

Date: 2024-04-03

Status

Accepted

Context

We need to evolve the database as we progress with development and, later on, with maintenance.

We have considered a number of options, including those suggested in the official Entity Framework Core documentation: https://learn.microsoft.com/en-us/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli

Option 1: DbContext.Database.Migrate() during startup

The application already has access to the database context, the connection, and migration files. That means it’s capable of executing the migrations during startup.

To achieve this, the following code needs to be added to the startup Program:

if (app.Configuration.GetValue<bool>("RunMigrationsOnStartup"))
{
    using (var scope = app.Services.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<TenantContext>();
        db.Database.Migrate();
    }
}

The full change is available in the following pull request: https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/44

Advantages:

  • Works with no additional effort.

Disadvantages:

  • Multiple instances of any given service will compete for making schema modifications.

  • It’s officially not recommended to use this method in production.

Running dotnet ef database update is similar and is also not recommended for production.

Option 2: Generated SQL scripts

SQL scripts generated from EF migrations are the most reliable option. An SQL script can be reviewed before being applied, and the changes are idempotent. It’s recommended that SQL scripts are version controlled.

To generate a migration:

dotnet ef migrations script \
  -p Services/CO.CDP.Tenant.Persistence \
  -s Services/CO.CDP.Tenant.WebApi \
  --idempotent \
  -o Services/CO.CDP.Tenant.Persistence/Migrations/SQL/$(date +%Y-%m-%d_%H%M)_CreateTenants.sql

To apply it in our local development setup:

docker compose exec \
  -it db psql -U cdp_user \
  -f - cdp < Services/CO.CDP.Tenant.Persistence/Migrations/SQL/2024-03-23_2303_CreateTenants.sql

Example implementation can be found in the following pull request: https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/34/files

Advantages:

  • Reliability.

  • Simplicity.

Disadvantages:

  • Requires additional tools to execute the migration (a database client, like psql).

  • Since there’s no way to know which migrations were already executed, all the migrations need to be re-applied. It’s mostly fine since they’re idempotent.

Option 3: Generated SQL scripts (containerised)

One limitation of option 2 is that a database client must be available during the deployment. This could be addressed by building a dedicated migrations container to be run during the deployment.

Advantages:

  • Reliability.

Disadvantages:

  • Since there’s no way to know which migrations were already executed, all the migrations need to be re-applied. It’s mostly fine since they’re idempotent.

Option 4: Migration bundles

Migration bundles are single-file executables that can be used to apply migrations to a database. They’re meant to address the shortcomings of SQL scripts and command-line tools.

Migration bundles can be self-contained, which means they won’t depend on anything else (like the dotnet runtime).

Here’s an example of how to generate a bundle:

dotnet ef migrations bundle \
  -p Services/CO.CDP.Tenant.Persistence \
  -s Services/CO.CDP.Tenant.WebApi \
  --self-contained

The bundle is self-contained, and all it needs to apply migrations is the connection string:

./efbundle --connection 'Server=localhost;Database=MYDB;Username=MYUSER;Password=MYPASS;'

Advantages:

  • Self contained

Disadvantages:

  • It’s not very well documented how bundles work and how they’re different to migration command-line tools.

  • It’s a manual step that needs to be run in development.

Option 5: Migration bundles (containerised)

Similarly to containerising SQL scripts, we can containerise bundles and ship them with the application or in a dedicated container.

An implementation of this approach is provided in this pull request: https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/51

Advantages:

  • Self contained.

  • Container version of migrations matches the application’s version.

  • The container can be run automatically when the development database starts.

Disadvantages:

  • It’s not very well documented how bundles work and how they’re different to migration command-line tools.

Option 6: Use a specialized migrations library

Finally, a dedicated migrations library can be used to manage and execute migrations. We have not explored this option much.

Decision

We’re going to go with option 5 - Migration bundles (containerised). Bundles are one of the recommended solutions. Putting bundles inside a container gives us several options for executing them during the deployment.

Option 5 is slightly easier to implement than option 3, while also being less flexible. Option 3 allows for tweaking the generated SQL scripts. We might revisit the decision in the future if it turns out we need this flexibility.

Consequences

We’re going to merge https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/51. The pull request introduces a migration container that’s executed with docker compose in the local development environment. The same container will be executed during the deployment.

4. Error Handling and Validation

Date: 2024-04-04

Status

Draft

Context

We need to establish a standardised approach to error handling and validation.

Error handling
Option 1: Problem details response

Utilise the Results.Problem method to generate standardized HTTP problem details responses.
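
For example, a minimal API endpoint could return a problem details response roughly like this (a sketch; the route and messages are illustrative, not taken from the codebase):

app.MapGet("/tenants/{id}", (Guid id) =>
{
    // Illustrative only: respond with an RFC 7807 problem details payload.
    return Results.Problem(
        title: "Tenant not found",
        detail: $"No tenant exists with id {id}.",
        statusCode: StatusCodes.Status404NotFound);
});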

Advantages:

  • Preferred Approach: Results.Problem is Microsoft’s recommendation for handling errors.

  • Standards: This approach adheres to the RFC 7807 standard, providing a uniform format for error messages that include details such as status code, error type, and a readable description.

  • Rich Error Information: Offers a structured format that can include extensive information about the nature of the error, which can be extremely helpful for debugging by clients.

Disadvantages:

  • Heavy for Simple projects: Might be overly complex for simple error messages where extensive details are unnecessary or out of scope.

  • Potential Information Leakage: If not carefully managed, it could inadvertently expose sensitive information about the API’s internal workings.

Option 2: Validation Action Filters

Validation Action Filters intercept the request before it reaches the controller action, validating input data and returning appropriate error responses.

Advantages:

  • Pre-Controller Validation: Errors can be caught and handled before reaching the business logic.

  • Reuse: Logic can be encapsulated and reused across multiple controllers or actions, promoting code reuse.

Disadvantages:

  • Limited Scope: Primarily useful for handling model validation errors. Other types of errors (e.g., those thrown from within service layers or unhandled exceptions) would not be caught by these filters.

  • Coupling: Ties validation logic to the MVC framework, which may not be ideal if some parts of your application require different handling or operate outside the typical request pipeline.

Option 3: Custom Exception Middleware

Implement custom exception middleware that catches exceptions globally across the API, allowing for error handling.
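
A rough sketch of such middleware combined with a problem details response (illustrative only; a real implementation would map specific exception types):

public class ExceptionHandlingMiddleware
{
    private readonly RequestDelegate _next;

    public ExceptionHandlingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception)
        {
            // Illustrative only: translate any unhandled exception into an RFC 7807 response.
            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            await context.Response.WriteAsJsonAsync(new ProblemDetails
            {
                Status = StatusCodes.Status500InternalServerError,
                Title = "An unexpected error occurred."
            });
        }
    }
}

// Registered in Program.cs with: app.UseMiddleware<ExceptionHandlingMiddleware>();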

Advantages:

  • Central Error Handling: Provides a single point within your application to handle exceptions, simplifying error handling and reducing code duplication.

  • Flexibility: Allows you to create custom error responses based on exception types or other criteria.

  • Consistency: Can be combined with Results.Problem to ensure that all errors, handled or unhandled, return a consistent response format.

Disadvantages:

  • Complexity: Requires a more detailed setup and understanding of the middleware pipeline in ASP.NET Core.

  • Maintenance: As the application grows, the middleware might need continuous updates to handle new changes.

  • Debugging: Tracing an error back to its original source can be difficult because errors are handled globally and may be stripped of their context.

Validation
Option 1: Data Annotations

Use attributes from the System.ComponentModel.DataAnnotations namespace to declare validation rules directly in your model classes.

This method is deeply integrated with ASP.NET Core and is supported out of the box, requiring minimal setup for basic validation scenarios.
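
For instance (a hypothetical input model, not one from the codebase):

using System.ComponentModel.DataAnnotations;

public class ContactDetailsModel
{
    [Required]
    [StringLength(100)]
    public string Name { get; set; } = "";

    [Required]
    [EmailAddress]
    public string Email { get; set; } = "";
}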

Advantages:

  • Ease of Implementation: Simple to apply with minimal code, making it an efficient option for straightforward validation requirements.

  • Automatic Enforcement: Automatically enforced by ASP.NET Core during model binding.

  • Framework Integration: Seamlessly integrates with ASP.NET Core, providing built-in error handling through the standard model validation architecture.

Disadvantages:

  • Cluttered Model Classes: Adding validation attributes directly to model properties can lead to cluttered and bloated model classes.

  • Limited Flexibility: Can be difficult and messy to implement more complex validation, and cross-field dependencies cannot be handled without additional overhead.

Option 2: Fluent Validation

A third-party library that uses a fluent interface to build validation rules separately from your models.

Validation rules are defined separately from model classes, offering a powerful alternative to data annotations with support for complex validation scenarios.
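
A minimal sketch of a FluentValidation validator, reusing the hypothetical ContactDetailsModel from the data annotations example above:

using FluentValidation;

public class ContactDetailsValidator : AbstractValidator<ContactDetailsModel>
{
    public ContactDetailsValidator()
    {
        RuleFor(x => x.Name).NotEmpty().MaximumLength(100);
        RuleFor(x => x.Email).NotEmpty().EmailAddress();
    }
}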

Advantages:

  • Greater Flexibility and Control: Provides extensive control over validation logic, allowing for complex rules and dependencies.

  • Separation of Concerns: Keeps validation logic decoupled from model definitions, aiding in the maintenance and scalability of the application.

  • Consistency with Unit Testing: We are using FluentAssertions in our unit tests, which complements FluentValidation.

Disadvantages:

  • Additional Dependency: Incorporates an external package into the project, introducing potential issues with third-party dependencies and updates.

  • External Configuration: Validation rules are maintained outside the model, potentially complicating the development process.

Option 3: Input Models with Built-in Validation

Use dedicated input models (DTOs) that include validation logic, separating it from your domain models.

This approach utilises either data annotations or custom validation methods embedded within the DTOs themselves, providing a clean separation between the data used for input and the internal domain models.
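
A brief, hypothetical sketch of the separation (all names are illustrative):

using System.ComponentModel.DataAnnotations;

// API-facing input model that carries its own validation rules.
public class RegisterPersonRequest
{
    [Required]
    public string FirstName { get; set; } = "";

    [Required]
    public string LastName { get; set; } = "";
}

// Internal domain model, free of API validation concerns.
public record Person(string FirstName, string LastName);

// Explicit mapping at the API boundary.
public static class RegisterPersonMapper
{
    public static Person ToDomain(RegisterPersonRequest request) =>
        new(request.FirstName, request.LastName);
}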

Advantages:

  • Clear Separation of Concerns: Keeps API input validation separate from business logic, reducing the complexity of domain models.

  • Specialised Validation Logic: Allows for validation rules that are specific to the API layer, which can differ from how data validation is handled internally.

  • Flexibility: More control over how data is validated and transformed before being passed to the domain model or business logic.

Disadvantages:

  • Code Duplication: May lead to duplicated code or additional boilerplate, potentially increasing the overhead of maintaining similar models.

  • Additional Mapping Required: Requires mapping between DTOs and domain models, which can introduce additional complexity and overhead.

Option 4: Custom Validation Attributes

Create custom attributes by extending the ValidationAttribute class to define specific validation rules that are not provided out of the box by data annotations.
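
For example, a hypothetical attribute validating a UK phone number prefix (illustrative only; not a rule taken from the project):

using System.ComponentModel.DataAnnotations;

public class UkPhoneNumberAttribute : ValidationAttribute
{
    protected override ValidationResult? IsValid(object? value, ValidationContext validationContext)
    {
        // Leave presence checks to [Required]; only validate non-empty string values.
        if (value is not string phone || string.IsNullOrEmpty(phone))
        {
            return ValidationResult.Success;
        }

        return phone.StartsWith("07") || phone.StartsWith("+44")
            ? ValidationResult.Success
            : new ValidationResult("Enter a UK phone number.");
    }
}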

Advantages:

  • Highly Customizable: Enables the creation of validation rules that are perfectly tailored to specific business requirements.

  • Reusable: Custom attributes can be reused across different models and projects, promoting consistency.

  • Integrates with MVC: Works seamlessly with ASP.NET Core’s model binding and validation frameworks.

Disadvantages:

  • Increased Complexity: Requires a deeper understanding of attribute-based programming and reflection.

  • Development Overhead: More time-consuming to develop and test compared to using standard data annotations.

Decisions
Validation Decision

We will adopt FluentValidation for validation, enhancing both maintainability and scalability (related to Validation Option 2). This approach allows for pre-controller validation, catching errors early and enabling us to reuse validation logic across multiple controllers, simplifying error handling and promoting code reuse.

Error Handling Decision

We have decided to implement Custom Exception Middleware as our error handling strategy (related to Error Handling Option 3). This approach centralises error handling at a single point within the application, simplifying the process and reducing code duplication. It also offers the flexibility to create custom error responses based on different exception types or criteria, and ensures consistency by combining with the Results.Problem method to standardise response formats across the API, according to RFC 7807.

Consequences

These updates require modifications to existing validation logic and some refactoring, since work has already commenced on creating APIs.

5. Database naming conventions

Date: 2024-06-21

Status

Accepted

Context

PostgreSQL clients are forced to quote table and column names in SQL queries, if they’re not lower-cased:

SELECT "Name" FROM "Organisations";

That’s because they’re lowercased by default unless quoted.

This is inconvenient when writing queries by hand.

Decision

We will follow a lowercase naming strategy with words separated by underscores (snake_case).

SELECT name FROM organisations;
Consequences

We will switch to the snake case naming convention and migrate the existing schema to follow it as well.
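
One possible way to apply this convention in EF Core is the third-party EFCore.NamingConventions package; the sketch below illustrates the approach rather than the exact setup used in this project:

// Sketch only: snake_case table/column names via EFCore.NamingConventions.
services.AddDbContext<TenantContext>(options =>
    options
        .UseNpgsql(connectionString) // connectionString is a placeholder
        .UseSnakeCaseNamingConvention());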

6. Acceptance tests

Date: 2024-05-28

Status

Draft

Context

We heavily rely on developer tests for small units of isolated code. For various reasons, we have not developed an acceptance test suite so far. As the project and the team mature, acceptance tests would be helpful to clearly state business requirements and make them self-verifiable as tests.

These tests would be run end-to-end as much as possible and therefore verify user journeys better.

We should still be conscious of the costs that such tests bring and attempt to make them as lightweight as possible. The frontend is not JavaScript-heavy, and there’s no need to run these tests in a browser most of the time.

Option 1: xUnit

Use a library like Alba or Playwright directly in xUnit tests to exercise the Frontend application. Start all the application services with the dotnet web application factory, and any external dependencies (like the database) with Testcontainers.

Alba integrates very well with dotnet core applications out of the box. It’s possible to run and customise the application in-memory, without a need to execute tests in a browser.

Playwright is a very well established tool. With a little bit of customisation it should also be possible to run it against the application running in-memory, but this is not available out of the box.
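
For illustration, an in-memory Alba scenario in an xUnit test could look roughly like this (a sketch; it assumes the application’s Program class is accessible to the test project, and the route is a placeholder):

using Alba;
using Xunit;

public class OrganisationAppTests
{
    [Fact]
    public async Task Shows_the_start_page()
    {
        // Boot the application in-memory.
        await using var host = await AlbaHost.For<Program>();

        await host.Scenario(_ =>
        {
            _.Get.Url("/");
            _.StatusCodeShouldBeOk();
        });
    }
}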

Advantages:

  • We continue using the technology we’re familiar with

  • It’s fast to write tests

Disadvantages:

  • Tests focus on "How" rather than "What", i.e. tests describe visiting pages and filling out forms, rather than tasks that users try to accomplish (e.g. providing company information). This could be improved by introducing a layer of abstraction on top of user actions.

  • Tests are only readable by developers and can only be maintained by developers.

Option 2: SpecFlow

SpecFlow is the industry standard for acceptance tests in dotnet.

Its great advantage is the separation of "What" from "How". Tests can be expressed in a business language:

Feature: Confirming Personal Name on Supplier Information Portal

    As a representative of an organisation
    I want to confirm my first and last name
    So that my personal identity is correctly associated with the organisation's profile.

    Scenario: Successfully confirming a user's name
        Given the user is on the 'Confirm your name' page
        When the user enters their first and last name into the input fields
        And clicks the 'Continue' button
        Then the user should be taken to the next step in the registration process

It’s a structured language, called Gherkin.

Scenarios like the one above are then translated to actions (navigating to a page, making an API call, etc.).

Advantages:

  • Human-readable tests, a separation of "How" from "What".

  • Tests could be written by QAs (to some extent).

  • Nice reports available out of the box.

Disadvantages:

  • SpecFlow requires an IDE plugin that is not kept up to date for some IDEs (like Rider).

  • Tests can be executed with xUnit, but the IDE plugin is required.

Option 3: Xunit.Gherkin.Quick

Xunit.Gherkin.Quick is a lightweight BDD framework, similar to SpecFlow. It also enables us to write tests as human-readable scenarios in Gherkin. The main difference from SpecFlow is that it’s built on top of xUnit.
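
As a rough sketch, the "Confirm your name" scenario above could be bound to step methods like this (the feature file path and step bodies are illustrative):

using Xunit.Gherkin.Quick;

[FeatureFile("./Features/ConfirmName.feature")]
public sealed class ConfirmingPersonalName : Feature
{
    [Given("the user is on the 'Confirm your name' page")]
    public void TheUserIsOnTheConfirmYourNamePage()
    {
        // Navigate to the page, e.g. via an in-memory test server.
    }

    [When("the user enters their first and last name into the input fields")]
    public void TheUserEntersTheirName()
    {
        // Fill in the form fields.
    }

    [And("clicks the 'Continue' button")]
    public void ClicksTheContinueButton()
    {
        // Submit the form.
    }

    [Then("the user should be taken to the next step in the registration process")]
    public void TheUserIsTakenToTheNextStep()
    {
        // Assert on the response.
    }
}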

Advantages:

  • Human-readable tests, a separation of "How" from "What".

  • Tests could be written by QAs (to some extent).

  • xUnit integration. All the tools we use in xUnit can be used here (like testcontainers).

Disadvantages:

  • It’s less popular than SpecFlow.

Decision

TBD