This is the technical documentation for the Cabinet Office’s Central Digital Platform.

Introduction

The Central Digital Platform (CDP) is designed to support the new procurement regime.

This documentation focuses on SIRSI (Supplier Information Registration and Sign In).

SIRSI is part of a wider solution that makes up the Central Digital Platform, together with other services like Find a Tender.

Overview

Context

The system context diagram below presents the Central Digital Platform system in the context of users and other systems.

System Context for Central Digital Platform
Figure 1. System Context for Central Digital Platform
Legend
Figure 2. Legend

Software Architecture

Container view

The diagram below focuses on high level responsibilities and technology choices within the Central Digital Platform system.

Container View of the Central Digital Platform system
Figure 3. Container View of the Central Digital Platform system
Legend
Figure 4. Legend

Most of the services that were implemented were defined in the High Level Design (TPP Identity and Organisation Information Services HLD v0.1.docx) that the team was given to follow when they started.

A number of Web APIs are designed to appear independent externally, but internally they share the same database. The entity verification service is the one exception, as it is explicitly designed to be isolated from the other services.

Personas

There are two main public personas that interact with the system: suppliers and buyers.

Supplier

Suppliers offer their services to government organisations. They use SIRSI to sign up their organisation on the platform, fill out their profile (supplier information), and manage users and API keys.

Buyer

Buyers are government organisations that buy services from suppliers. They use SIRSI to sign up their organisation on the platform.

External services

GOV.UK One Login

GOV.UK One Login is used as a single sign-on solution shared between all the CDP services to provide a seamless navigation experience between them.

GOV.UK Notify

GOV.UK Notify is used to send email notifications to users.
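For illustration, a minimal sketch of sending an email with the GOV.UK Notify .NET client is shown below; the API key placeholder, template ID, and personalisation keys are made up and not taken from the codebase.

using Notify.Client;

// Illustrative only: the API key would come from configuration (GOVUKNotify:ApiKey),
// and the template ID and personalisation keys below are placeholders.
var client = new NotificationClient("api-key-from-configuration");

client.SendEmail(
    emailAddress: "supplier@example.com",
    templateId: "00000000-0000-0000-0000-000000000000",
    personalisation: new Dictionary<string, dynamic>
    {
        { "organisation_name", "Acme Ltd" }
    });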

Applications and services

Organisation Web Application

The Organisation web application is where buyers and suppliers sign up and manage their organisations. This is where suppliers provide the supplier information that is later used in the procurement process.

Organisation web application components
Figure 5. Organisation web application components

The web application implements the user interface with the GOV.UK Design System. It calls other services (Web APIs) to perform the actual work.

Authority

The Authority service provides OpenID-based authentication for the Web APIs. It is used to exchange the One Login token for one of our own, so that the token is valid for longer: a One Login token is only valid for three minutes, while the Authority token is valid for one hour.

The Authority service is one of the few services that were not part of the High Level Design. Originally, Web APIs were supposed to be authenticated with One Login tokens. This turned out not to be practical due to their short-lived nature.
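To illustrate the flow only (the endpoint path, parameter names, and response handling below are assumptions, not the actual Authority API contract), the exchange could look roughly like this:

// Hypothetical sketch of the token exchange; endpoint and parameter names are
// illustrative and do not reflect the actual Authority API.
using var http = new HttpClient { BaseAddress = new Uri("https://authority.example.com") };

var response = await http.PostAsync("/token", new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["token"] = oneLoginAccessToken // the short-lived (3 minute) One Login token
}));

response.EnsureSuccessStatusCode();

// The response would carry the longer-lived (1 hour) Authority token,
// used as a Bearer token when calling the Web APIs.
var authorityToken = await response.Content.ReadAsStringAsync();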

Authority service components
Figure 6. Authority service components

Tenant

Tenant service components
Figure 7. Tenant service components

Person

Person service components
Figure 8. Person service components

Organisation

Organisation service components
Figure 9. Organisation service components

Forms

Forms service components
Figure 10. Forms service components

Data sharing

Data sharing service components
Figure 11. Data sharing service components

Entity verification

Entity verification service components
Figure 12. Entity verification service components

Development

This section covers development related topics.

Development environment

Requirements

The minimum set of tools required to work on the project includes:

  • An IDE of your choice (e.g. Visual Studio, Rider)

  • Docker

On Windows, it might be handy to set up git-bash or WSL in order to run tools like Make.

Configuration

The application is mostly configured to start with a fresh repository checkout. The only missing configuration is secrets, which are not safe to commit to the repository.

Never commit secrets to the repository.

Secrets are managed with the Secrets Manager and environment variables. The IDE depends on the Secrets Manager, while Docker depends on environment variables.

Environment variables for Docker are managed in the compose.override.yml file, which is not version controlled. Run the following command to create it with defaults:

make compose.override.yml
Organisation App

The OrganisationApp requires the following secrets / environment variables:

  • OneLogin:AccountUrl / OneLogin__AccountUrl

  • OneLogin:Authority / OneLogin__Authority

  • OneLogin:ClientId / OneLogin__ClientId

  • OneLogin:PrivateKey / OneLogin__PrivateKey

  • CompaniesHouse:Url / CompaniesHouse__Url

  • CompaniesHouse:User / CompaniesHouse__User

  • CompaniesHouse:Password / CompaniesHouse__Password

  • CharityCommission:Url / CharityCommission__Url

  • CharityCommission:SubscriptionKey / CharityCommission__SubscriptionKey

These can be set as secrets with the following dotnet commands:

dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:AccountUrl "https://home.integration.account.gov.uk"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:Authority "https://oidc.example.com"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:ClientId "client-id"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj OneLogin:PrivateKey "-----BEGIN RSA PRIVATE KEY-----SECRET KEY-----END RSA PRIVATE KEY-----"

dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj CompaniesHouse:Url "https://api.company-information.service.gov.uk"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj CompaniesHouse:User "user-id"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj CompaniesHouse:Password "password"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj CharityCommission:Url "https://api.charitycommission.gov.uk/register/api/"
dotnet user-secrets set --project Frontend/CO.CDP.OrganisationApp/CO.CDP.OrganisationApp.csproj CharityCommission:SubscriptionKey "subscription-key"

For Docker, the following environment variables need to be updated in compose.override.yml for the organisation-app service:

  organisation-app:
    environment:
      OneLogin__AccountUrl: "https://home.integration.account.gov.uk"
      OneLogin__Authority: "https://oidc.example.com"
      OneLogin__ClientId: "client-id"
      OneLogin__PrivateKey: "-----BEGIN RSA PRIVATE KEY-----SECRET KEY-----END RSA PRIVATE KEY-----"
      CompaniesHouse__Url: "https://api.company-information.service.gov.uk"
      CompaniesHouse__User: "--UserId--"
      CompaniesHouse__Password: ""
      CharityCommission__Url: "https://api.charitycommission.gov.uk/register/api/"
      CharityCommission__SubscriptionKey: "--SubscriptionKey --"
Organisation WebApi

The Organisation.WebApi requires the following secrets / environment variables:

  • GOVUKNotify:ApiKey / GOVUKNotify__ApiKey

These can be set as secrets with the following dotnet commands:

dotnet user-secrets set --project Services/CO.CDP.Organisation.WebApi/CO.CDP.Organisation.WebApi.csproj GOVUKNotify:ApiKey "123456"

For Docker, the following environment variables need to be updated in compose.override.yml for the organisation service:

  organisation:
    environment:
      GOVUKNotify__ApiKey: "123456"
Authority API

The Authority API depends on the following secrets / environment variables:

  • OneLogin:Authority / OneLogin__Authority

  • PrivateKey / PrivateKey

To set them in the secrets manager, run:

dotnet user-secrets set --project Services/CO.CDP.Organisation.Authority OneLogin:Authority "https://oidc.example.com"
dotnet user-secrets set --project Services/CO.CDP.Organisation.Authority PrivateKey "-----BEGIN RSA PRIVATE KEY----"
Note
The make generate-authority-keys command generates a private key that can be used with PrivateKey. Make sure to copy the contents of the file and not the path.

For Docker, update the authority service in compose.override.yml:

  authority:
    environment:
      PrivateKey: "-----BEGIN RSA PRIVATE KEY-----"
      OneLogin__Authority: "https://oidc.example.com"

Setting up the IDE

Import the project to your favourite IDE to build and run tests from there.

Running all tests in the IDE
Figure 13. Running tests in an IDE

Alternatively, use the dotnet command or the following make targets to build and run tests:

make build
make test

Any dotnet tools used by the project are installed locally and will be restored by the above commands.

Setting Up Docker

While local development is done within the IDE of your choice, a Docker Compose configuration is provided to conveniently start all (or some) services at once.

First, make sure you have configured the environment variables in compose.override.yml (see Configuration).

Next, build all the Docker containers with the build-docker Make target:

make build-docker

Finally, we can start all Docker services with:

make up

By default, service and application ports are mapped as follows:

All services started in Docker
Figure 14. All services started in Docker

Later, all services can be stopped and destroyed with:

make down

Make targets

There are a number of Make targets that provide shortcuts during development. Run make help to get an up-to-date list.

Table 1. Make targets

Target                     Description
help                       Shows available commands
build                      Builds the solution
test                       Runs all tests
up                         Starts all the Docker containers
down                       Stops and removes all Docker containers
stop                       Stops all Docker containers
ps                         Lists all running Docker containers
db                         Starts only the database Docker container and runs migrations
localstack                 Starts the localstack Docker container to make AWS services available locally
generate-authority-keys    Generates the private/public key pair for the authority service

Cookbooks

This section collects recipes that are useful during development.

Using Docker for development

During development, testing, or debugging it’s often useful to run some services with Docker and others from an IDE. Some developers prefer to run everything in the IDE, with only infrastructure dependencies, like the database, in Docker. Both scenarios are supported.

Mixing services started on Docker and IDE

By default, Docker Compose will start all services defined in the Compose configuration.

All services started in Docker
Figure 15. All services started in Docker

Imagine we’d like to work on the Organisation App. Ideally, we’d work on it within our IDE, but continue running all the other services in Docker to use the application end-to-end.

First, let’s disable the organisation-app service in compose.override.yml by setting replicas to 0.

# ...
  organisation-app:
    deploy:
      replicas: 0
    # ...

This way, when we start the Docker services, organisation-app won’t be started, since we expect to start it from the IDE.

Next, point the gateway to where the Organisation App is running (outside of Docker):

# ...
  gateway:
    environment:
      CDP_ORGANISATION_APP_HOST: 'http://host.docker.internal:58090'
#      CDP_AUTHORITY_HOST: 'http://host.docker.internal:5050'
#      CDP_TENANT_HOST: 'http://host.docker.internal:58080'
#      CDP_ORGANISATION_HOST: 'http://host.docker.internal:58082'
#      CDP_PERSON_HOST: 'http://host.docker.internal:58084'
#      CDP_FORMS_HOST: 'http://host.docker.internal:58086'
#      CDP_DATA_SHARING_HOST: 'http://host.docker.internal:58088'
    # ...

We use host.docker.internal to point to the host machine from Docker containers.

Now, we can start all Docker services (except the Organisation App) with make up and run the Organisation App from the IDE (choose the http-for-docker launch profile).

Organisation App running in the IDE
Figure 16. Organisation App running in the IDE

We can pick and choose which services are run by the host or by Docker, so if we also needed to run the organisation service in the IDE, we can follow the same pattern. First, edit compose.override.yml:

# ...
  gateway:
    environment:
      CDP_ORGANISATION_APP_HOST: 'http://host.docker.internal:58090'
#      CDP_AUTHORITY_HOST: 'http://host.docker.internal:5050'
#      CDP_TENANT_HOST: 'http://host.docker.internal:58080'
      CDP_ORGANISATION_HOST: 'http://host.docker.internal:58082'
#      CDP_PERSON_HOST: 'http://host.docker.internal:58084'
#      CDP_FORMS_HOST: 'http://host.docker.internal:58086'
#      CDP_DATA_SHARING_HOST: 'http://host.docker.internal:58088'
  organisation:
    deploy:
      replicas: 0
    # ...

Next, start selected services in the IDE, while Docker takes care of the other ones.

Organisation App and Organisation service running in the IDE
Figure 17. Organisation App and Organisation service running in the IDE
Starting everything in the IDE

If you prefer to run all the dotnet services in the IDE, you can disable them all in Docker by setting replicas to 0.

Alternatively, instead of setting replicas to 0 for every service, you can run only the db and migrations containers with:

make db

Use the http launch profile in your IDE to start each service. All services can be run together by using a multi-launch profile (which has to be created manually).

Multi-launch profile
Figure 18. Multi-launch profile
IDE profiles
Recent launch profiles
Figure 19. Recent launch profiles

Each service has two launch profiles pre-configured:

  • http-for-docker - the service will be proxied by the gateway service in Docker - good if we want to run most services in Docker and a few selected ones in the IDE.

  • http - the service will be called directly - good if we only want to start the db in Docker, but run everything else in the IDE.

Launch profiles
Figure 20. Launch profiles

Localizing UI text

Intro to solution

Localization has been achieved using .NET "Resource files", which are held in CO.CDP.Localization.

These are named with a pattern like StaticTextResource.resx and StaticTextResource.cy.resx, where the first file is the "neutral" language and the .cy. file is the Welsh translation; together they should load in the resource explorer as a pair.

The frontend application has been extended so that it negotiates the current culture based on a cookie which is set when clicking on the language switcher. In the absence of this cookie, the culture defaults to en.
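A minimal sketch of how this kind of cookie-based culture negotiation is typically wired up in ASP.NET Core is shown below; it is illustrative only and the actual registration in the frontend application may differ.

// Illustrative ASP.NET Core setup; the actual configuration in the frontend may differ.
var builder = WebApplication.CreateBuilder(args);
var supportedCultures = new[] { "en", "cy" };

builder.Services.Configure<RequestLocalizationOptions>(options =>
{
    options.SetDefaultCulture("en") // defaults to en when no culture cookie is present
        .AddSupportedCultures(supportedCultures)
        .AddSupportedUICultures(supportedCultures);

    // CookieRequestCultureProvider reads the culture cookie set by the language switcher.
    options.RequestCultureProviders.Insert(0, new CookieRequestCultureProvider());
});

var app = builder.Build();
app.UseRequestLocalization();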

The forms API has been localized based on the Accept-Language header, which is passed to it from the frontend via the CultureDelegatingHandler.
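The handler itself can be as small as copying the current UI culture onto outgoing requests. A minimal sketch is shown below; the real CultureDelegatingHandler may differ.

using System.Globalization;
using System.Net.Http.Headers;

// Minimal sketch of a delegating handler that forwards the current culture to the
// forms API via the Accept-Language header. The real implementation may differ.
public class CultureDelegatingHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        request.Headers.AcceptLanguage.Clear();
        request.Headers.AcceptLanguage.Add(
            new StringWithQualityHeaderValue(CultureInfo.CurrentUICulture.Name));

        return base.SendAsync(request, cancellationToken);
    }
}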

There are two separate resource files: StaticTextResource, for text that is defined in code in the organisation app, and FormsEngineResource, for content that is held in the forms engine database tables.

Guidance for naming keys

Generally speaking, keys are either "global" in that they are not unique to a particular section and may appear anywhere on the service - for example Global_Continue represents the word Continue on the buttons that appear across the site. Or they can be namespaced to a specific section of the service such as BasicInformation_SectionTitle.

Use Underscores_To_Separate and not dots or hyphens. This ensures that the key is the same whether you are using the strongly typed StaticTextResource class to access keys like StaticTextResource.Global_Back, or using @inject IHtmlLocalizer<StaticTextResource> Localizer to access keys like Localizer["Global_Back"].

Underscores should be used to separate "sections" of the key. And CamelCase should be used within those sections. So for example the key OrganisationRegistration_EnterOrganisationName_Heading represents a Heading within the EnterOrganisationName page in the OrganisationRegistration section of the service.

"Section" could be anything from site section, page, area of the page, specific type of org that it applies to - whatever makes sense in the context of the key you are naming. Broadly though, each underscore represents an increase in specificity.

This is purely a naming convention - there is no mechanism enforcing this. It requires developers to name things in an appropriate way.

As a rule, try to name keys in the most generic way that is appropriate. For example, if the exclusions section contains some text that is used repeatedly in that section, don’t define multiple keys scoped to each page it’s used on.

Using the path to the view, or folder containing a set of views is a reasonable place to start with naming.

How to localize static pieces of text

As mentioned above, there are two methods to access localized strings.

Strongly typed classes

Where possible, please prefer using the strongly typed classes so that we have compile-time safety. This means that if a key is used which doesn’t exist in the resource files, we get a compiler error instead of the runtime error we would get when using IHtmlLocalizer.

There are strongly typed classes for StaticTextResource and FormsEngineResource that allow you to access strings as follows:

<a href="@backLink" class="govuk-back-link">
    @StaticTextResource.Global_Back
</a>

If you have added a new key to the resource files, you will need to rebuild the solution for the strongly typed classes to rebuild and for your key to become available.

If you need to access HTML inside a value, you can pass it through Html.Raw:

@Html.Raw(StaticTextResource.Global_Back)
Using IStringLocalizer or IHtmlLocalizer

This method is mentioned for completeness, but should be avoided. Please use the strongly typed class where possible.

@inject IHtmlLocalizer<StaticTextResource> Localizer

@Localizer["Global_Back"]
How to localize form validation errors

Form validation errors are localized as follows:

[DisplayName(nameof(StaticTextResource.OrganisationRegistration_EnterOrganisationName_Heading))]
[Required(ErrorMessageResourceName = nameof(StaticTextResource.OrganisationRegistration_EnterOrganisationName_Heading), ErrorMessageResourceType = typeof(StaticTextResource))]
public string? OrganisationName { get; set; }

This uses the same resource, but you cannot pass the values straight into the annotations. The nameof syntax above is what you need.

How to localize Forms engine text

Forms engine text is stored in FormsEngineResource and uses keys named based on the form section. These need to be inserted into the database via migration scripts to replace whatever hardcoded text was there before.

For example, the exclusions section title is set to Exclusions_SectionTitle instead of just Exclusions. This is then replaced at runtime by the forms api which is able to localize any of the strings that come out of the database.
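For example, a migration that swaps hardcoded text for a resource key could look like the sketch below; the table and column names are placeholders, not the actual forms engine schema.

using Microsoft.EntityFrameworkCore.Migrations;

// Illustrative EF Core migration; table and column names are placeholders.
public partial class LocaliseExclusionsSectionTitle : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            "UPDATE form_sections SET title = 'Exclusions_SectionTitle' WHERE title = 'Exclusions';");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            "UPDATE form_sections SET title = 'Exclusions' WHERE title = 'Exclusions_SectionTitle';");
    }
}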

Inside question options

Question options are stored as JSON blobs in the database. You should not localize the entire JSON blob by placing the JSON into the resource file.

The Forms API is capable of translating specific keys within the JSON, so you can just replace individual values like {"choices": ["Qualifications_QuestionName_ChoiceOne", "Qualifications_QuestionName_ChoiceTwo"]} and these will be resolved at runtime by the automapper.

Handling plural forms

Plural forms can be handled by defining multiple keys, for example:

Supplier_SupplierInformationSummary_ConnectedPersonCount_None = "No connected persons"
Supplier_SupplierInformationSummary_ConnectedPersonCount_Plural = "{0} Connected persons"
Supplier_SupplierInformationSummary_ConnectedPersonCount_Singular = "{0} Connected person"

This can then be used in the code to format the right output:

if (count == 0)
{
    return Model.ConnectedPersonStepStatus == SupplierInformationStatus.StepStatus.Completed ? StaticTextResource.Supplier_SupplierInformationSummary_ConnectedPersonCount_None : string.Empty;
}

return string.Format(count > 1 ? StaticTextResource.Supplier_SupplierInformationSummary_ConnectedPersonCount_Plural : StaticTextResource.Supplier_SupplierInformationSummary_ConnectedPersonCount_Singular, count);

Design Decisions

This is a log of important design decisions that were made during development.

The design decision log accomplishes the following goals:

  • Provide a structured approach to decision-making.

  • Explain to current and future developers why things are the way that they are.

  • Help to revise decisions when the situation changes.

1. Record architecture decisions

Date: 2024-03-04

Status

Accepted

Context

We need to record the architectural decisions made on this project so that future developers understand the context of our solutions.

Decision

We will use Architecture Decision Records, as described by Michael Nygard.

Consequences

All significant decisions will be discussed within the team and documented in the decision log including:

  • the date the decision was made

  • the status of the decision (Accepted, Draft, or Rejected)

  • the context of the decision

  • options that were considered

  • the decision made

  • the consequences of the decision

2. Create web api clients

Date: 2024-03-25

Status

Accepted

Context

We need to create clients for each of our web api services to be used internally (at least initially). Clients must be compliant with the OpenAPI contracts we expose, so that once we expose them externally we’re sure of their correctness.

Microsoft is planning to make OpenAPI support a first-class citizen in ASP.NET Core. Swashbuckle is scheduled to be dropped in .NET 9 (see https://github.com/dotnet/aspnetcore/issues/54599). NSwag is the most feature-rich library in the ecosystem, and its maintainer has already raised concerns that Microsoft will make NSwag redundant.

We have considered a number of options to move forward.

Option 1: Craft clients from scratch

Create a Client interface for each service and implement it with HttpClient. This will require writing a lot of code ourselves based on the OpenAPI contract and keeping it in sync with the changing contract.

Advantages:

  • "It’s just code", so it’s easy to change.

  • It’s more flexible compared to generating code.

Disadvantages:

  • Needs a process to keep the client in sync with the OpenAPI contract.

  • Needs more tests to guarantee the client’s compliance with the OpenAPI contract.

  • Requires effort and time.

Option 2: Generate clients based on OpenAPI contracts (Kiota)

Kiota is a command-line tool for generating API clients based on OpenAPI descriptions. It’s developed by Microsoft and has a nice programming API.

Here’s an example code using the generated Kiota client:

var authProvider = new AnonymousAuthenticationProvider();
var httpClient = new HttpClient
{
    BaseAddress = new Uri("http://localhost:5182")
};
var adapter = new HttpClientRequestAdapter(authProvider, httpClient: httpClient);
var client = new TenantClient(adapter);

var tenant = await client.Tenants.PostAsync(new NewTenant
{
    Name = $"Bob {Guid.NewGuid()}",
    ContactInfo = new TenantContactInfo
    {
        Email = "bob@example.com",
        Phone = "07925344234"
    }
});

// Properties are nullable even though they're required in the contract
var tenantId = tenant?.Id ?? Guid.Empty;
var foundTenant = await client.Tenants[tenantId].GetAsync();

Assert.Equivalent(
    new Models.Tenant
    {
        Id = tenantId,
        Name = tenant!.Name,
        ContactInfo = new TenantContactInfo
        {
            Email = "bob@example.com",
            Phone = "07925344234"
        }
    },
    foundTenant
);

Advantages:

  • Generated clients are compliant with the OpenAPI contract.

  • Generated clients can be refreshed when the OpenAPI contract changes.

  • Saves development time.

  • The tool comes from Microsoft and has an opportunity to become the standard in the .NET ecosystem.

Disadvantages:

  • It doesn’t seem to support some of the OpenAPI spec yet.

  • Nullable properties make it awkward to use. This won’t even be considered for fixing until Kiota v3 (we’re at Kiota v1 now).

Option 3: Generate clients based on OpenAPI contracts (NSwag)

NSwag is the most feature-rich OpenAPI toolchain in the .NET ecosystem. It supports generating clients as well as OpenAPI contracts.

NSwag is actively maintained, but there’s a risk of it becoming eventually redundant since Microsoft revealed their plans.

Advantages:

  • Generated clients are compliant with the OpenAPI contract.

  • Generated clients can be refreshed when the OpenAPI contract changes.

  • Saves development time.

  • It’s well maintained.

  • It is mature.

  • It is feature-rich.

Disadvantages:

  • There’s a risk Microsoft will eventually make NSwag redundant.

Example code using the generated client:

var baseUrl = "http://localhost:5182";

ITenantClient client = new TenantClient(baseUrl, new HttpClient());

var tenant = await client.CreateTenantAsync(new NewTenant(
    name: $"Bob {Guid.NewGuid()}",
    contactInfo: new TenantContactInfo(
        email: "bob@example.com",
        phone: "07923234234"
    )
));

var foundTenant = await client.GetTenantAsync(tenant.Id);

Assert.Equal(
    new Tenant
    (
        id: tenant.Id,
        name: tenant.Name,
        contactInfo: new TenantContactInfo(
            email: "bob@example.com",
            phone: "07923234234"
        )
    ),
    foundTenant
);
Decision

We will use NSwag to generate API clients (option 3). The tool is feature-complete and should cause us the fewest headaches while at the same time saving development time.

On the server side we should leverage OpenAPI.NET as much as possible (avoid Swashbuckle or NSwag specific types).

Consequences

We will ensure that OpenAPI contracts are generated at build-time so that client libraries have access to them. Client libraries will generate the client based on the OpenAPI contract.

3. Applying database migrations

Date: 2024-04-03

Status

Accepted

Context

We need to evolve the database as we progress with development and, later on, with maintenance.

We have considered a number of options, including those suggested in the official Entity Framework Core documentation: https://learn.microsoft.com/en-us/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli

Option 1: DbContext.Database.Migrate() during startup

The application already has access to the database context, the connection, and migration files. That means it’s capable of executing the migrations during startup.

To achieve this, the following code needs to be added to the startup Program:

if (app.Configuration.GetValue<bool>("RunMigrationsOnStartup"))
{
    using (var scope = app.Services.CreateScope())
    {
        var db = scope.ServiceProvider.GetRequiredService<TenantContext>();
        db.Database.Migrate();
    }
}

Full change is available in the following pull request: https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/44

Advantages:

  • Works with no additional effort.

Disadvantages:

  • Multiple instances of any given service will compete for making schema modifications.

  • It’s officially not recommended to use this method in production.

Running dotnet ef database update is similar and is also not recommended for production.

Option 2: Generated SQL scripts

SQL scripts generated out of EF migrations are the most reliable option. An SQL script can be reviewed before applying, and the changes are idempotent. It’s recommended that SQL scripts are version controlled.

To generate a migration:

dotnet ef migrations script \
  -p Services/CO.CDP.Tenant.Persistence \
  -s Services/CO.CDP.Tenant.WebApi \
  --idempotent \
  -o Services/CO.CDP.Tenant.Persistence/Migrations/SQL/$(date +%Y-%m-%d_%H%M)_CreateTenants.sql

To apply it in our local development setup:

docker compose exec \
  -it db psql -U cdp_user \
  -f - cdp < Services/CO.CDP.Tenant.Persistence/Migrations/SQL/2024-03-23_2303_CreateTenants.sql

Example implementation can be found in the following pull request: https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/34/files

Advantages:

  • Reliability.

  • Simplicity.

Disadvantages:

  • Requires additional tools to execute the migration (a database client, like psql).

  • Since there’s no way to know which migrations were already executed, all the migrations need to be re-applied. It’s mostly fine since they’re idempotent.

Option 3: Generated SQL scripts (containerised)

One limitation of option 2 is the availability of the database client during the deployment. This could be fixed by building a dedicated migrations container to be run during the deployment.

Advantages:

  • Reliability.

Disadvantages:

  • Since there’s no way to know which migrations were already executed, all the migrations need to be re-applied. It’s mostly fine since they’re idempotent.

Option 4: Migration bundles

Migration bundles are single-file executables that can be used to apply migrations to a database. They’re meant to address the shortcomings of SQL scripts and command-line tools.

Migration bundles can be self contained, which means they won’t depend on anything else (like dotnet runtime).

Here’s an example of how to generate a bundle:

dotnet ef migrations bundle \
  -p Services/CO.CDP.Tenant.Persistence \
  -s Services/CO.CDP.Tenant.WebApi \
  --self-contained

The bundle is self-contained and all it needs to apply migrations is the connection string:

./efbundle --connection 'Server=localhost;Database=MYDB;Username=MYUSER;Password=MYPASS;'

Advantages:

  • Self contained

Disadvantages:

  • It’s not very well documented how bundles work and how they’re different to migration command-line tools.

  • It’s a manual step that needs to be run in development.

Option 5: Migration bundles (containerised)

Similarly to containerising SQL scripts, we can containerise bundles, and ship them with the application or a dedicated container.

An implementation of this approach is provided in this pull request: https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/51

Advantages:

  • Self contained.

  • Container version of migrations matches the application’s version.

  • The container can be run automatically when the development database starts.

Disadvantages:

  • It’s not very well documented how bundles work and how they’re different to migration command-line tools.

Option 6: Use a specialized migrations library

Finally, a dedicated migrations library can be used to manage and execute migrations. We have not explored this option much.

Decision

We’re going to go with option 5 - Migration bundles (containerised). Bundles are one of the recommended solutions. Putting bundles inside a container gives us several options for executing them during the deployment.

Option 5 is slightly easier to implement than option 3, while also being less flexible. Option 3 allows for tweaking the generated SQL scripts. We might revisit the decision in future if it turns out we need this flexibility.

Consequences

We’re going to merge https://github.com/cabinetoffice/GCGS-Central-Digital-Platform/pull/51. The pull request introduces a migration container that’s executed with docker compose in the local development environment. The same container will be executed during the deployment.

4. Error Handling and Validation

Date: 2024-04-04

Status

Draft

Context

We need to establish a standardised approach to error handling and validation.

Error handling
Option 1: Problem details response

Utilise the Results.Problem method to generate standardized HTTP problem details responses.

Advantages:

  • Preferred Approach: Results.Problem is Microsoft’s recommendation for handling errors.

  • Standards: This approach adheres to the RFC 7807 standard, providing a uniform format for error messages that include details such as status code, error type, and a readable description.

  • Rich Error Information: Offers a structured format that can include extensive information about the nature of the error, which can be extremely helpful for debugging by clients.

Disadvantages:

  • Heavy for Simple projects: Might be overly complex for simple error messages where extensive details are unnecessary or out of scope.

  • Potential Information Leakage: If not carefully managed, it could inadvertently expose sensitive information about the API’s internal workings.

Option 2: Validation Action Filters

Validation Action Filters intercept the request before it reaches the controller action, validating input data and returning appropriate error responses.

Advantages:

  • Pre-Controller Validation: Errors can be caught and handled before reaching the business logic.

  • Reuse: Logic can be encapsulated and reused across multiple controllers or actions, promoting code reuse.

Disadvantages:

  • Limited Scope: Primarily useful for handling model validation errors. Other types of errors (e.g., those thrown from within service layers or unhandled exceptions) would not be caught by these filters.

  • Coupling: Ties validation logic to the MVC framework, which may not be ideal if some parts of your application require different handling or operate outside the typical request pipeline.

Option 3: Custom Exception Middleware

Implement custom exception middleware that catches exceptions globally across the API, allowing for error handling.

Advantages:

  • Central Error Handling: Provides a single point within your application to handle exceptions, simplifying error handling and reducing code duplication.

  • Flexibility: Allows you to create custom error responses based on exception types or other criteria.

  • Consistency: Can be combined with Results.Problem to ensure that all errors, handled or unhandled, return a consistent response format.

Disadvantages:

  • Complexity: Requires a more detailed setup and understanding of the middleware pipeline in ASP.NET Core.

  • Maintenance: As the application grows, the middleware might need continuous updates to handle new changes.

  • Debugging: It can be difficult to trace back to the original source of the error because errors are handled globally and may be stripped of their context.

Validation
Option 1: Data Annotations

Use attributes from the System.ComponentModel.DataAnnotations namespace to declare validation rules directly in your model classes.

This method is deeply integrated with ASP.NET Core and is supported out of the box, requiring minimal setup for basic validation scenarios.

Advantages:

  • Ease of Implementation: Simple to apply with minimal code, making it an efficient option for straightforward validation requirements.

  • Automatic Enforcement: Automatically enforced by ASP.NET Core during model binding.

  • Framework Integration: Seamlessly integrates with ASP.NET Core, providing built-in error handling via the standard model validation architecture.

Disadvantages:

  • Cluttered Model Classes: Adding validation attributes directly to model properties can lead to cluttered and bloated model classes.

  • Limited Flexibility: Can be difficult and messy to implement more complex validation and cannot handle cross-field dependencies without additional overhead.

Option 2: Fluent Validation

A third-party library that uses a fluent interface to build validation rules separately from your models.

Defined separately from model classes, offering a powerful alternative to data annotations with support for complex validation scenarios.

Advantages:

  • Greater Flexibility and Control: Provides extensive control over validation logic, allowing for complex rules and dependencies.

  • Separation of Concerns: Keeps validation logic decoupled from model definitions, aiding in the maintenance and scalability of the application.

  • Consistency with Unit Testing: We are using FluentAssertions in our unit tests, which complements FluentValidation.

Disadvantages:

  • Additional Dependency: Incorporates an external package into the project, introducing potential issues with third-party dependencies and updates.

  • External Configuration: Validation rules are maintained outside the model, potentially complicating the development process.

Option 3: Input Models with Built-in Validation

Use dedicated input models (DTOs) that include validation logic, separating it from your domain models.

This approach utilises either data annotations or custom validation methods embedded within the DTOs themselves, providing a clean separation between the data used for input and the internal domain models.

Advantages:

  • Clear Separation of Concerns: Keeps API input validation separate from business logic, reducing the complexity of domain models.

  • Specialised Validation Logic: Allows for validation rules that are specific to the API layer, which can differ from how data validation is handled internally.

  • Flexibility: More control over how data is validated and transformed before being passed to the domain model or business logic.

Disadvantages:

  • Duplication of code: May lead to duplication of code or additional boilerplate, potentially increasing the overhead of maintaining similar models.

  • Additional Mapping Required: Requires mapping between DTOs and domain models, which can introduce additional complexity and overhead.

Option 4: Custom Validation Attributes

Create custom attributes by extending the ValidationAttribute class to define specific validation rules that are not provided out of the box by data annotations.

Advantages:

  • Highly Customizable: Enables the creation of validation rules that are perfectly tailored to specific business requirements.

  • Reusable: Custom attributes can be reused across different models and projects, promoting consistency.

  • Integrates with MVC: Works seamlessly with ASP.NET Core’s model binding and validation frameworks.

Disadvantages:

  • Increased Complexity: Requires a deeper understanding of attribute-based programming and reflection.

  • Development Overhead: More time-consuming to develop and test compared to using standard data annotations.

Decisions
Validation Decision

We will adopt FluentValidation for validation, enhancing both maintainability and scalability (Validation Option 2). This approach allows for pre-controller validation, catching errors early, and enables us to reuse validation logic across multiple controllers, simplifying error handling and promoting code reuse.
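As a sketch of what this looks like in practice (the command and rules below are illustrative examples, not lifted from the codebase):

using FluentValidation;

// Illustrative model and validator; the real commands and rules will differ.
public record RegisterOrganisationCommand(string Name, string ContactEmail);

public class RegisterOrganisationCommandValidator : AbstractValidator<RegisterOrganisationCommand>
{
    public RegisterOrganisationCommandValidator()
    {
        RuleFor(c => c.Name)
            .NotEmpty().WithMessage("Enter the organisation name");

        RuleFor(c => c.ContactEmail)
            .NotEmpty()
            .EmailAddress().WithMessage("Enter a valid email address");
    }
}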

Error Handling Decision

We have decided to implement Custom Exception Middleware as our error handling strategy (related to Error Handling Option 3). This approach centralises error handling at a single point within the application, simplifying the process and reducing code duplication. It also offers the flexibility to create custom error responses based on different exception types or criteria, and ensures consistency by combining with the Results.Problem method to standardise response formats across the API, according to RFC 7807.
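A minimal sketch of such middleware combined with Results.Problem is shown below; it is illustrative only and the actual implementation may differ.

// Illustrative exception-handling middleware; the actual implementation may differ.
public class ExceptionHandlingMiddleware
{
    private readonly RequestDelegate _next;

    public ExceptionHandlingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        try
        {
            await _next(context);
        }
        catch (Exception)
        {
            // Return an RFC 7807 problem details response for unhandled exceptions.
            var problem = Results.Problem(
                title: "An unexpected error occurred.",
                statusCode: StatusCodes.Status500InternalServerError);

            await problem.ExecuteAsync(context);
        }
    }
}

// Registered in Program.cs with:
// app.UseMiddleware<ExceptionHandlingMiddleware>();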

Consequences

These updates require modifications to existing validation logic and some refactoring, since work has already commenced on creating APIs.

5. Database naming conventions

Date: 2024-06-21

Status

Accepted

Context

PostgreSQL clients are forced to quote table and column names in SQL queries, if they’re not lower-cased:

SELECT "Name" FROM "Organisations";

That’s because they’re lowercased by default unless quoted.

This is inconvenient when writing queries by hand.

Decision

We will follow a lowercase naming strategy with words separated by _.

SELECT name FROM organisations;
Consequences

We will switch to the snake case naming convention and migrate existing schema to follow it as well.
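One way to apply the convention automatically, assuming the third-party EFCore.NamingConventions package (the context name below is a placeholder), is sketched here:

// Illustrative only: maps EF Core table and column names to snake_case,
// assuming the EFCore.NamingConventions package. The context name is a placeholder.
services.AddDbContext<OrganisationInformationContext>(options =>
    options
        .UseNpgsql(connectionString)
        .UseSnakeCaseNamingConvention());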

6. Acceptance tests

Date: 2024-05-28

Status

Draft

Context

We heavily rely on developer tests for small units of isolated code. For various reasons, we have not developed an acceptance test suite so far. As the project and the team mature, acceptance tests would be helpful to clearly state business requirements and make them self-verifiable as tests.

These tests would be run end-to-end as much as possible and therefore verify user journeys better.

We should still be conscious of the costs that such tests bring and attempt to make them as lightweight as possible. The frontend is not JavaScript-heavy and there’s no need to run these tests in a browser most of the time.

Option 1: xUnit

Use a library like Alba or Playwright directly in xUnit tests to exercise the Frontend application. Start all the application services with the dotnet web application factory, and any external dependencies (like the database) with Testcontainers.

Alba integrates very well with dotnet core applications out of the box. It’s possible to run and customise the application in-memory, without a need to execute tests in a browser.

Playwright is a very well established tool. With a little bit of customisation it should also be possible to run it against the application running in-memory, but this is not available out of the box.
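A minimal sketch of an Alba-based test is shown below; the endpoint and assertion are placeholders, not actual journeys from the application.

using Alba;
using Xunit;

public class OrganisationAppSmokeTest
{
    [Fact]
    public async Task The_home_page_loads()
    {
        // Illustrative only: boots the application in-memory via the web application factory.
        await using var host = await AlbaHost.For<Program>();

        await host.Scenario(_ =>
        {
            _.Get.Url("/"); // placeholder endpoint
            _.StatusCodeShouldBeOk();
        });
    }
}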

Advantages:

  • We continue using the technology we’re familiar with

  • It’s fast to write tests

Disadvantages:

  • Tests focus on "How" rather than "What", i.e. tests describe visiting pages and filling out forms, rather than tasks that users try to accomplish (e.g. provide company information). This could be improved by introducing a layer of abstraction on top of user actions.

  • Tests are only readable by developers and can only be maintained by developers.

Option 2: SpecFlow

SpecFlow is the industry standard for acceptance tests in dotnet.

Its great advantage is the separation of the "What" from the "How". Tests can be expressed in a business language:

Feature: Confirming Personal Name on Supplier Information Portal

    As a representative of an organisation
    I want to confirm my first and last name
    So that my personal identity is correctly associated with the organisation's profile.

    Scenario: Successfully confirming a user's name
        Given the user is on the 'Confirm your name' page
        When the user enters their first and last name into the input fields
        And clicks the 'Continue' button
        Then the user should be taken to the next step in the registration process

It’s a structured language, called Gherkin.

Scenarios like the one above are then translated to actions (navigating to a page, making an API call, etc.).

Advantages:

  • Human-readable tests, a separation of "How" from "What".

  • Tests could be written by QAs (to some extent).

  • Nice reports available out of the box.

Disadvantages:

  • SpecFlow requires an IDE plugin that is not kept up to date for some IDEs (like Rider)

  • Tests can be executed with xunit, but the IDE plugin is required.

Option 3: Xunit.Gherkin.Quick

Xunit.Gherkin.Quick is a lightweight "BDD" framework, similar to SpecFlow. It also enables us to write tests as human-readable scenarios in Gherkin. The main difference from SpecFlow is that it’s built on top of xUnit.

Advantages:

  • Human-readable tests, a separation of "How" from "What".

  • Tests could be written by QAs (to some extent).

  • xUnit integration. All the tools we use in xUnit can be used here (like testcontainers).

Disadvantages:

  • It’s less popular than SpecFlow.

Decision

TBD

7. Shared user sessions

Date: 2024-11-13

Status

Accepted

Context

We need shared sessions between containers. In dev we only have one instance of the organisation-app that handles the user session. In production we’ll be scaling the organisation-app up and down depending on current demand, so it will never be a single instance.

We need sessions to be stored outside of the running container (they are currently kept in the container), to provide users with a seamless experience.

There are no Non-Functional Requirements provided by the stakeholders, so it’s difficult to estimate the number of concurrent users we have to prepare for. We’ve ballpark estimated it to be anything between 50k and 500k, likely double that at the initial rollout. The customer expects 3.5m active users within the first 2 years.

Option 1: Sticky sessions

Sticky sessions can be enabled at the infrastructure level, so a user’s requests are handled by a single container.

Advantages:

  • Easy to set up

  • No code changes required

  • No additional cost

Disadvantages:

  • User journeys will be disrupted when containers are restarted, e.g. during a deployment

  • Sub-optimal user distribution across running containers

Option 2: DynamoDB

Advantages:

  • Easy to implement

  • Cheaper than Redis

Disadvantages:

  • Slow compared to other solutions and may not be efficient enough at peak times

  • Manual task required to clear expired sessions

Option 3: Redis

Advantages:

  • Serverless - easy to scale up or down depending on demand

  • Easy to implement

  • Low latency - can provide session details with a minimal lag

  • Can set up TTL, so expired sessions will be automatically removed

Disadvantages:

  • Potentially more expensive than DynamoDB

Decision

Redis

  • It has lower latency compared to DynamoDB

  • Future-proof, given that we don’t have any metrics provided by the client

Consequences

We’ll use Redis as a managed service on AWS and as a Docker container in the local development setup. This will require additional DevOps work as well as modifications to the OrganisationApp to handle shared sessions.
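A rough sketch of the OrganisationApp changes is shown below; the configuration keys, Redis endpoint, and timeouts are placeholders.

// Illustrative sketch; configuration keys and values are placeholders.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = builder.Configuration["Redis:ConnectionString"]; // e.g. localhost:6379 locally
    options.InstanceName = "OrganisationApp:";
});

builder.Services.AddSession(options =>
{
    options.IdleTimeout = TimeSpan.FromMinutes(30); // placeholder timeout
    options.Cookie.HttpOnly = true;
    options.Cookie.IsEssential = true;
});

var app = builder.Build();
app.UseSession(); // session state is now backed by the shared Redis instance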

Given that we don’t have metrics to design the system for, we should plan for the worst-case scenario. Once the service is live we can monitor the number of concurrent users and re-evaluate the solution.