
FEAT-006: Docker Containerization for Production Deployment

ID: FEAT-006 Status: Planned Created: 2026-03-05 Updated: 2026-03-05 Priority: Medium

Scope note: FEAT-006 is an optional deployment path for development, testing, and non-constrained environments (full Windows Server with Docker support). The primary production target -- Siemens S7-1500 Open Controller -- does not support Docker/Hyper-V. See FEAT-007 for the native Windows Service deployment strategy targeting the S7-1500.

URS References

URS ID Requirement Impact
URS-INT-001.5 Container-based deployment New
URS-INT-001.1 REST API Verified
URS-INT-001.2 OPC UA Protocol Verified
URS-INT-001.3 GraphQL API Verified
URS-INT-001.4 Cross-protocol consistency Modified
URS-QUA-001.5 Real-time notifications Modified
URS-SYS-001.3 Health check Verified

Implementation Progress

Phase Description Status Commit
1 Requirements & GAMP5 documentation Planned
2 Docker infrastructure files (Dockerfiles, .dockerignore) Planned
3 Docker Compose orchestration (incl. Redis) Planned
4 Distributed event bus (Redis Pub/Sub for cross-container events) Planned
5 Verification & arc42 deployment documentation Planned

Summary

Essert.MF currently deploys as Windows Services on bare metal (see arc42 07-Deployment-View). This requirement introduces Docker containerization for the three API services (REST, GraphQL, OPC UA) using Docker Compose for production deployment.

Containerization enables reproducible builds, consistent environments across customer deployments, easier scaling, and simplified deployment workflows. The MariaDB database is included as an optional compose service for self-contained development and testing; in production, containers can connect to existing external database servers via environment variable configuration.

Because each API runs in a separate container (separate process), the current in-memory ICurrentMessageEventService singleton cannot share events across containers. Phase 4 introduces Redis Pub/Sub as a distributed message bus so that events published by any API (e.g., REST adding a current message) are received by all other APIs (e.g., GraphQL subscriptions delivering real-time notifications to connected clients).


Phase 1: Requirements & GAMP5 Documentation

Goal: Create lifecycle documentation for the containerization feature Status: Planned Estimated impact: 5 modified files, 1 new file

1.1 GAMP5 Document Changes

File Action Description
docs/requirements/Planned/FEAT/FEAT-006-docker-containerization.md Create This requirement document
docs/requirements/BACKLOG.md Modify Add FEAT-006 to Active & Planned Work
docs/gamp5/URS/URS-INT-001-integration.md Modify Add URS-INT-001.5 (Container Deployment)
docs/gamp5/FS/FS-INT-001-integration.md Modify Add FS-INT-001.5 (Container Deployment Functions)
docs/gamp5/RTM/RTM-001-traceability-matrix.md Modify Add URS-INT-001.5 row
docs/gamp5/RA/RA-001-risk-assessment.md Modify Add URS-INT-001.5 risk row

1.2 Implementation Notes

{Captured during implementation.}


Phase 2: Docker Infrastructure Files

Goal: Create Dockerfiles and build configuration for all three API services Status: Planned Estimated impact: 4 new files

2.1 Build Infrastructure

File Action Description
.dockerignore Create Exclude bin/, obj/, tests, docs, IDE files from build context
Essert.MF.API.Rest/Dockerfile Create Multi-stage build: SDK 9.0 → aspnet:9.0 runtime
Essert.MF.API.GraphQL/Dockerfile Create Multi-stage build: SDK 9.0 → aspnet:9.0 runtime
Essert.MF.API.OpcUa/Dockerfile Create Multi-stage build: SDK 9.0 → runtime:9.0 (console app)

2.2 Dockerfile Design

  • Build context: Solution root (required for project references across layers)
  • Build stage: mcr.microsoft.com/dotnet/sdk:9.0 — restore, build, publish in Release mode
  • Runtime stage (REST/GraphQL): mcr.microsoft.com/dotnet/aspnet:9.0 — lightweight ASP.NET runtime
  • Runtime stage (OPC UA): mcr.microsoft.com/dotnet/runtime:9.0 — .NET runtime for console app
  • Layer caching: Copy .csproj files first for NuGet restore, then copy source for build
  • Environment: ASPNETCORE_ENVIRONMENT=Production, HTTP only (no HTTPS in container)
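The design above can be sketched as a multi-stage Dockerfile for the REST API. This is a sketch, not the final Phase 2 artifact: project paths follow the solution layout named in 2.1, but the set of layer projects that must be copied before restore is an assumption — with cross-layer project references, each referenced .csproj also needs to be copied before `dotnet restore` for the cache layer to work.

```dockerfile
# Build stage — SDK image, solution root as build context (assumed project layout)
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
WORKDIR /src

# Copy .csproj files first so the NuGet restore layer is cached
# (referenced layer projects would be copied here as well)
COPY Essert.MF.API.Rest/Essert.MF.API.Rest.csproj Essert.MF.API.Rest/
RUN dotnet restore Essert.MF.API.Rest/Essert.MF.API.Rest.csproj

# Copy the remaining source and publish in Release mode
COPY . .
RUN dotnet publish Essert.MF.API.Rest/Essert.MF.API.Rest.csproj \
    -c Release -o /app/publish --no-restore

# Runtime stage — lightweight ASP.NET runtime, HTTP only
FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
ENV ASPNETCORE_ENVIRONMENT=Production
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000
ENTRYPOINT ["dotnet", "Essert.MF.API.Rest.dll"]
```

The GraphQL Dockerfile would differ only in project name; the OPC UA Dockerfile swaps the runtime stage base image for mcr.microsoft.com/dotnet/runtime:9.0.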

2.3 Implementation Notes

{Captured during implementation.}


Phase 3: Docker Compose Orchestration

Goal: Create Docker Compose configuration for service orchestration including Redis Status: Planned Estimated impact: 2 new files

3.1 Compose Files

File Action Description
docker-compose.yml Create Service definitions for REST, GraphQL, OPC UA, MariaDB, Redis
.env.example Create Environment variable template (connection strings, passwords)
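A possible shape for the .env.example template — variable names are illustrative and will be finalized in Phase 3; only Redis__ConnectionString is fixed by the design in 3.3:

```shell
# .env.example — copy to .env and fill in real values (.env is gitignored)
MARIADB_ROOT_PASSWORD=changeme
MARIADB_DATABASE=essert_mf
# Assumed connection-string key; actual key must match appsettings.json
ConnectionStrings__DefaultConnection=Server=mariadb;Port=3306;Database=essert_mf;User=essert;Password=changeme
Redis__ConnectionString=redis:6379
# Assumed variable name for the configurable OPC UA endpoint (see Risks, item 1)
OPCUA_ENDPOINT_URL=opc.tcp://0.0.0.0:48400
```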

3.2 Service Design

Service Base Image Host Port Container Port Health Check
essert-mf-rest Built from REST Dockerfile 5000 5000 GET /health
essert-mf-graphql Built from GraphQL Dockerfile 5010 5000 GET /health
essert-mf-opcua Built from OPC UA Dockerfile 48400 48400 none (TCP service, no HTTP endpoint)
mariadb mariadb:11.7 3306 3306 healthcheck --connect
redis redis:7-alpine 6379 6379 redis-cli ping

3.3 Configuration Strategy

  • Connection strings via environment variables (override appsettings.json)
  • Redis connection string via Redis__ConnectionString environment variable
  • .env file for secrets (gitignored) — .env.example committed as template
  • MariaDB data persisted via named Docker volume
  • Redis data persisted via named Docker volume (optional, events are transient)
  • OPC UA log files and CertificateStores via volume mounts
  • Single bridge network for inter-container communication
  • API services depend on MariaDB and Redis with health check conditions
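The configuration strategy above could translate into a compose file along these lines. Service names match the table in 3.2; volume names, intervals, and environment variable keys are assumptions until Phase 3 is implemented (one API service shown, the others follow the same pattern):

```yaml
services:
  redis:
    image: redis:7-alpine
    ports: ["6379:6379"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  mariadb:
    image: mariadb:11.7
    ports: ["3306:3306"]
    environment:
      MARIADB_ROOT_PASSWORD: ${MARIADB_ROOT_PASSWORD}
    volumes:
      - mariadb-data:/var/lib/mysql       # named volume for persistence
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect"]
      interval: 10s
      retries: 5

  essert-mf-rest:
    build:
      context: .                          # solution root as build context
      dockerfile: Essert.MF.API.Rest/Dockerfile
    ports: ["5000:5000"]
    environment:
      Redis__ConnectionString: redis:6379
    depends_on:                           # health check conditions per 3.3
      redis:
        condition: service_healthy
      mariadb:
        condition: service_healthy

volumes:
  mariadb-data:
```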

3.4 Key Design Decisions

# Decision Rationale
1 No HTTPS in containers TLS termination via reverse proxy (nginx/traefik) in front
2 GraphQL on host port 5010 Avoid port conflict with REST API on 5000
3 MariaDB included but optional Self-contained for dev/test; production can use external DB
4 HTTP only inside containers Standard container practice; Kestrel on port 5000
5 No database schema init Schemas must pre-exist (constraint OC-04)
6 Redis for cross-container events Replaces in-memory Subject<T> with distributed Pub/Sub
7 Redis 7 Alpine Lightweight, proven, minimal config needed for Pub/Sub

3.5 Implementation Notes

{Captured during implementation.}


Phase 4: Distributed Event Bus (Redis Pub/Sub)

Goal: Replace in-memory ICurrentMessageEventService with Redis Pub/Sub so events published by any API container are received by all others Status: Planned Estimated impact: 3 modified files, 1 new file, 1 new NuGet package

4.1 Problem Statement

The current CurrentMessageEventService uses an in-memory System.Reactive.Subject&lt;T&gt; as its event bus:

┌─────────────────────────────────────────────────────┐
│  Single Process (current monolithic deployment)      │
│                                                      │
│  REST ──publish──► Subject<T> ──subscribe──► GraphQL │
│  OPC UA ──publish──┘            └──subscribe──► OPC  │
│                                                      │
│  ✅ Works: all APIs share same singleton instance     │
└─────────────────────────────────────────────────────┘

In separate containers, each process has its own Subject<T> — events don't cross process boundaries:

┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  REST Container   │  │ GraphQL Container │  │ OPC UA Container  │
│                   │  │                   │  │                   │
│  Subject<T> (A)   │  │  Subject<T> (B)   │  │  Subject<T> (C)   │
│  publish ──► A    │  │  subscribe ◄── B  │  │  subscribe ◄── C  │
│                   │  │                   │  │                   │
│  ❌ Events stay   │  │  ❌ Never sees    │  │  ❌ Never sees    │
│     in container  │  │     REST events   │  │     REST events   │
└──────────────────┘  └──────────────────┘  └──────────────────┘

4.2 Solution: Redis Pub/Sub

Redis Pub/Sub provides a lightweight distributed message bus. All containers connect to the same Redis instance and publish/subscribe on a shared channel:

┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│  REST Container   │  │ GraphQL Container │  │ OPC UA Container  │
│                   │  │                   │  │                   │
│  RedisEventSvc    │  │  RedisEventSvc    │  │  RedisEventSvc    │
│  publish ──────┐  │  │  ┌── subscribe    │  │  ┌── subscribe    │
│                │  │  │  │                │  │  │                │
└────────────────┼──┘  └──┼────────────────┘  └──┼────────────────┘
                 │        │                      │
                 ▼        ▼                      ▼
          ┌──────────────────────────────────────────┐
          │              Redis (redis:7-alpine)       │
          │                                           │
          │  Channel: essert:mf:currentmessages       │
          │                                           │
          │  ✅ All containers see all events          │
          └──────────────────────────────────────────┘

4.3 Implementation Design

Architecture Compliance

The hexagonal architecture is preserved. The port interface (ICurrentMessageEventService) stays in the Application layer unchanged; only the Infrastructure adapter changes:

Application.Ports                       Infrastructure.Services
┌─────────────────────────────┐        ┌──────────────────────────────────┐
│ ICurrentMessageEventService │ ◄───── │ RedisCurrentMessageEventService  │
│                             │        │ (replaces CurrentMessageEventSvc)│
│ - PublishMessageAdded()     │        │                                  │
│ - PublishMessageRemoved()   │        │ - Publishes to Redis channel     │
│ - OnCurrentMessageChanged() │        │ - Subscribes to Redis channel    │
│ - OnCurrentMessageAdded()   │        │ - Bridges Redis → local Subject  │
│ - OnCurrentMessageRemoved() │        │   for IObservable<T> consumers   │
└─────────────────────────────┘        └──────────────────────────────────┘

File Changes

File Action Description
Essert.MF.Infrastructure/Essert.MF.Infrastructure.csproj Modify Add StackExchange.Redis NuGet package
Essert.MF.Infrastructure/Services/RedisCurrentMessageEventService.cs Create New implementation using Redis Pub/Sub
Essert.MF.Infrastructure/DependencyInjection/ServiceCollectionExtensions.cs Modify Register RedisCurrentMessageEventService when Redis is configured, fall back to in-memory
Essert.MF.Infrastructure/Services/CurrentMessageEventService.cs Keep Retained as in-memory fallback for non-containerized deployments

New Implementation: RedisCurrentMessageEventService

public class RedisCurrentMessageEventService : ICurrentMessageEventService, IDisposable
{
    private readonly Subject&lt;CurrentMessageChangeEvent&gt; _localSubject = new();
    private readonly ISubscriber _subscriber;
    private static readonly RedisChannel Channel =
        RedisChannel.Literal("essert:mf:currentmessages");

    public RedisCurrentMessageEventService(IConnectionMultiplexer redis)
    {
        // A single ISubscriber handles both PUBLISH and SUBSCRIBE
        // (StackExchange.Redis has no separate IPublisher type)
        _subscriber = redis.GetSubscriber();
        _subscriber.Subscribe(Channel, OnRedisMessage);
    }

    // On Publish: serialize event → Redis PUBLISH
    public void PublishMessageAdded(long messageId, DateTime timestamp, CurrentMessageDto? message)
    {
        var evt = new CurrentMessageChangeEvent(
            CurrentMessageChangeType.Added, messageId, timestamp, message);
        _subscriber.Publish(Channel, JsonSerializer.Serialize(evt));
    }

    // On Redis message received: deserialize → push to local Subject<T>.
    // This bridges Redis events into the IObservable<T> stream that
    // GraphQL subscriptions and other consumers already use.
    private void OnRedisMessage(RedisChannel channel, RedisValue value)
    {
        var evt = JsonSerializer.Deserialize&lt;CurrentMessageChangeEvent&gt;(value!);
        if (evt is not null)
            _localSubject.OnNext(evt);
    }

    // Existing IObservable<T> methods unchanged — consumers don't know about Redis
    public IObservable&lt;CurrentMessageChangeEvent&gt; OnCurrentMessageChanged()
        => _localSubject.AsObservable();

    public void Dispose()
    {
        _subscriber.Unsubscribe(Channel);
        _localSubject.Dispose();
    }
}

Registration Strategy (Conditional)

// In ServiceCollectionExtensions.AddDomainServices()
var redisConnectionString = configuration.GetValue<string>("Redis:ConnectionString");

if (!string.IsNullOrEmpty(redisConnectionString))
{
    // Containerized deployment: use Redis Pub/Sub
    services.AddSingleton<IConnectionMultiplexer>(
        ConnectionMultiplexer.Connect(redisConnectionString));
    services.AddSingleton<ICurrentMessageEventService, RedisCurrentMessageEventService>();
}
else
{
    // Monolithic deployment: use in-memory (existing behavior)
    services.AddSingleton<ICurrentMessageEventService, CurrentMessageEventService>();
}

Configuration

// appsettings.json (or via environment variable Redis__ConnectionString)
{
  "Redis": {
    "ConnectionString": "redis:6379"  // Docker service name
  }
}
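In the compose file, the same setting is supplied per container through the standard .NET configuration mapping: a `:` in appsettings.json becomes `__` in an environment variable name. A minimal excerpt:

```yaml
# docker-compose.yml excerpt — overrides appsettings.json "Redis:ConnectionString"
environment:
  Redis__ConnectionString: "redis:6379"
```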

4.4 Why Redis Pub/Sub (Not Alternatives)

Option Pros Cons Verdict
Redis Pub/Sub Minimal infrastructure, simple API, already common in Docker stacks, no message persistence needed Fire-and-forget (no replay) Selected — events are transient notifications, persistence not required
Redis Streams Message persistence, consumer groups Overkill for notification events, more complex API Not needed — subscribers are always connected
RabbitMQ Full message broker, routing, durability Heavy infrastructure, complex setup, new dependency Overkill for single-channel pub/sub
Database polling No new infrastructure Adds latency (polling interval), unnecessary DB load Poor fit — defeats real-time purpose

4.5 Event Serialization

Events are serialized as JSON for Redis transport:

{
  "ChangeType": "Added",
  "MessageId": 42,
  "Timestamp": "2026-03-05T14:30:00Z",
  "Message": {
    "Uid": 42,
    "MessageText": "Process started",
    "Timestamp": "2026-03-05T14:30:00Z"
  }
}

CurrentMessageDto is already a simple record type — no circular references, no complex object graphs.

4.6 Backward Compatibility

  • Monolithic deployment (no Docker): No Redis:ConnectionString configured → in-memory CurrentMessageEventService used. Zero behavioral change.
  • Containerized deployment: Redis:ConnectionString set via environment variable → RedisCurrentMessageEventService used. Cross-container events work.
  • Port interface unchanged: ICurrentMessageEventService in Application.Ports is not modified. No changes to command handlers, GraphQL subscriptions, or any consumer.

4.7 Testing Strategy

Test Layer Description
Unit test Infrastructure RedisCurrentMessageEventService with mocked IConnectionMultiplexer — verify publish serializes correctly, subscribe deserializes correctly
Unit test Infrastructure Verify conditional registration: with Redis config → Redis impl, without → in-memory impl
Integration test Infrastructure Real Redis instance (Docker) — publish from one instance, verify other instance receives
Existing tests All layers All existing CurrentMessageEventService tests continue to pass (in-memory fallback)

4.8 Implementation Notes

{Captured during implementation.}


Phase 5: Verification & Arc42 Documentation

Goal: Verify the Docker setup works end-to-end and update deployment documentation Status: Planned Estimated impact: 1 modified file

5.1 Documentation Changes

File Action Description
docs/arc42/07-Deployment-View/07-Deployment-View.md Modify Add Docker deployment section alongside existing Windows Service deployment

5.2 Implementation Notes

{Captured during implementation.}


Dependency Graph

Phase 1 (Requirements) ──► Phase 2 (Dockerfiles) ──► Phase 3 (Compose + Redis) ──► Phase 4 (Redis Event Bus) ──► Phase 5 (Verify & Docs)

Phase 4 depends on Phase 3 because Redis must be in the compose stack before the event service can use it.

Risks and Considerations

  1. OPC UA in Docker — OPC UA uses TCP (not HTTP), needs explicit port exposure and configurable endpoint URL. Mitigation: OpcUaServer:EndpointUrl configurable via environment variable.
  2. Redis availability — If Redis goes down, event publishing fails silently (fire-and-forget). Mitigation: Redis health check in compose; RedisCurrentMessageEventService logs errors but doesn't crash the API. Database operations remain unaffected.
  3. Database schema initialization — Docker Compose doesn't create schemas. Mitigation: document prerequisite; optionally provide init scripts for MariaDB volume.
  4. Air-gapped environments — Production environments may lack Docker Hub access. Mitigation: document docker save/docker load workflow for offline deployment.
  5. Linux container compatibility — OPC UA library (Opc.UaFx.Advanced) may have platform-specific behavior. Mitigation: verify Linux container compatibility during Phase 2; fall back to Windows containers if needed.
  6. Event ordering — Redis Pub/Sub delivers messages in order per publisher, but concurrent publishers may interleave. Mitigation: acceptable for notifications; each event is self-contained with timestamp. Consumers don't depend on strict ordering.
  7. Backward compatibility — Adding StackExchange.Redis to Infrastructure project. Mitigation: conditional registration; no Redis config = in-memory fallback. Monolithic deployments unaffected.

Verification Plan

  1. docker compose build — all 3 images + Redis build/pull successfully
  2. docker compose up -d — all services start and reach healthy state
  3. curl http://localhost:5000/health — REST API returns healthy
  4. curl http://localhost:5010/health — GraphQL API returns healthy
  5. docker compose logs essert-mf-opcua — OPC UA server started without errors
  6. Cross-container event test — POST a current message via REST (localhost:5000) → verify GraphQL subscription (localhost:5010) receives the event via WebSocket
  7. docker compose down — clean shutdown, no orphan containers
  8. dotnet build — solution compiles
  9. dotnet test Essert.MF.Infrastructure.Tests — existing tests pass (in-memory fallback)
  10. dotnet test Essert.MF.Application.Tests — handler tests pass (port interface unchanged)