
Multi-Service Setup

As systems grow, a single service often becomes insufficient.

New requirements emerge:

  • Independent scaling
  • Fault isolation
  • Separate deployment cycles
  • Team ownership boundaries

At this point, the question is no longer whether to introduce multiple services,
but how to do so without losing clarity.

Plumego does not prescribe a microservices architecture —
but it provides strong constraints that make multi-service systems more predictable.


First Principle: Services Are a Deployment Concern

In Plumego’s worldview:

“Service” is primarily a deployment and ownership boundary, not a framework feature.

This means:

  • Plumego does not know about services
  • Plumego does not manage service discovery
  • Plumego does not enforce inter-service communication styles

Those concerns live outside the core.

This separation keeps Plumego stable and adaptable.


When to Split into Multiple Services

Splitting too early creates unnecessary complexity.
Splitting too late creates bottlenecks.

Common valid signals include:

  • Different scaling profiles (CPU-heavy vs I/O-heavy)
  • Distinct failure domains
  • Separate teams owning different capabilities
  • Regulatory or security isolation requirements
  • Long-running or blocking workloads

“Codebase size” alone is not a sufficient reason.


Service Boundaries Follow Usecases

A crucial guideline:

Service boundaries should align with usecase boundaries, not technical layers.

Good service boundaries:

  • Own a coherent set of behaviors
  • Expose explicit APIs
  • Hide internal domain models
  • Control their own persistence

Bad boundaries tend to:

  • Split repositories away from their usecases
  • Spread domain logic across services
  • Share a single database across “services”

If services share a database, they are not independent.


A Typical Multi-Service Topology

A common, healthy topology looks like this:

    [ API Gateway ]
           |
           v
    +-------------+     +-------------+
    | Order Svc   | --> | Payment Svc |
    +-------------+     +-------------+
           |
           v
    +-------------+
    | User Svc    |
    +-------------+

Each service:

  • Is a standalone Plumego app
  • Has its own HTTP server
  • Has its own lifecycle
  • Has its own deployment pipeline

Plumego runs inside each service, unchanged.
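
To make “standalone” concrete, here is a minimal sketch of a per-service entry point using only the Go standard library; the Plumego-specific routing and wiring are deliberately omitted, since they do not change the lifecycle picture. Each service owns its own main, its own http.Server, and its own graceful shutdown.

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "time"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
        })

        srv := &http.Server{Addr: ":8080", Handler: mux}

        // The service owns its own lifecycle: start serving, then shut down cleanly.
        go func() {
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatalf("server error: %v", err)
            }
        }()

        stop := make(chan os.Signal, 1)
        signal.Notify(stop, os.Interrupt)
        <-stop

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        _ = srv.Shutdown(ctx)
    }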


Avoiding a Distributed Monolith

The biggest risk in multi-service setups is the distributed monolith.

Warning signs:

  • Synchronous calls chained across many services
  • Shared domain concepts leaking across boundaries
  • Tight version coupling
  • Cascading failures

Plumego helps avoid this by enforcing:

  • Explicit boundaries
  • Explicit adapters
  • Explicit error handling

But discipline is still required.


Inter-Service Communication

Plumego does not mandate a communication protocol.

Common options include:

  • HTTP/JSON
  • HTTP/gRPC
  • Message queues
  • Event streams

Regardless of protocol, the same rule applies:

Treat other services as external systems.

This means:

  • Define clear contracts
  • Handle failures explicitly
  • Avoid leaking internal errors
  • Expect partial availability
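
As an illustration of the rules above, here is a hedged sketch of an outbound adapter for a hypothetical Payment service (the /charges endpoint and response shape are invented for this example): a narrow contract, an explicit timeout, and remote failures mapped to the caller’s own error value instead of leaking internals.

    package payment

    import (
        "context"
        "encoding/json"
        "errors"
        "fmt"
        "net/http"
        "time"
    )

    // ErrPaymentUnavailable is the caller-facing error; the remote service's
    // internal failures never leak past this adapter.
    var ErrPaymentUnavailable = errors.New("payment service unavailable")

    // Client is the explicit contract the rest of the service depends on.
    type Client struct {
        baseURL    string
        httpClient *http.Client
    }

    func NewClient(baseURL string) *Client {
        return &Client{
            baseURL:    baseURL,
            httpClient: &http.Client{Timeout: 2 * time.Second}, // always bound outbound calls
        }
    }

    type chargeResponse struct {
        Status string `json:"status"`
    }

    // Charge calls the hypothetical /charges endpoint and maps failures
    // to errors the calling service understands.
    func (c *Client) Charge(ctx context.Context, orderID string) (string, error) {
        req, err := http.NewRequestWithContext(ctx, http.MethodPost,
            c.baseURL+"/charges?order="+orderID, nil)
        if err != nil {
            return "", fmt.Errorf("build request: %w", err)
        }

        resp, err := c.httpClient.Do(req)
        if err != nil {
            // Expect partial availability: network failures are normal, not exceptional.
            return "", ErrPaymentUnavailable
        }
        defer resp.Body.Close()

        if resp.StatusCode != http.StatusOK {
            return "", ErrPaymentUnavailable
        }

        var out chargeResponse
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            return "", fmt.Errorf("decode payment response: %w", err)
        }
        return out.Status, nil
    }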

Service APIs as Stable Contracts

Service APIs must be:

  • Explicit
  • Versioned
  • Backward compatible (when possible)

Avoid sharing Go types across services.

Instead:

  • Define transport-level DTOs
  • Map them explicitly at boundaries
  • Treat schemas as contracts, not conveniences

Plumego handlers are ideal places for this translation.
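
A hedged sketch of that translation (the Order type, JSON field names, and lookup function are illustrative, not a prescribed Plumego API): the wire format lives in its own DTO struct, and the handler maps the domain model to it explicitly.

    package orderhttp

    import (
        "encoding/json"
        "net/http"
        "time"
    )

    // Order is the internal domain model; it is never serialized directly.
    type Order struct {
        ID        string
        Total     int64 // minor currency units
        CreatedAt time.Time
    }

    // orderDTO is the transport-level contract. Its JSON shape is versioned
    // and evolved independently of the domain model.
    type orderDTO struct {
        ID         string `json:"id"`
        TotalCents int64  `json:"total_cents"`
        CreatedAt  string `json:"created_at"`
    }

    func toDTO(o Order) orderDTO {
        return orderDTO{
            ID:         o.ID,
            TotalCents: o.Total,
            CreatedAt:  o.CreatedAt.UTC().Format(time.RFC3339),
        }
    }

    // GetOrder translates at the boundary: domain model in, DTO out.
    func GetOrder(lookup func(id string) (Order, bool)) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            o, ok := lookup(r.URL.Query().Get("id"))
            if !ok {
                http.Error(w, "not found", http.StatusNotFound)
                return
            }
            w.Header().Set("Content-Type", "application/json")
            _ = json.NewEncoder(w).Encode(toDTO(o))
        }
    }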


Configuration and Secrets per Service

Each service should have:

  • Its own configuration slice
  • Its own secrets
  • Its own environment

Avoid global configuration repositories.

A multi-service system with shared config is operationally fragile.
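
One possible shape for a per-service configuration slice (the struct, environment variable names, and defaults are assumptions for illustration): the Order service reads only its own prefixed environment and owns its own validation.

    package config

    import (
        "fmt"
        "os"
        "time"
    )

    // OrderConfig is the Order service's own configuration slice.
    // Other services define their own structs; nothing is shared globally.
    type OrderConfig struct {
        ListenAddr     string
        PaymentBaseURL string
        RequestTimeout time.Duration
    }

    // LoadOrderConfig reads only ORDER_-prefixed environment variables.
    func LoadOrderConfig() (OrderConfig, error) {
        cfg := OrderConfig{
            ListenAddr:     os.Getenv("ORDER_LISTEN_ADDR"),
            PaymentBaseURL: os.Getenv("ORDER_PAYMENT_BASE_URL"),
            RequestTimeout: 2 * time.Second,
        }
        if cfg.ListenAddr == "" || cfg.PaymentBaseURL == "" {
            return OrderConfig{}, fmt.Errorf("order service configuration is incomplete")
        }
        return cfg, nil
    }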


Observability Across Services

Multi-service systems demand strong observability.

Key practices:

  • Propagate trace IDs across service boundaries
  • Use consistent logging formats
  • Centralize metrics and traces
  • Make failures visible, not silent

Plumego’s explicit middleware model makes this integration straightforward.
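
A minimal sketch of trace-ID propagation as ordinary middleware (the header name and ID generation are assumptions; real systems typically use a tracing library such as OpenTelemetry): reuse an incoming ID when present, generate one when absent, and carry it on the request context so logs and outbound calls can include it.

    package middleware

    import (
        "context"
        "crypto/rand"
        "encoding/hex"
        "net/http"
    )

    type traceKey struct{}

    const traceHeader = "X-Trace-Id" // assumed header name

    // TraceID reuses an incoming trace ID or generates one, and stores it
    // on the request context for logs and outbound calls.
    func TraceID(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            id := r.Header.Get(traceHeader)
            if id == "" {
                buf := make([]byte, 8)
                _, _ = rand.Read(buf)
                id = hex.EncodeToString(buf)
            }
            ctx := context.WithValue(r.Context(), traceKey{}, id)
            w.Header().Set(traceHeader, id) // echo back for correlation
            next.ServeHTTP(w, r.WithContext(ctx))
        })
    }

    // TraceIDFromContext returns the current trace ID, or "" if absent.
    func TraceIDFromContext(ctx context.Context) string {
        id, _ := ctx.Value(traceKey{}).(string)
        return id
    }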


Failure Handling and Timeouts

In a multi-service setup:

  • Network calls fail
  • Dependencies time out
  • Partial results occur

Rules of thumb:

  • Always set timeouts on outbound calls
  • Treat remote failures as expected
  • Fail fast where possible
  • Avoid retry storms

Resilience is an application responsibility, not a framework feature.
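
As a sketch of those rules (the helper and error value are illustrative): every outbound call is bounded by a context deadline, and a deadline hit is surfaced as an expected, typed error rather than left to hang or retried blindly.

    package remote

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // ErrDependencyTimeout marks a timeout as an expected outcome.
    var ErrDependencyTimeout = errors.New("dependency timed out")

    // callWithTimeout bounds a single outbound call. fn stands in for any
    // remote call (HTTP, gRPC, queue publish) that respects a context.
    func callWithTimeout(ctx context.Context, d time.Duration, fn func(context.Context) error) error {
        ctx, cancel := context.WithTimeout(ctx, d)
        defer cancel()

        err := fn(ctx)
        switch {
        case errors.Is(err, context.DeadlineExceeded):
            // Fail fast with a typed error instead of hanging or retrying blindly.
            return ErrDependencyTimeout
        case err != nil:
            return fmt.Errorf("remote call failed: %w", err)
        }
        return nil
    }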


Deployment and Versioning Strategy

Because Plumego apps are explicit and minimal:

  • Services can be deployed independently
  • Rolling upgrades are straightforward
  • Canary deployments are possible
  • Version skew is manageable

Avoid tight coupling between service versions.


Testing Multi-Service Systems

Testing strategies should be layered:

  • Unit tests per service (domain + usecases)
  • Contract tests between services
  • A small number of integration tests
  • End-to-end tests only where justified

Do not rely solely on E2E tests to validate correctness.
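
A hedged example of a consumer-side contract test using httptest (the endpoint, fields, and stub handler are illustrative): it verifies only the wire shape the consumer depends on, without booting the whole system.

    package orderhttp_test

    import (
        "encoding/json"
        "net/http"
        "net/http/httptest"
        "testing"
    )

    // TestOrderContract checks only the wire shape the consumer relies on,
    // not the provider's internals.
    func TestOrderContract(t *testing.T) {
        // In a real contract test this handler would come from the provider;
        // a stub keeps the sketch self-contained.
        stub := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            _, _ = w.Write([]byte(`{"id":"o-1","total_cents":4200,"created_at":"2024-01-01T00:00:00Z"}`))
        })

        srv := httptest.NewServer(stub)
        defer srv.Close()

        resp, err := http.Get(srv.URL + "/orders?id=o-1")
        if err != nil {
            t.Fatalf("request failed: %v", err)
        }
        defer resp.Body.Close()

        var body struct {
            ID         string `json:"id"`
            TotalCents int64  `json:"total_cents"`
            CreatedAt  string `json:"created_at"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
            t.Fatalf("decode: %v", err)
        }
        if body.ID == "" || body.CreatedAt == "" {
            t.Fatalf("contract fields missing: %+v", body)
        }
    }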


Migration: Monolith to Multiple Services

A recommended approach:

  1. Start with a modular monolith
  2. Identify stable usecase boundaries
  3. Extract one service at a time
  4. Introduce network boundaries last

Plumego’s explicit wiring and usecase-centric design
make this transition significantly safer.


What Plumego Deliberately Avoids

Plumego does not provide:

  • Service registries
  • API gateways
  • Distributed tracing systems
  • Circuit breakers
  • Retry policies

These are ecosystem concerns.

Plumego’s role is to ensure that when you introduce them,
you do so explicitly and intentionally.


Summary

In a multi-service setup with Plumego:

  • Each service is a standalone Plumego app
  • Boundaries align with usecases
  • Communication is explicit
  • Failures are expected and handled
  • Observability is essential
  • Deployment remains independent

Plumego does not make multi-service systems “easy” —
but it makes them understandable and controllable.


Next

A natural next advanced topic is:

Advanced / Performance Considerations

This explores how multi-service setups interact with latency,
throughput, and resource usage at scale.