
Microservices: The Distributed Monolith Trap

Splitting your app into 10 services doesn't make it faster. It makes it harder to debug. A technical guide to Modular Monoliths, Domain-Driven Design (DDD), and the Saga Pattern.

Alex B.

In 2015, the industry mantra was absolute: “Monoliths are dinosaurs. Microservices are the future.” By 2025, the hangover has set in. Companies that blindly split their applications are now drowning in what we call the Distributed Monolith: A system with all the complexity of distributed computing and none of the benefits of decoupling.

Why Maison Code Discusses This

At Maison Code Paris, we act as the architectural conscience for our clients. We often inherit “Microservices” projects where a simple user login touches six different services, has a latency of 2 seconds, and costs $5,000/month in cloud ingress fees.

This article is a sober, technical guide on when to adopt Microservices, and more importantly, when to run screaming from them.

The Case for the Modular Monolith

Before you type mkdir service-user, consider the Modular Monolith. This is a single deployable unit (one binary/container) where the code is strictly separated into modules (packages) that mirror business domains.

Structure Matters

A “Spaghetti Monolith” has files importing each other randomly. A Modular Monolith has strict boundaries enforced by linting rules or compilation targets.

src/
  modules/
    auth/           # Bounded Context: Authentication
      api/          # Public Interface
      internal/     # Private Implementation (Database, Logic)
    inventory/      # Bounded Context: Inventory
      api/
      internal/
    billing/        # Bounded Context: Billing
      api/
      internal/

The Law of Demeter, applied to modules: code in billing cannot import inventory/internal. It can only call inventory/api. If you enforce this boundary, you get 90% of the benefits of Microservices (team separation, cleaner code) with 0% of the drawbacks (network latency, eventual consistency, deployment complexity).
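This boundary can be enforced mechanically. Real projects typically use a linter for this; the sketch below is an illustrative toy (the `boundary_violations` helper and `modules.*` paths are invented for the example) showing the idea behind such a check: scan imports and flag any that reach into another module's internal package.

```python
import ast

# Modules may only import another module's "api" package, never its "internal".
FORBIDDEN = ".internal"

def boundary_violations(source: str, own_module: str) -> list:
    """Return imports that reach into another module's private implementation."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            # A module may use its OWN internal package, nobody else's.
            if FORBIDDEN in name and not name.startswith(f"modules.{own_module}"):
                violations.append(name)
    return violations

# billing reaching into inventory's private implementation is flagged:
bad = "from modules.inventory.internal import db"
# billing calling inventory's public interface is fine:
ok = "from modules.inventory.api import reserve_stock"
```

Run as a CI step over every module, this turns the architecture rule into a failing build instead of a code-review argument.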

When to Split (The Boundary Conditions)

We only advise splitting a module into a separate Microservice if it satisfies one of these strict criteria:

  1. Heterogeneous Hardware Requirements:

    • The Core API is I/O bound. It needs 1GB RAM and 0.5 CPU.
    • The Image Processor is CPU bound. It needs 16GB RAM and GPU access.
    • Decision: Extract the Image Processor. Deploying GPU nodes for the API is a waste of money.
  2. Independent Release Velocity:

    • The Checkout Team pushes code hourly.
    • The Ledger Team (Accounting) pushes once a month after rigorous auditing.
    • Decision: Extract Checkout to avoid blocking the high-velocity team.
  3. Fatal Isolation (The “Blast Radius”):

    • If the PDF Generator has a memory leak and crashes, it shouldn’t take down the Payment Gateway.
    • Decision: Isolate unstable or risky components.

Communication Patterns: REST vs. gRPC vs. Events

Once you have multiple services, the hardest problem is communication. “Function calls” become “Network Packets.”

Synchronous: The HTTP Trap

Service A calls Service B via HTTP/REST. GET /users/123.

This couples the availability of A to B. If B is down, A fails. If B is slow, A hangs. If A calls B, which calls C, which calls D, you have a Call Chain. If each service has 99.9% availability, a chain of 4 services has 0.999^4 ≈ 99.6% availability. You are engineering failure into the design.

gRPC: The High-Performance Variant

For internal communication, we prefer gRPC (Protobufs) over REST/JSON.

  • Binary Encoding: Payloads are typically a fraction of the size of equivalent JSON.
  • Strongly Typed: Code generation ensures Service A knows exactly what Service B expects.
  • Multiplexing: HTTP/2 support out of the box.

Asynchronous: The Event-Driven Architecture

This is the holy grail of decoupling. Service A does not call Service B. Service A emits an event.

  1. User registers (Service A).
  2. Service A publishes event USER_REGISTERED to Kafka/RabbitMQ.
  3. Service B (Email) consumes event -> Sends Welcome Email.
  4. Service C (Analytics) consumes event -> Updates Dashboard.

If Service B is offline (Maintenance, Crash), Service A continues working. The message sits in the queue. When Service B comes back online, it processes the backlog. This is Resiliency.
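The mechanics can be sketched with an in-memory stand-in for the broker. This is an illustrative toy (the `Broker` class and topic name are invented for the example; a real system would use a Kafka or RabbitMQ client), but it shows the property that matters: the publisher never waits for, or even knows about, its consumers.

```python
from collections import defaultdict, deque

class Broker:
    """Toy stand-in for Kafka/RabbitMQ: topics hold messages until consumed."""
    def __init__(self):
        self.topics = defaultdict(deque)

    def publish(self, topic: str, event: dict):
        self.topics[topic].append(event)

    def drain(self, topic: str) -> list:
        """Consume everything queued on a topic (the 'backlog')."""
        out = []
        while self.topics[topic]:
            out.append(self.topics[topic].popleft())
        return out

broker = Broker()
# Service A keeps publishing even while the email service is down:
broker.publish("USER_REGISTERED", {"user_id": 1})
broker.publish("USER_REGISTERED", {"user_id": 2})
# The email service comes back online and processes the backlog in order:
backlog = broker.drain("USER_REGISTERED")
```

Service A's code contains no reference to Service B or C; adding a fourth consumer requires zero changes to the publisher.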

Data Consistency: The Hardest Part

In a Monolith, you have ACID Transactions. BEGIN TRANSACTION; INSERT User; INSERT Account; COMMIT; Either both happen, or neither happens.

In Microservices, you have Database per Service. Service A has users_db. Service B has accounts_db. You cannot run a transaction across two different database servers.

The Saga Pattern

How do you handle a distributed transaction? Scenario: User places an order.

  1. Inventory Service: Reserve stock.
  2. Payment Service: Charge card.
  3. Shipping Service: Print label.

What if “Charge Card” fails? You have already reserved stock. You must “Undo” step 1. This is a Compensating Transaction.

sequenceDiagram
    participant O as Order Orchestrator
    participant I as Inventory
    participant P as Payment
    
    O->>I: Reserve Stock (Item X)
    I-->>O: Success
    O->>P: Charge $100
    P-->>O: Failure (insufficient funds)
    O->>I: RELEASE Stock (Compensate)
    I-->>O: Success
    O-->>User: Order Failed

Implementing Sagas correctly is incredibly complex. You need a state-machine orchestrator (such as AWS Step Functions or Temporal.io). If you don’t need this complexity, don’t build Microservices.
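The core loop of an orchestrated Saga can be sketched in a few lines. This is an illustrative toy, not a production Saga engine (the step lambdas and log are invented for the example, and a real orchestrator must also persist state and handle retries): each step pairs an action with its compensation, and on failure the completed steps are undone in reverse order.

```python
def run_saga(steps) -> str:
    """Run (action, compensation) pairs; on failure, compensate completed steps in reverse."""
    completed = []
    for action, compensate in steps:
        if action():
            completed.append(compensate)
        else:
            for undo in reversed(completed):  # unwind in LIFO order
                undo()
            return "FAILED"
    return "COMPLETED"

log = []
steps = [
    (lambda: log.append("reserve_stock") or True,  lambda: log.append("release_stock")),
    (lambda: log.append("charge_card") or False,   lambda: log.append("refund")),  # card declines
    (lambda: log.append("print_label") or True,    lambda: log.append("cancel_label")),
]
result = run_saga(steps)
# Shipping never runs; inventory's reservation is compensated.
```

Notice what is missing: persistence of the orchestrator's state, idempotent compensations, timeouts. That missing 90% is why tools like Temporal exist.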

Observability: Seeing in the Dark

In a monolith, debugging is tail -f /var/log/app.log. In microservices, a single user request might hit 10 different containers on 10 different nodes.

You must implement Distributed Tracing (OpenTelemetry). Every request gets a TraceID at the Ingress Load Balancer. This ID is propagated in HTTP headers (the W3C traceparent header, or X-B3-TraceId in Zipkin-style setups) to every downstream service.

Tools like Jaeger, Datadog, or Honeycomb then visualize the “Waterfall” of the request.

  • “Why was this 500ms slow?”
  • “Oh, Service D took 450ms on a DB query.”

Without tracing, you are flying blind.
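The propagation mechanism itself is simple, which is worth seeing once. The sketch below is a simplification (real systems use the OpenTelemetry SDK and the structured W3C traceparent format, not a bare UUID, and the function names here are invented): the ingress mints an ID, and every service copies it onto outbound requests.

```python
import uuid

def ingress(headers: dict) -> dict:
    """The ingress load balancer assigns a TraceID if the caller didn't send one."""
    headers.setdefault("traceparent", uuid.uuid4().hex)
    return headers

def call_downstream(incoming: dict) -> dict:
    """Every service copies the trace header onto its outbound requests."""
    return {"traceparent": incoming["traceparent"]}

req = ingress({})
hop1 = call_downstream(req)   # Service A -> Service B
hop2 = call_downstream(hop1)  # Service B -> Service C
# All hops carry one TraceID, so the tracing backend can stitch the waterfall together.
```

The hard part is not the header; it is making sure every framework, queue consumer, and background job in your stack forwards it, which is exactly what the OpenTelemetry auto-instrumentation libraries do for you.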

Infrastructure: Code Example

We use Terraform to provision the infrastructure for independent services.

# service-payment.tf
resource "kubernetes_deployment" "payment" {
  metadata {
    name = "payment-service"
  }
  spec {
    replicas = 3
    selector {
      match_labels = {
        app = "payment"
      }
    }
    template {
      metadata {
        labels = {
          app = "payment"
        }
      }
      spec {
        container {
          image = "maisoncode/payment:v1.2"
          name  = "payment"
          
          # The Golden Rule: Limit Resources
          resources {
            limits = {
              cpu    = "500m"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "256Mi"
            }
          }
          
          env {
            name = "DB_HOST"
            value_from {
              secret_key_ref {
                name = "payment-db-creds"
                key  = "host"
              }
            }
          }
        }
      }
    }
  }
}

The Decomposition Strategy: The Strangler Fig

If you have a legacy monolith, do not rewrite it from scratch. Use the Strangler Fig Pattern.

  1. Put a Proxy (API Gateway) in front of the Monolith.
  2. Route all traffic to Monolith.
  3. Identify one bounded context (e.g., “Reviews”).
  4. Build “Reviews Service”.
  5. Change Proxy: If path is /api/reviews, route to New Service. Else, route to Monolith.
  6. Repeat until the Monolith is gone.
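Step 5 is nothing more than prefix-based routing at the proxy. A minimal sketch (the service names and route table are invented for the example; in practice this lives in your API Gateway's configuration, not in application code):

```python
# Routes migrated out of the monolith so far. Grows one entry per strangled context.
MIGRATED_PREFIXES = {
    "/api/reviews": "reviews-service",
}

def route(path: str) -> str:
    """Strangler proxy: migrated paths go to the new service, everything else to the monolith."""
    for prefix, service in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return service
    return "monolith"
```

The monolith shrinks one route table entry at a time, and at every step you have a working system you can roll back.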

The Role of the API Gateway

You don’t want 10 services to handle Authentication independently. You place a Gateway (Kong, Tyk, or AWS API Gateway) at the edge. It handles:

  • Auth: JWT Validation.
  • Rate Limiting: 100 req/min per IP.
  • Caching: Cache public GET requests.
  • Routing: /api/v1/users -> User Service.

This keeps your internal services “dumb” and focused on business logic.

Service Mesh (Istio / Linkerd)

When you have 50 services, Service A talking to Service B needs encryption (mTLS). Managing certificates manually is hell. A Service Mesh injects a “Sidecar Proxy” (Envoy) next to every container. The proxy handles:

  • mTLS: Automatic encryption.
  • Retries: “If fail, retry 3 times with exponential backoff”.
  • Circuit Breaking: “If Service B is returning 50% errors, stop calling it for 1 minute”.

The mesh decouples this networking logic from application code.
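To make the circuit-breaking bullet concrete, here is the logic the sidecar runs on your behalf. This is an illustrative toy (the class and threshold are invented for the example; real breakers also track error rates over a window and probe with a timed “half-open” state): after enough consecutive failures, calls fail fast instead of hammering the unhealthy service.

```python
class CircuitBreaker:
    """Trip open after error_threshold consecutive failures; then fail fast."""
    def __init__(self, error_threshold: int = 3):
        self.error_threshold = error_threshold
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            return "FAIL_FAST"        # no network call: protect the struggling service
        try:
            result = fn()
            self.failures = 0         # any success resets the streak
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.error_threshold:
                self.open = True      # trip the breaker
            return "ERROR"

breaker = CircuitBreaker(error_threshold=2)

def flaky():
    raise ConnectionError("Service B is down")

breaker.call(flaky)               # first failure
breaker.call(flaky)               # second failure -> breaker trips
status = breaker.call(flaky)      # fails fast, flaky() is never invoked
```

The point of the mesh is that none of this lives in your codebase; Envoy does it identically for all 50 services.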

Contract Testing (Pact)

How do you test Service A without spinning up Service B? Contract Testing. Service A (Consumer) writes a “Pact File”: “I expect GET /user to return { id: string }”. Service B (Provider) runs a test against this Pact File in its CI pipeline. If Service B changes id to userId, the build fails. This catches breaking changes before deployment.
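The provider-side check reduces to “does my real response still satisfy the consumer’s recorded expectation?”. The sketch below is a minimal stand-in for what the Pact tooling does (the `PACT` dict and `verify_contract` helper are invented for the example; the real Pact libraries exchange JSON pact files via a broker):

```python
# The consumer's expectation, written down as data (a stand-in for a Pact file).
PACT = {"path": "/user", "response_keys": {"id": str}}

def provider_response() -> dict:
    """Service B's current implementation of GET /user."""
    return {"id": "u-123", "name": "Alex"}

def verify_contract(pact: dict, response: dict) -> bool:
    """Provider-side check: every promised key must exist with the promised type."""
    return all(
        key in response and isinstance(response[key], expected_type)
        for key, expected_type in pact["response_keys"].items()
    )

ok = verify_contract(PACT, provider_response())
broken = verify_contract(PACT, {"userId": "u-123"})  # renaming id breaks the contract
```

Because the check runs in Service B's CI against a file Service A published, the breaking change is caught without either team ever deploying the other's service.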

Conclusion

Microservices are an organizational scaling pattern, not a performance optimization. They introduce Network Latency, Consistency Issues, and Operational Complexity in exchange for Team Velocity and Independent Scalability.

If you are a team of 5 developers, you should be building a Monolith. You might call what you have Microservices, but you are just building a distributed pain machine.

Scale your code structure before you scale your infrastructure.


Need Architecture Advice?

We rescue startups from distributed hell.

Hire our Architects.