The conversation around microservices often skips an important question: what problem are you actually solving? For most teams — especially early-stage or teams smaller than twenty engineers — the operational overhead of microservices (distributed tracing, independent deploys, network failures, service meshes) creates more problems than it solves. A modular monolith is frequently the right default: single deployable, strong internal boundaries, easy to evolve.
This post is about building one in Go.
## What “modular” actually means
A module isn’t just a package or a directory. It’s a unit of code that owns its domain, defines a minimal public interface, and doesn’t leak its internals to the rest of the system. In practice:
- Each module owns its types, storage access, and business logic
- Other modules interact only through the module’s published interface
- An orchestration layer (`internal/app`) wires everything together — the modules themselves don’t know about each other
If you can replace a module’s implementation without touching anything outside its interface, the boundary is drawn correctly.
## Directory layout
```
cmd/service/
  main.go          ← wire dependencies, start server
internal/
  app/             ← composition root, owns all wiring
  orders/
    service.go     ← public interface + types
    repository.go
    model.go
  billing/
    service.go
    repository.go
  users/
    service.go
    repository.go
pkg/               ← shared helpers (logging, retry, pagination)
api/               ← protobuf / OpenAPI definitions
migrations/
go.mod
```
`internal/app` is the composition root — it instantiates all modules and injects their dependencies. `main.go` calls `app.New()` and starts the server. Nothing outside `app` knows about the concrete implementations.
## Module composition
```mermaid
graph TD
  cmd["cmd/service"] -->|starts| app["internal/app"]
  app -->|injects| orders["orders"]
  app -->|injects| users["users"]
  app -->|injects| billing["billing"]
  orders -->|"OrderCreated event"| billing
```
`app` holds references to all modules. It is the only place in the codebase that sees concrete types. Every module receives its dependencies as interfaces.
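To make that concrete, here is a minimal, self-contained sketch of a composition root. The interface, constructor, and field names (`OrderService`, `New`, `Charge`) are illustrative assumptions, not the post’s actual API; in the real layout they would live in their own packages under `internal/`:

```go
// Composition-root sketch: the only place concrete types appear.
package main

import "fmt"

// OrderService and BillingService stand in for the interfaces each
// module would publish from its service.go.
type OrderService interface {
	Create(id string) string
}

type BillingService interface {
	Charge(orderID string) string
}

// Concrete implementations — normally unexported inside their modules.
type orderSvc struct{}

func (orderSvc) Create(id string) string { return "order:" + id }

type billingSvc struct {
	orders OrderService // injected as an interface, never a concrete type
}

func (b billingSvc) Charge(orderID string) string {
	return "charged " + b.orders.Create(orderID)
}

// App holds every module; callers only ever see the interfaces.
type App struct {
	Orders  OrderService
	Billing BillingService
}

// New wires everything. Concrete constructors are called here and nowhere else.
func New() *App {
	o := orderSvc{}
	return &App{Orders: o, Billing: billingSvc{orders: o}}
}

func main() {
	app := New()
	fmt.Println(app.Billing.Charge("42")) // charged order:42
}
```

Swapping `orderSvc` for another implementation changes one line in `New` and nothing else — which is exactly the test for a correctly drawn boundary.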
## Three patterns that make this work
### 1. Interface-driven boundaries
Each module publishes a small, focused interface:
```go
// internal/orders/service.go
type Service interface {
	Create(ctx context.Context, o Order) (Order, error)
	Get(ctx context.Context, id string) (Order, error)
}
```
`internal/app` injects this interface into `billing` — not `*orders.ServiceImpl`. This keeps modules independently testable and decouples their evolution.
### 2. In-process events for async flows
Not everything needs a direct call. When an order is created, billing needs to react — but billing shouldn’t block order creation.
Use a simple in-process event bus:
```go
// orders publishes after a successful Create
bus.Publish(ctx, "orders.created", OrderCreatedEvent{OrderID: o.ID, Amount: o.Total})

// billing subscribes at startup
bus.Subscribe("orders.created", billing.HandleOrderCreated)
```
This keeps the order creation path fast and makes billing’s behavior independently testable. When you eventually need durability, you swap the in-process bus for a real queue — the module code doesn’t change.
### 3. Anti-corruption at boundaries
When one module needs data from another, translate it at the boundary. Don’t pass orders.Order into billing — define a billing.OrderSummary and convert explicitly. This prevents your internal representation from bleeding across modules and making future refactoring painful.
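A sketch of that translation, with both types in one file for brevity; the field names are illustrative assumptions:

```go
// Boundary translation sketch: billing defines its own summary type
// and converts explicitly at the edge.
package main

import "fmt"

// ordersOrder stands in for orders.Order, the producing module's rich type.
type ordersOrder struct {
	ID       string
	Total    int    // cents
	Internal string // details billing should never see
}

// OrderSummary is billing's own view: only the fields billing needs.
type OrderSummary struct {
	OrderID     string
	AmountCents int
}

// toSummary is the explicit, boring conversion at the boundary.
func toSummary(o ordersOrder) OrderSummary {
	return OrderSummary{OrderID: o.ID, AmountCents: o.Total}
}

func main() {
	s := toSummary(ordersOrder{ID: "o-9", Total: 1250, Internal: "draft"})
	fmt.Println(s.OrderID, s.AmountCents) // o-9 1250
}
```

The conversion is deliberately dull. Its value is what it *prevents*: a later rename of `orders.Order.Total` touches one function instead of every file in `billing`.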
## Testing strategy
- Unit tests: each module in isolation with mocked interfaces. Fast, no I/O.
- Integration tests: spin up the real `app` with an in-memory or test database and exercise the composed system end to end.
- Don’t share test helpers across modules. A little duplication is far less harmful than cross-module coupling in test code.
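The first bullet in miniature, using a hypothetical `Repository` interface and an in-memory fake (the names `Greet` and `fakeRepo` are illustrative):

```go
// Unit-test sketch: pure logic over an interface, no I/O.
package main

import (
	"errors"
	"fmt"
)

// Repository is the storage interface a module would own.
type Repository interface {
	Find(id string) (string, error)
}

// fakeRepo is an in-memory fake — fast, deterministic, no database.
type fakeRepo map[string]string

func (f fakeRepo) Find(id string) (string, error) {
	v, ok := f[id]
	if !ok {
		return "", errors.New("not found")
	}
	return v, nil
}

// Greet is the unit under test: business logic against the interface.
func Greet(r Repository, id string) (string, error) {
	name, err := r.Find(id)
	if err != nil {
		return "", err
	}
	return "hello, " + name, nil
}

func main() {
	got, _ := Greet(fakeRepo{"u1": "ada"}, "u1")
	fmt.Println(got) // hello, ada
}
```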
## When to actually split into services
Split when a module genuinely needs:
- Independent scaling — this module gets 10× the traffic of everything else and you’ve measured it
- Independent deployment — compliance or regulatory requirements isolate it
- A different technology — the module has constraints the monolith can’t satisfy
“It might need to scale someday” is not a reason. Operational complexity is a real cost — pay it only when the alternative is worse.
## Closing thoughts
A modular monolith isn’t a compromise on the way to microservices — it’s a deliberate architectural choice that keeps operational complexity low while preserving the ability to refactor and extract later. In Go, internal/, small interfaces, an in-process event bus, and a thin composition root are all you need to build a system that scales in team size without exploding in production complexity.
Start here. Extract services when the pressure is real and the boundary is proven.
This is Part 2 of the Go Architecture series. Start with How to structure Go projects if you haven’t already.