Your Startup Doesn't Need Microservices

TL;DR

Microservices add operational complexity that small teams can't afford. Monoliths are faster to build, easier to debug, and simpler to deploy. Split services when you have a reason, not because it's trendy.

I used to think microservices were the mark of a serious engineering team. Monoliths were legacy, something you had to eventually "grow out of." Real engineers built distributed systems.

Then I watched a 4-person startup spend six months building microservices infrastructure before shipping a single feature. They had Kubernetes configs, service meshes, and distributed tracing. They also had massive AWS bills and a product that didn't exist yet.

Meanwhile, their competitor launched in 3 weeks with a single Django app on a $5 VPS and started making money.

That's when I realized most teams are cargo-culting architecture from companies that have problems they'll never have.

The Microservices Trap

Here's what "properly" building microservices looks like for a simple e-commerce site:

# docker-compose.yml - just to run locally
version: '3.8'
services:
  user-service:
    build: ./services/user
    environment:
      - DATABASE_URL=postgres://...
      - KAFKA_BROKERS=kafka:9092
      - CONSUL_ADDR=consul:8500
    depends_on:
      - postgres-users
      - kafka
      - consul

  product-service:
    build: ./services/product
    environment:
      - DATABASE_URL=postgres://...
      - KAFKA_BROKERS=kafka:9092
      - CONSUL_ADDR=consul:8500
    depends_on:
      - postgres-products
      - kafka
      - consul

  order-service:
    build: ./services/order
    environment:
      - DATABASE_URL=postgres://...
      - KAFKA_BROKERS=kafka:9092
      - CONSUL_ADDR=consul:8500
      - USER_SERVICE_URL=http://user-service:8001
      - PRODUCT_SERVICE_URL=http://product-service:8002
    depends_on:
      - postgres-orders
      - kafka
      - consul

  payment-service:
    build: ./services/payment
    # ... you get the idea

  postgres-users:
    image: postgres:15

  postgres-products:
    image: postgres:15

  postgres-orders:
    image: postgres:15

  kafka:
    image: confluentinc/cp-kafka:latest

  zookeeper:
    image: confluentinc/cp-zookeeper:latest

  consul:
    image: consul:latest

  api-gateway:
    build: ./gateway

  # Plus monitoring, tracing, service mesh...

Just to run this locally, you need:

  • 4 separate services with their own codebases
  • 3 PostgreSQL databases
  • Kafka + Zookeeper for messaging
  • Consul for service discovery
  • An API gateway
  • 8+ GB of RAM just to start the damn thing

And you still haven't written a line of business logic.

Compare to the monolith:

# main.py
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/users', methods=['POST'])
def create_user():
    # User logic here
    pass

@app.route('/products', methods=['GET'])
def list_products():
    # Product logic here
    pass

@app.route('/orders', methods=['POST'])
def create_order():
    # Order logic here
    pass

if __name__ == '__main__':
    app.run()

# Run it
python main.py

# Deploy it
scp main.py server:/app/
ssh server 'systemctl restart app'

Which team ships features faster?

The Problems Microservices Actually Solve

Microservices aren't inherently bad. They solve real problems at scale:

1. Truly Independent Team Scaling

When you have 50+ engineers and need teams to deploy independently without coordinating:

Team A deploys payment-service v2.3.1
Team B deploys user-service v1.8.2
Team C deploys order-service v3.1.0

All happening simultaneously without conflicts

But if you have 5 engineers, they can just... talk to each other.

2. Different Scaling Requirements

When different parts of your app have wildly different resource needs:

Product search: 10,000 req/sec, CPU-intensive, needs 20 instances
User auth: 100 req/sec, IO-bound, needs 2 instances
Admin panel: 5 req/sec, needs 1 instance

But horizontally scaling a monolith is often good enough:

# Just run more instances behind a load balancer
# (pm2 cluster mode forks 10 workers of the same app)
pm2 start app.js -i 10

3. Technology Diversity

When you need different languages for different problems:

Image processing: Python with OpenCV
Real-time chat: Go for concurrency
Admin dashboard: Ruby on Rails for speed

But most apps don't need this. JavaScript/Python/Go can handle 95% of web applications.

What You Actually Get With Microservices

Debugging Becomes a Nightmare

Monolith:

# Error happens, you see the full stack trace
Traceback (most recent call last):
  File "app.py", line 45, in create_order
    user = get_user(user_id)
  File "users.py", line 23, in get_user
    return db.query(User).filter_by(id=user_id).first()
  File "database.py", line 67, in query
    raise DatabaseError("Connection timeout")
DatabaseError: Connection timeout

Microservices:

Order service: Failed to create order
  → Called user service
    → user-service returned 500
      → Checked Grafana: user-service is up
        → Checked logs: "Timeout connecting to database"
          → Which database? Checked Consul
            → postgres-users-01 is the leader
              → SSH'd into postgres-users-01
                → Disk is full
                  → Why? Log rotation failed
                    → Because the cron job has wrong permissions
                      → Because someone changed the Docker image
                        → 3 hours later: found the problem

One error, three hours of distributed debugging across multiple systems.

Transactions Become Impossible

Monolith:

from django.db import transaction

@transaction.atomic
def transfer_money(from_user, to_user, amount):
    from_user.balance -= amount
    to_user.balance += amount
    from_user.save()
    to_user.save()
    create_transaction_record(from_user, to_user, amount)
    # All succeeds or all fails atomically

Microservices:

// Order service
async function createOrder(userId, productId, amount, points) {
    // Step 1: Reserve inventory
    await productService.reserveInventory(productId);

    // Step 2: Charge user (OH NO, this failed!)
    try {
        await paymentService.charge(userId, amount);
    } catch (e) {
        // Now we need to unreserve inventory
        await productService.unreserveInventory(productId);
        throw e;
    }

    // Step 3: Update user points
    try {
        await userService.addPoints(userId, points);
    } catch (e) {
        // Crap, need to reverse the charge AND unreserve inventory
        await paymentService.refund(userId, amount);
        await productService.unreserveInventory(productId);
        throw e;
    }
}

Welcome to distributed transactions, saga patterns, and eventual consistency. Your simple operation now has 8 failure modes and requires complex compensation logic.

Local Development Becomes Painful

Monolith:

git clone repo
npm install
npm run dev
# App running in 30 seconds

Microservices:

git clone user-service
git clone product-service
git clone order-service
git clone payment-service
docker-compose up  # Downloads 8GB of images
# Wait 5 minutes
# Out of memory, need to adjust Docker settings
# Change Docker to 8GB RAM
docker-compose up
# Wait another 5 minutes
# kafka won't start, some port conflict
# Fix port conflicts
docker-compose up
# Finally running
# Change one line of code
# Need to rebuild Docker image
# Wait 2 minutes
# Repeat forever

New engineers spend their first week just getting the dev environment running.

Deployment Complexity Explodes

Monolith:

# Deploy script
git pull
npm install
npm run build
pm2 restart app
# Done in 60 seconds

Microservices:

# Kubernetes deployment for ONE service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: registry.example.com/user-service:v1.2.3
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: user-db-secret
              key: url
        - name: KAFKA_BROKERS
          valueFrom:
            configMapKeyRef:
              name: kafka-config
              key: brokers
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
  - port: 80
    targetPort: 8080
---
# And this is just ONE service...

Now multiply by 10 services, add Ingress configs, service meshes, and monitoring. You need a dedicated DevOps engineer just to keep things running.

When Monoliths Actually Scale

Stack Overflow ran on a monolith serving billions of requests with a handful of servers.

Shopify handles Black Friday traffic (80k req/sec) with a Rails monolith.

GitHub ran on a monolith until they had 350+ engineers.

If your app doesn't have Stack Overflow traffic, you don't need microservices.

The Right Way to Build

Start With a Well-Structured Monolith

app/
  models/          # Database models
  services/        # Business logic
    user_service.py
    product_service.py
    order_service.py
  api/             # HTTP handlers
    user_routes.py
    product_routes.py
    order_routes.py
  lib/             # Shared utilities
    database.py
    cache.py
  tests/

Keep services as separate modules within the monolith. If you later need to extract one, the boundaries are already clear.

Use Feature Flags Instead of Services

# Instead of a separate recommendation-service
@feature_flag('ml_recommendations', fallback=get_popular_products)
def get_recommendations(user_id):
    # New ML-based recommendations when the flag is on;
    # falls back to simple rule-based get_popular_products() when it's off
    return ml_model.predict(user_id)

Turn features on/off without deploying new services.
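
The `feature_flag` decorator isn't a standard library — a minimal sketch of how it might work, assuming flags live in a plain dict (swap in your real config store):

```python
FLAGS = {'ml_recommendations': False}  # stand-in for a real config store

def feature_flag(name, fallback):
    """Call the decorated function when the flag is on, else the fallback."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            if FLAGS.get(name):
                return func(*args, **kwargs)
            return fallback(*args, **kwargs)
        return wrapper
    return decorator

def get_popular_products(user_id):
    # Simple rule-based fallback
    return ['popular-1', 'popular-2']

@feature_flag('ml_recommendations', fallback=get_popular_products)
def get_recommendations(user_id):
    # Stand-in for ml_model.predict(user_id)
    return [f'ml-pick-{user_id}']
```

Flipping `FLAGS['ml_recommendations']` at runtime switches code paths with no deploy at all.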

Scale Vertically First

# Most apps never need more than this
# 32 core CPU, 128GB RAM, 2TB NVMe SSD
# $400/month on Hetzner
# Handles 100k+ requests/second

Vertical scaling is simpler, cheaper, and faster than managing distributed systems.

When You Actually Need to Split

Split services when you have concrete reasons:

Boundary is clear and stable:

Split: Payment processing (PCI compliance requirements)
Don't split: User profiles (changes frequently)

Scaling requirements are dramatically different:

Split: Image processing (CPU-bound, needs GPU instances)
Don't split: Admin panel (5 requests/day)

Teams are large and independent:

Split: 3 teams of 10 engineers each
Don't split: 5 engineers total

Technology requirements are incompatible:

Split: Real-time video transcoding (needs C++ with CUDA)
Don't split: REST API (can use any language)

Real-World Example: The Pivot

I worked with a startup that built their MVP as microservices. They had:

  • User service
  • Product service
  • Order service
  • Payment service
  • Notification service
  • Analytics service

Three months in, they realized their product wasn't working. Users wanted something completely different. Time to pivot.

With microservices:

  • 6 services to rewrite
  • Dozens of API contracts to change
  • Message schemas to update
  • Database migrations across 6 databases
  • Integration tests that spanned services
  • Estimated time: 4-6 weeks

A competitor with a monolith pivoted in 5 days and captured the market.

The Monolith Pattern I Use

# app/services/base.py
class Service:
    """Base class for all services"""
    def __init__(self, db, cache, logger):
        self.db = db
        self.cache = cache
        self.logger = logger

# app/services/user_service.py
class UserService(Service):
    def create_user(self, email, password):
        # All user logic here
        pass

    def get_user(self, user_id):
        # Check cache first
        cached = self.cache.get(f'user:{user_id}')
        if cached:
            return cached

        # Then database
        user = self.db.query(User).get(user_id)
        self.cache.set(f'user:{user_id}', user)
        return user

# app/services/order_service.py
class OrderService(Service):
    def __init__(self, db, cache, logger, user_service, product_service):
        super().__init__(db, cache, logger)
        self.user_service = user_service
        self.product_service = product_service

    def create_order(self, user_id, product_id):
        # Direct function calls - no network
        user = self.user_service.get_user(user_id)
        product = self.product_service.get_product(product_id)

        # Atomic transaction: the context manager commits on success
        with self.db.transaction():
            order = Order(user=user, product=product)
            self.db.add(order)
            product.inventory -= 1

        return order

Clear boundaries, testable, but all in one codebase. When you need to split, you can extract a service without rewriting everything.

The Middle Ground: Modular Monolith

# Each module has its own database schema
# But runs in the same process

# users/models.py
class User(Model):
    __tablename__ = 'users'
    __table_args__ = {'schema': 'users'}  # Postgres schema: users

# products/models.py
class Product(Model):
    __tablename__ = 'products'
    __table_args__ = {'schema': 'products'}  # Postgres schema: products

# Later, splitting is easier:
# 1. Give each schema its own database
# 2. Change function calls to HTTP calls
# 3. Deploy separately

You get clear boundaries without operational complexity.
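
Step 2 ("change function calls to HTTP calls") is painless when callers depend on a small interface rather than importing the other module directly. A sketch, with hypothetical names, of what that seam can look like:

```python
import json
import urllib.request
from typing import Protocol

class UserClient(Protocol):
    def get_user(self, user_id: int) -> dict: ...

class InProcessUserClient:
    """Before the split: a direct call into the users module."""
    def __init__(self, users_by_id: dict):
        self.users_by_id = users_by_id

    def get_user(self, user_id: int) -> dict:
        return self.users_by_id[user_id]

class HttpUserClient:
    """After the split: same interface, now over the network."""
    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_user(self, user_id: int) -> dict:
        with urllib.request.urlopen(f'{self.base_url}/users/{user_id}') as resp:
            return json.load(resp)

def create_order(users: UserClient, user_id: int, product: str) -> dict:
    # Callers never know which implementation they got
    user = users.get_user(user_id)
    return {'user': user['name'], 'product': product}
```

Swapping `InProcessUserClient` for `HttpUserClient` is then a one-line change at the wiring point — which is the whole payoff of keeping the boundary explicit.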

My Decision Framework

Team size < 10 engineers?

→ Monolith

You need speed, not scalability. One codebase, one deployment, one database.

Clear business domain boundaries?

→ Maybe split, but only if:

  • Teams are truly independent
  • Domains rarely change
  • Shared data is minimal

Scaling problems?

→ Try this first:

  1. Add indexes
  2. Add caching (Redis)
  3. Scale vertically (bigger server)
  4. Scale horizontally (more instances)
  5. Read replicas
  6. Only then: Consider splitting services
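
Step 2 alone often buys years of headroom. A minimal read-through cache sketch, with a plain dict standing in for Redis:

```python
import time

cache: dict = {}   # stand-in for Redis: key -> (value, expires_at)
TTL_SECONDS = 60

def get_product(product_id, db_lookup):
    """Read-through cache: serve from cache, fall back to the database."""
    key = f'product:{product_id}'
    hit = cache.get(key)
    if hit is not None and hit[1] > time.time():
        return hit[0]
    value = db_lookup(product_id)  # the expensive query
    cache[key] = (value, time.time() + TTL_SECONDS)
    return value
```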

Need different tech stacks?

→ Consider split, but ask:

  • Can we use a library instead?
  • Can we use a subprocess?
  • Can we use a queue + worker?

If yes to any, stay monolith.
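
The "queue + worker" option rarely needs a separate service either — a background thread in the same process covers most cases. A sketch using only the standard library:

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()
results = []

def worker():
    """Drain jobs in the background, inside the same process."""
    while True:
        job = jobs.get()
        if job is None:      # sentinel: shut down cleanly
            break
        results.append(f'processed {job}')  # stand-in for real work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# Request handlers just enqueue and return immediately
jobs.put('resize-image-42')
jobs.put('send-welcome-email')
jobs.join()  # only for the demo; handlers don't wait in production
```

Same deployment, same debugger, same stack traces — and if the workload ever outgrows one process, the enqueue call is the obvious seam to move behind a real queue.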

What You Gain With a Monolith

Faster development: Change 3 files instead of 3 services.

Easier debugging: Stack traces work. Debuggers work. grep works.

Atomic transactions: Database guarantees consistency.

Simple deployment: One build, one deploy, one rollback.

Lower costs: One server instead of 10. No Kubernetes.

Better performance: Function calls are faster than HTTP calls.

Easier testing: No mocking microservices. Integration tests actually test integration.

When Microservices Actually Win

You have 100+ engineers and need true team independence.

You have proven scaling issues that vertical/horizontal scaling can't solve.

You have genuine operational requirements like separate compliance boundaries or drastically different scaling needs.

You have organizational constraints like teams in different time zones that can't coordinate deployments.

Notice these are all organizational problems, not technical ones.

My Advice

Start with a monolith. Always. No exceptions.

Structure it well. Clear modules, clear boundaries, clear dependencies. Make it easy to split later if needed.

Split when you have a reason. Not because it's trendy, not because you might need it later. When you have a concrete problem that microservices solve.

Measure the cost. Every service adds complexity. Make sure the benefit is worth it.

The best architecture is the simplest one that solves your actual problems. For most teams, that's a well-built monolith.

I've seen too many startups waste months building microservices infrastructure while their competitors ship features. Don't be one of them.

Build a monolith. Ship features. Make money. Scale when you need to, not before.