Developer experience: localhost

Posted on 30 September 2021.

Part of our job as software engineers involves building or using local development environments. When working on service-oriented architectures, these local environments rarely run a single service at once. We often end up with services listening on various ports: 8080, 8081, 8082, 3000, 5000. The list never really stops growing. Memorizing which port maps to which service adds unnecessary cognitive load when we want 100% of our focus to be on our code.

I believe we can improve the developer experience by making our local container-based environments slightly more production-like. We may never need to type localhost into our browser or terminal again.

Why I don't use localhost anymore

For engineers who work on many different services, localhost:8080 can have a different meaning depending on which day of the week it is. If I type localhost into my browser, I get seemingly random completions that mainly depend on what I was working on last.

[Screenshot: the browser's address-bar completions for localhost]

People don't think in ports. We think of our applications as services, and that is what our architectures reflect. If I want to send a request to the authentication service running on my machine, I don't want to translate that destination into something unrelated like localhost:8080. The same goes for my web frontend, which listens on localhost:8081, and my backend on localhost:8082. Or was it the other way around?

[Diagram: multiple local services, each reachable only through its own localhost port]

In production, we don't have this problem. Modern container-based environments often have a reverse proxy that forwards requests from external clients to internal services. With a setup like that, all we need is a distinct hostname for each service.

[Diagram: a single reverse proxy on port 80 routing each hostname to its service]

Any .vcap.me hostname resolves to 127.0.0.1, just like localhost does: vcap.me is a public domain whose wildcard DNS record points every subdomain at the loopback address. If I send a request to http://auth.vcap.me/, whatever is listening on port 80 on my machine receives it. By making that listener a reverse proxy, I can have a hostname for each service I run locally. I don't need to specify ports anymore. All I need is a good name.
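
You can verify the DNS part from any shell; the curl lines below assume the reverse proxy from the next sections is already listening on port 80:

# Every *.vcap.me name resolves to the loopback address:
dig +short auth.vcap.me      # 127.0.0.1
dig +short anything.vcap.me  # 127.0.0.1

# A reverse proxy on port 80 routes on the Host header,
# so these two requests are equivalent:
curl http://auth.vcap.me/
curl -H "Host: auth.vcap.me" http://127.0.0.1/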

I have been using this setup for several months now, and I love it. I can type front + ENTER in my browser to open my web frontend. I can search curl auth in my shell history for a list of requests sent to my authentication service. Every day I save time and can maintain focus on the task at hand.

Setup with Docker Compose

For a local environment based on Compose, I recommend using nginx-proxy as the reverse proxy. It watches the Docker socket for running containers, looks for environment variables like VIRTUAL_HOST in their metadata, and uses that information to configure a high-performance NGINX proxy; this is why the compose file below mounts /var/run/docker.sock into the reverse-proxy container. Here is what your docker-compose.yml file could look like:

version: "3.9"
services:
  reverse-proxy:
    image: jwilder/nginx-proxy:0.9-alpine
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  auth:
    build: ./auth
    environment:
      - VIRTUAL_HOST=auth.vcap.me # for reverse proxy
      - VIRTUAL_PORT=8080 # for reverse proxy
  frontend:
    build: ./frontend
    environment:
      - VIRTUAL_HOST=front.vcap.me # for reverse proxy
      - VIRTUAL_PORT=3000 # for reverse proxy
  backend:
    build: ./backend
    environment:
      - VIRTUAL_HOST=back.vcap.me # for reverse proxy
      - VIRTUAL_PORT=8080 # for reverse proxy
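
With the auth, frontend, and backend directories standing in for your own services, bringing the stack up is all it takes; nginx-proxy discovers the containers and configures itself:

docker compose up --build -d

# Each service now answers on its own hostname, all through port 80:
curl http://auth.vcap.me/
curl http://front.vcap.me/
curl http://back.vcap.me/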

Setup with Kubernetes

In a Kubernetes environment, the reverse proxy role is played by an ingress controller, and the de facto standard is the NGINX ingress controller. To run Kubernetes locally, I recommend kind (short for Kubernetes in Docker). Follow these steps to deploy the ingress controller:

  1. Configure your cluster to allow the ingress controller to run and listen on your machine's port 80. Write the following to a file called kind-cluster.yml:

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        kubeadmConfigPatches:
          - |
            kind: InitConfiguration
            nodeRegistration:
              kubeletExtraArgs:
                node-labels: "ingress-ready=true"
        extraPortMappings:
          - containerPort: 80
            hostPort: 80
  2. Create the cluster and deploy the NGINX ingress controller (we will verify the result right after these steps):

    kind create cluster --config=kind-cluster.yml
    
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
    kubectl rollout status deployment --namespace=ingress-nginx ingress-nginx-controller
    kubectl delete validatingwebhookconfiguration ingress-nginx-admission # Workaround for this issue: https://github.com/kubernetes/ingress-nginx/issues/5401
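
To check that the controller now owns port 80, send a request for a hostname that has no Ingress rule yet; the controller's default backend should answer:

# With no matching Ingress rule, the ingress controller serves a 404:
curl http://vcap.me/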

All you need to expose a service with this setup is an Ingress resource with a .vcap.me hostname. The NGINX reverse proxy will configure itself based on the rules you specify. Try it out with these manifests that use httpbin.vcap.me:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
spec:
  rules:
    - host: httpbin.vcap.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  name: http

---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
    - name: http
      port: 80
      targetPort: http

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - name: http
              containerPort: 80
          resources:
            limits:
              cpu: 1
              memory: 128Mi
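
Save the three manifests to one file (httpbin.yml here; the name is arbitrary) and apply them. httpbin echoes requests back as JSON on its /get endpoint, which makes it a convenient target:

kubectl apply -f httpbin.yml
kubectl rollout status deployment httpbin

# The ingress controller picks up the new rule and starts routing:
curl http://httpbin.vcap.me/get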

What next?

Developer productivity is a determining factor in the success of any web product. The cognitive load of running many services locally can slow engineers down, but modern container-based development environments provide opportunities for improvement. Solutions used in production, like reverse proxies, can also apply to local development. What other enhancements can we make to improve our teams' efficiency further?