v1.2.0 Stable GitHub

Teeter Documentation

A high-performance Proxying Load Balancer built with Go. Designed to be extremely fast, highly resilient, and deeply observable. Whether you're routing web traffic or internal microservices, Teeter is built to withstand chaos.

Core Concepts

As modern web applications scale, relying on a single upstream server becomes a critical bottleneck. High traffic volumes can quickly saturate a server's memory, CPU, or network bandwidth, leading to degraded performance and eventual downtime.

To solve this, infrastructure scales horizontally across multiple servers. A Load Balancer (like Teeter) acts as the centralized ingress proxy. It sits directly in front of your upstream nodes, intercepts thousands of concurrent network requests, and intelligently distributes them across the backend fleet to guarantee high availability and minimal latency.

1. Routing (Prefixing)

Load balancers inspect the requested URL immediately upon ingress. If a client targets a specific endpoint (e.g., /api/login), Teeter parses the path and routes the traffic only to the backends configured for that prefix.
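Prefix matching can be sketched in a few lines of Go. The types and names below are illustrative, not Teeter's internal API:

```go
package main

import (
	"fmt"
	"strings"
)

// Route pairs a URL prefix with the pool of backends that serve it.
// These types are a sketch, not Teeter's actual definitions.
type Route struct {
	Prefix   string
	Backends []string
}

// Match returns the first route whose prefix matches the request path.
func Match(routes []Route, path string) (Route, bool) {
	for _, r := range routes {
		if strings.HasPrefix(path, r.Prefix) {
			return r, true
		}
	}
	return Route{}, false
}

func main() {
	routes := []Route{
		{Prefix: "/api/", Backends: []string{"http://api-1:8080"}},
		{Prefix: "/cart", Backends: []string{"http://cart-1:3000"}},
	}
	if r, ok := Match(routes, "/api/login"); ok {
		fmt.Println("routing to", r.Backends[0])
	}
}
```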

2. Health Checking

Network components inevitably face transient failures. Teeter implements continuous background polling against configured upstream health-check endpoints. Any node failing to reply gracefully is autonomously evicted from the live traffic rotation until resolved.
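The polling loop can be sketched in Go as follows. The `/health` path, the 2-second timeout, and the 5-second interval are assumptions for illustration, not Teeter's actual defaults:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// healthCheck reports whether a backend answered its health endpoint with 200.
// The "/health" path and 2s timeout are illustrative assumptions.
func healthCheck(base string) bool {
	client := http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(base + "/health")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// poll checks every backend on a fixed interval and records the result.
func poll(backends []string, interval time.Duration, alive *sync.Map, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			for _, b := range backends {
				alive.Store(b, healthCheck(b))
			}
		}
	}
}

func main() {
	var alive sync.Map
	stop := make(chan struct{})
	go poll([]string{"http://localhost:8080"}, 5*time.Second, &alive, stop)
	time.Sleep(10 * time.Millisecond)
	close(stop)
	fmt.Println("health poller stopped")
}
```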

How Teeter Works

Teeter combines the basics of load balancing with modern cloud-native resiliency patterns.

User Request
Gateway: Match URL path
Queue: Prevent sudden spikes
Rate Limit: Stop spam
Strategy: Pick a backend
Circuit Breaker: Fail fast on errors
Final Delivery to Target Server


Complete File Tour

Each file in Teeter belongs to a well-defined domain. Below is what every file in the repository does.

Core Entry & Configuration

lb/cmd/lb/main.go

The main entry point. Reads the config, initializes every component (routers, strategies, health), and starts the primary HTTP server loop.

lb/pkg/config/config.go

Parses the config.yaml file into a Go struct so the application understands your routing rules and timeout preferences.

config.yaml

Your user-facing brain. It's where you define the routes, algorithms, and backend URLs.

The Traffic Logic (lb/internal/)

gateway/gateway.go

Accepts raw HTTP requests and funnels them into the load balancer lifecycle.

gateway/router.go & route.go

Inspects the request URL string and matches it against your configured Prefix filters.

registry/registry.go

A thread-safe memory map keeping strict track of which configured backend servers are currently alive vs. dead.
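A thread-safe registry like this is typically built around a mutex-guarded map. Here is a simplified sketch of the idea, not the actual API of `registry.go`:

```go
package main

import (
	"fmt"
	"sync"
)

// Registry tracks which backends are currently considered alive.
// A sketch of the concept, not Teeter's actual implementation.
type Registry struct {
	mu    sync.RWMutex
	alive map[string]bool
}

func NewRegistry() *Registry {
	return &Registry{alive: make(map[string]bool)}
}

// SetAlive records a backend's health status (called by the health checker).
func (r *Registry) SetAlive(url string, up bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.alive[url] = up
}

// Healthy returns only the backends currently marked alive.
func (r *Registry) Healthy(urls []string) []string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	var out []string
	for _, u := range urls {
		if r.alive[u] {
			out = append(out, u)
		}
	}
	return out
}

func main() {
	reg := NewRegistry()
	reg.SetAlive("http://a:8080", true)
	reg.SetAlive("http://b:8080", false)
	fmt.Println(reg.Healthy([]string{"http://a:8080", "http://b:8080"}))
}
```

The read/write mutex lets many request goroutines check liveness concurrently while the health checker holds the exclusive lock only briefly.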

health/checker.go

Runs continuous background timers to ping every backend's health-check URL, automatically updating the registry if one goes down.

handler/handler.go

Implements the reverse-proxy logic: it takes the user's request and forwards it to the chosen target server.

handler/error_pages.go

If every single backend server is offline, this returns a beautiful "Service Unavailable" HTML page instead of an ugly raw text error.

Resiliency & Math (lb/internal/)

queue/queue.go

If Teeter receives 10,000 requests in a split second, the Queue briefly buffers them so backend servers don't crash from the flood.
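One common way to implement this kind of buffering in Go is a bounded channel that sheds load when full. A minimal sketch of that pattern (the capacity is illustrative):

```go
package main

import "fmt"

// Enqueue tries to buffer a request token; a full queue is rejected
// immediately instead of blocking the client. A sketch of the pattern,
// not Teeter's actual queue.
func Enqueue(q chan struct{}) bool {
	select {
	case q <- struct{}{}:
		return true
	default:
		return false // queue full: shed load rather than pile up
	}
}

// Dequeue frees one slot once a backend finishes the request.
func Dequeue(q chan struct{}) {
	<-q
}

func main() {
	q := make(chan struct{}, 2) // buffer up to 2 in-flight requests
	fmt.Println(Enqueue(q), Enqueue(q), Enqueue(q)) // true true false
}
```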

retry/retry.go

Transparently recovers failed requests by retrying them against the next healthy backend.

circuitbreaker/circuitbreaker.go

Enforces maximum error thresholds, "tripping" and stopping all flow to failing backends until they recover.

ratelimit/bucket.go

Limits the number of requests a single IP address can make per minute, mitigating spam and denial-of-service floods.
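The file name suggests a token-bucket limiter: each client gets a bucket of tokens that refills at a steady rate, and a request spends one token. A minimal sketch of that algorithm (the fields and refill math are illustrative, not `bucket.go`'s actual contents):

```go
package main

import (
	"fmt"
	"time"
)

// Bucket is a simple token bucket: up to capacity tokens, refilled at
// rate tokens per second. An illustrative sketch.
type Bucket struct {
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func NewBucket(capacity, rate float64) *Bucket {
	return &Bucket{tokens: capacity, capacity: capacity, rate: rate, last: time.Now()}
}

// Take refills based on elapsed time, then spends one token if available.
func (b *Bucket) Take() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := NewBucket(2, 1) // burst of 2, then 1 request per second
	fmt.Println(b.Take(), b.Take(), b.Take()) // third exceeds the burst
}
```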

Selection Strategies (lb/internal/strategy/)

strategy.go

The generic interface every selection algorithm must implement.

roundrobin.go

Rotates sequentially through every active backend, one request at a time.
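Round robin is commonly implemented with an atomic counter so concurrent request goroutines never race. A sketch of the technique (not Teeter's actual API):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// RoundRobin cycles through backends using an atomic counter, making
// Next safe to call from many request goroutines at once.
type RoundRobin struct {
	n        uint64
	backends []string
}

// Next returns the next backend in strict rotation.
func (r *RoundRobin) Next() string {
	i := atomic.AddUint64(&r.n, 1) - 1
	return r.backends[i%uint64(len(r.backends))]
}

func main() {
	rr := &RoundRobin{backends: []string{"a", "b", "c"}}
	fmt.Println(rr.Next(), rr.Next(), rr.Next(), rr.Next()) // a b c a
}
```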

leastconnections.go

Maintains a counter of active requests. Routes the next request to the server with the lowest current count.
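The counting-and-picking logic can be sketched as a mutex-guarded map of in-flight counts. A simplified illustration, not Teeter's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// LeastConn tracks in-flight requests per backend and picks the least loaded.
type LeastConn struct {
	mu     sync.Mutex
	active map[string]int
}

func NewLeastConn(backends []string) *LeastConn {
	active := make(map[string]int)
	for _, b := range backends {
		active[b] = 0
	}
	return &LeastConn{active: active}
}

// Acquire picks the backend with the fewest active requests and increments it.
func (lc *LeastConn) Acquire() string {
	lc.mu.Lock()
	defer lc.mu.Unlock()
	best, bestN := "", -1
	for b, n := range lc.active {
		if bestN == -1 || n < bestN {
			best, bestN = b, n
		}
	}
	lc.active[best]++
	return best
}

// Release decrements the counter once the request completes.
func (lc *LeastConn) Release(b string) {
	lc.mu.Lock()
	defer lc.mu.Unlock()
	lc.active[b]--
}

func main() {
	lc := NewLeastConn([]string{"w1", "w2"})
	a := lc.Acquire() // both idle: either may win
	b := lc.Acquire() // the other is now less loaded
	fmt.Println(a != b) // true
}
```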

weightedroundrobin.go

Allocates requests based on an assigned integer weight.
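One simple way to realize integer weights is to expand each backend into weight-many slots per rotation cycle. This is a naive sketch of the idea (real implementations often interleave slots to avoid bursts; this is not Teeter's actual algorithm):

```go
package main

import "fmt"

// Weighted pairs a backend with its share of each rotation cycle.
type Weighted struct {
	URL    string
	Weight int
}

// Schedule expands weights into a repeating rotation: a weight-8 backend
// appears 8 times per cycle, a weight-2 backend twice.
func Schedule(backends []Weighted) []string {
	var out []string
	for _, b := range backends {
		for i := 0; i < b.Weight; i++ {
			out = append(out, b.URL)
		}
	}
	return out
}

func main() {
	cycle := Schedule([]Weighted{{"mainframe", 8}, {"pi", 2}})
	fmt.Println(len(cycle)) // 10 slots per cycle: 8 mainframe, 2 pi
}
```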

../consistenthashing/...

Hashes the client's IP so that a specific user is ALWAYS routed to the same backend server.
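The core idea is a hash ring: backends are hashed onto a circle, and a client key is served by the first backend clockwise from its own hash. A minimal sketch using FNV hashing (Teeter's ring may differ, e.g. by adding virtual nodes):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring. Illustrative only.
type Ring struct {
	points   []uint32
	backends map[uint32]string
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func NewRing(backends []string) *Ring {
	r := &Ring{backends: make(map[uint32]string)}
	for _, b := range backends {
		p := hashOf(b)
		r.points = append(r.points, p)
		r.backends[p] = b
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// Pick maps a client key (e.g. an IP address) to a stable backend.
func (r *Ring) Pick(key string) string {
	h := hashOf(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0 // wrap around the circle
	}
	return r.backends[r.points[i]]
}

func main() {
	ring := NewRing([]string{"http://s1:3000", "http://s2:3000"})
	fmt.Println(ring.Pick("203.0.113.7") == ring.Pick("203.0.113.7")) // true
}
```

Because only the keys between a removed backend and its predecessor move, adding or removing a backend remaps a small fraction of clients rather than all of them.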

Monitoring & DevOps

admin/admin.go

Spins up a separate admin server on a private port (usually `:1997`) that returns JSON stats on the load balancer's current performance.

metrics/metrics.go

Records latency distributions and error codes in a format ready for Prometheus scraping.

monitoring/...

Contains prometheus.yml and grafana/datasources which configure external tools to visualize stats.

Dockerfile & docker-compose.yml

Container tooling. Compiles everything into a small binary image and launches it alongside its monitoring stack.

Routing Strategy Examples

Teeter supports four distinct strategies for distributing traffic. Below are real-world examples explaining when and how to apply each algorithm inside your config.yaml.

1. Round Robin

The Scenario: Standard Web APIs.
Imagine a Shoe Store API. You have 3 identical cloud servers. You simply want to distribute traffic evenly without overthinking it. Teeter gives Request #1 to Server A, Request #2 to Server B, Request #3 to Server C, Request #4 back to Server A.

- prefix: "/api/"
  strategy: "round_robin"
  backends:
    - url: "http://shoes-server-1:8080"
    - url: "http://shoes-server-2:8080"
    - url: "http://shoes-server-3:8080"

2. Least Connections

The Scenario: Heavy Image Processing.
Imagine an app where users upload large raw files to be converted to JPEG. Some files take 1 second to convert, others take 30 seconds. A simple Round Robin algorithm might accidentally give a massive 30-second job to a server that is already choking.

Least Connections dynamically monitors how many active requests each server is currently processing, and routes the new user to whichever server is currently the most idle.

- prefix: "/upload/process"
  strategy: "least_connections"
  backends:
    - url: "http://worker-1:9090"
    - url: "http://worker-2:9090"

3. Weighted Round Robin

The Scenario: Uneven Hardware / Progressive Rollouts.
Imagine you have a massive Bare-Metal Mainframe Server, but you also plugged in a weak Raspberry Pi. You want them both to help process traffic, but you know the Pi will crash if it receives standard traffic.

You can assign weights. A weight of `8` for the Mainframe and `2` for the Pi means the Mainframe receives 80% of all traffic, and the Pi safely takes the remaining 20%.

- prefix: "/heavy-data"
  strategy: "weighted_round_robin"
  backends:
    - url: "http://powerful-mainframe:80"
      weight: 8
    - url: "http://weak-raspberry-pi:80"
      weight: 2

4. Consistent Hashing

The Scenario: Caching and User Sessions.
Imagine a service that holds a user's temporary shopping cart in local memory. If John logs in and hits Server A, his cart is on Server A. If his next click is routed to Server B, his cart disappears!

Consistent hashing applies a hash function to John's IP address. No matter how many times John refreshes the page, the load balancer's math ensures John is **always** predictably routed to Server A, keeping his local session intact.

- prefix: "/cart"
  strategy: "consistent_hashing"
  backends:
    - url: "http://stateful-server-1:3000"
    - url: "http://stateful-server-2:3000"

Getting Started

Deploy Teeter in seconds using Docker. The ecosystem comes pre-configured for instant setup.

1. Start the Environment

Spin up the Gateway, Backend, and Prometheus containers.

docker-compose up -d

2. Verify Logs

Ensure all traffic processes boot successfully.

docker logs -f teeter

3. Access Dashboard

  • Admin API Health: http://localhost:1997/health
  • Live Statistics (Grafana): http://localhost:3000