Stay Hungry, Stay Foolish!

two load balancer methods

microservice without load balancer

 

An example without a load balancer.
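In the simplest setup there is no balancer at all: each microservice is published on its own host port, and the caller (or an API gateway route) addresses it directly. A hypothetical compose sketch, with illustrative service names and ports not taken from the linked repo:

```yaml
# No load balancer: one container per service, each on its own host port.
# A client (or gateway) must know which port belongs to which service.
services:
  users:
    build: ./users
    ports:
      - "8001:8000"   # http://localhost:8001 -> users service
  orders:
    build: ./orders
    ports:
      - "8002:8000"   # http://localhost:8002 -> orders service
```

The obvious limitation: scaling a service to multiple replicas is impossible this way, since two containers cannot publish the same host port.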

https://github.com/GavriloviciEduard/fastapi-microservices/tree/master

 

multiple upstream for load balancer

Treat each upstream service as an independent server.

traefik multiple upstream

https://github.com/tanishqmanuja/demo.traefik-load-balancing

 

https://github.com/tanishqmanuja/demo.traefik-load-balancing/blob/main/config/traefik/config.yaml

http:
  routers:
    delhi:
      entryPoints:
        - web
      rule: PathPrefix(`/`) && Headers(`X-FOR-LOCATION`, `DEL`)
      service: delhi
    bombay:
      entryPoints:
        - web
      rule: PathPrefix(`/`) && Headers(`X-FOR-LOCATION`, `BOM`)
      service: bombay
    all:
      entryPoints:
        - web
      rule: PathPrefix(`/`)
      service: all

  services:
    delhi:
      loadbalancer:
        servers:
          - url: http://server-delhi-alpha:8080
          - url: http://server-delhi-bravo:8080
    bombay:
      loadbalancer:
        servers:
          - url: http://server-bombay-alpha:8080
          - url: http://server-bombay-bravo:8080
    all:
      weighted:
        services:
          - name: delhi
            weight: 1
          - name: bombay
            weight: 1

 

x-server: &server
  image: bun-server:latest
  pull_policy: never

services:
  server-delhi-alpha:
    <<: *server
    build: ./server # build once
    environment:
      LOCATION: DEL
      INSTANCE_ID: alpha

  server-delhi-bravo:
    <<: *server
    environment:
      LOCATION: DEL
      INSTANCE_ID: bravo

  server-bombay-alpha:
    <<: *server
    environment:
      LOCATION: BOM
      INSTANCE_ID: alpha

  server-bombay-bravo:
    <<: *server
    environment:
      LOCATION: BOM
      INSTANCE_ID: bravo

  traefik:
    image: traefik
    ports:
      - 8080:80
    volumes:
      - ./config/traefik:/etc/traefik
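With the stack above running (docker compose up -d), the routers can be exercised by setting the header; which replica answers depends on the round robin state:

```
# Route to a Delhi replica (the delhi router matches the header)
curl -H 'X-FOR-LOCATION: DEL' http://localhost:8080/

# Route to a Bombay replica
curl -H 'X-FOR-LOCATION: BOM' http://localhost:8080/

# No header: the catch-all "all" router splits traffic 1:1
# between the delhi and bombay services
curl http://localhost:8080/
```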

 

 

nginx multiple upstream

https://dev.to/mazenr/how-to-implement-a-load-balancer-using-nginx-docker-4g73

Implement Nginx Load Balancer

 

Let's consider two Python servers, each deployed in its own container. We will then put Nginx in front of them as a load balancer.

Here is our file structure:



nginx-load-balancer
    |
    |---app1
    |     |-- app1.py
    |     |-- Dockerfile
    |     |-- requirements.txt
    |
    |---app2
    |     |-- app2.py
    |     |-- Dockerfile
    |     |-- requirements.txt
    |
    |---nginx
    |     |-- nginx.conf
    |     |-- Dockerfile
    |
    |------ docker-compose.yml
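The tutorial's app1.py/app2.py are not reproduced here. As a hypothetical stand-in, a minimal Python backend that identifies itself could look like the following; the instance name is an assumption so that we can see which container answered once Nginx starts balancing:

```python
# Hypothetical minimal backend (stand-in for app1.py / app2.py).
# Each instance replies with its own name, taken from the APP_NAME
# environment variable, so the balancer's distribution is visible.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_NAME = os.environ.get("APP_NAME", "app1")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"Hello from {APP_NAME}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

def serve(port=5000):
    # Call serve() to run the backend inside its container.
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Each app directory's Dockerfile would then only need to install dependencies and run this script.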


 

https://medium.com/@vinodkrane/microservices-scaling-and-load-balancing-using-docker-compose-78bf8dc04da9

https://stackoverflow.com/questions/50203408/docker-compose-scale-x-nginx-conf-configuration

 

Nginx

Dynamic upstreams are possible in plain Nginx (without Plus), but only with tricks and limitations.

  1. You give up the upstream directive and use plain proxy_pass.

    This still gives round-robin load balancing and failover, but none of the directive's extra features such as weights, failure modes, timeouts, etc.

  2. The upstream hostname must be passed to proxy_pass via a variable, and you must provide a resolver.

    This forces Nginx to re-resolve the hostname (against the Docker network's DNS).

  3. You lose the location/proxy_pass behaviour related to the trailing slash.

    When reverse-proxying to the bare / as in the question, this does not matter; otherwise you have to rewrite the path manually (see the references below).
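For point 3, a sketch of the manual rewrite (the /api/ prefix is illustrative): with a variable in proxy_pass you cannot append a URI part, so the prefix has to be stripped with a rewrite before proxying.

```nginx
location /api/ {
  resolver 127.0.0.11 valid=5s;
  set $upstream app;
  # proxy_pass with a variable ignores any URI part,
  # so strip the /api/ prefix manually before proxying.
  rewrite ^/api/(.*)$ /$1 break;
  proxy_pass http://$upstream:80;
}
```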

Let's see how it works.

docker-compose.yml

version: '2.2'
services:
  reverse-proxy:
    image: nginx:1.15-alpine
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 8080:8080
  app:
    # A container that exposes an API to show its IP address
    image: containous/whoami
    scale: 4

nginx.conf

worker_processes  1;

events {
  worker_connections  1024;
}

http {
  access_log /dev/stdout;
  error_log /dev/stderr;

  server {
    listen 8080;
    server_name localhost;

    resolver 127.0.0.11 valid=5s;
    set $upstream app;

    location / {
      proxy_pass http://$upstream:80;
    }
  }
}

Then...

docker-compose up -d
seq 10 | xargs -I -- curl -s localhost:8080 | grep "IP: 172"

...produces something like the following, which indicates that the requests are distributed across the 4 app containers:

IP: 172.30.0.2
IP: 172.30.0.2
IP: 172.30.0.3
IP: 172.30.0.3
IP: 172.30.0.6
IP: 172.30.0.5
IP: 172.30.0.3
IP: 172.30.0.6
IP: 172.30.0.5
IP: 172.30.0.5

References:

  1. Nginx with dynamic upstreams
  2. Using Containers to Learn Nginx Reverse Proxy
  3. Dynamic Nginx configuration for Docker with Python

Traefik

Traefik relies directly on the Docker API and may be a simpler and more configurable option. Let's see it in action.

docker-compose.yml

version: '2.2'
services:
  reverse-proxy:
    image: traefik  
    # Enables the web UI and tells Traefik to listen to docker
    command: --api --docker  
    ports:
      - 8080:80      
      - 8081:8080  # Traefik's web UI, enabled by --api
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock  
  app:
    image: containous/whoami
    scale: 4
    labels:
      - "traefik.frontend.rule=Host:localhost"

Then...

docker-compose up -d
seq 10 | xargs -I -- curl -s localhost:8080 | grep "IP: 172"

...also produces output like the following, indicating that the requests are distributed across the 4 app containers:

IP: 172.31.0.2
IP: 172.31.0.5
IP: 172.31.0.6
IP: 172.31.0.4
IP: 172.31.0.2
IP: 172.31.0.5
IP: 172.31.0.6
IP: 172.31.0.4
IP: 172.31.0.2
IP: 172.31.0.5

In the Traefik UI (http://localhost:8081/dashboard/ in this example) you can see that it has recognised the 4 app containers:

Backends

References:

  1. The Traefik Quickstart (Using Docker)

 

https://github.com/mazen-r/articles/blob/main/nginx-docker-load-balancer/nginx/nginx.conf

upstream loadbalancer {
    server 172.17.0.1:5001 weight=5;
    server 172.17.0.1:5002 weight=5;
}

server {
    location / {
        proxy_pass http://loadbalancer;
    }
}
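The upstream block above uses the default round-robin strategy (the equal weights make it plain round robin). Open-source Nginx supports other strategies inside the same block; a sketch with the same illustrative addresses:

```nginx
upstream loadbalancer {
    # least_conn: send each request to the server with the
    # fewest active connections.
    least_conn;
    # ip_hash (alternative): pin each client IP to one server,
    # giving sticky sessions.
    # ip_hash;
    server 172.17.0.1:5001 weight=5;
    server 172.17.0.1:5002 weight=5;
}
```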

 

version: '3'
services:
  app1:
    build: ./app1
    ports:
    - "5001:5000"
  app2:
    build: ./app2
    ports:
    - "5002:5000"
  nginx:
    build: ./nginx 
    ports:
    - "8080:80"
    depends_on:
      - app1
      - app2

 

docker compose scale for load balancer

 

traefik example

https://doziestar.medium.com/effortless-scaling-and-deployment-a-comprehensive-guide-for-solo-developers-and-time-savers-88bfa4118940

 

Docker Compose — scale Command:

The docker-compose up --scale option allows you to scale your Docker Compose services by specifying the number of replicas (instances) of each service. This makes it easy to scale services up or down on demand.

docker-compose up --scale SERVICE=NUM_REPLICAS

To scale both services, we can simply run:

docker-compose up --build --scale server=3 --scale computations=3

If we run this, we can see that we now have 3 instances of computations and 3 instances of the server, i.e. 3 instances of each service that we scaled.

Load Balancing Strategy

By default, Traefik uses the Round Robin load balancing strategy, but you can change this by adding the appropriate label to your service. For example, to use the Weighted Round Robin strategy, you would add:

labels:
- "traefik.http.services.web.loadbalancer.method=wrr"
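As an illustration of the weighted round-robin idea only (this is not Traefik's implementation), each backend can be picked in proportion to its weight. A minimal Python sketch with hypothetical backend names:

```python
from itertools import cycle

def weighted_round_robin(backends):
    """Yield backends in proportion to their weights.

    backends: list of (name, weight) pairs. A naive expansion:
    each backend appears `weight` times in the rotation.
    """
    expanded = [name for name, weight in backends for _ in range(weight)]
    return cycle(expanded)

# Hypothetical backends: web-1 gets twice the traffic of web-2.
rr = weighted_round_robin([("web-1", 2), ("web-2", 1)])
picks = [next(rr) for _ in range(6)]
print(picks)  # ['web-1', 'web-1', 'web-2', 'web-1', 'web-1', 'web-2']
```

Real implementations interleave picks more smoothly, but the proportion per cycle is the same.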

But even without changing anything, Traefik will automatically discover the new instances of your computations and server services and load balance incoming requests across them.

 

nginx example

https://milaan.hashnode.dev/scaling-docker-containers-with-nginx-a-guide-to-reverse-proxy-and-load-balancing

Now that we have everything we need, let's run the following command to check that everything works. If it does, Nginx should act as a reverse proxy and load balancer.

 
 
docker compose up --scale api=3

docker compose up --scale api=3 starts the set of containers defined in the Docker Compose file and scales the api service to run three instances.

The hostname displayed is actually the container ID. Since we have 3 containers, let's check whether the load balancer is working properly; for context, this demo uses the round-robin algorithm.

Here we can see that each request to the API server is answered by a different container.

Voilà, that's how you scale Docker containers with Nginx. Big thanks to trulymittal <3

 

code

https://github.com/ofstudio/docker-compose-scale-example/tree/master

version: '2.3'
# Note: `services.app.scale` available only version 2.x
# In version 3.x scale option will produce error:
# "Unsupported config option for services.app: 'scale'"
# You can use `docker-compose up --scale app=3`
# instead of `scale` field for version 3.x compose files

services:

  app:
    build: .
    image: "scale-app-example"
    scale: 3

  nginx:
    image: nginx:stable-alpine
    ports:
      - 8000:80
    depends_on:
      - app
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./var/log/nginx:/var/log/nginx

 

server {
  listen  80 default_server;
  location / {
    proxy_pass http://app:3000;
  }
}

 

A tutorial covering both (Nginx load balancing plus Django/uWSGI scaling)

https://github.com/twtrubiks/docker-django-nginx-uwsgi-postgres-load-balance-tutorial/blob/master/nginx/my_nginx.conf

 

posted @ 2025-01-04 19:38  lightsong