Service discovery / registration / distributed configuration center / load balancing

Service Discovery

1. register & pull config info
2. register center itself should be distributed
3. health monitor
zookeeper(java)
consul & etcd

consul

 install

docker run -d --name consul -p 8500:8500 -p 8300:8300 -p 8301:8301 -p 8302:8302 -p 8600:8600/udp consul agent -dev -client=0.0.0.0

docker container update --restart=always <container name>

Open 127.0.0.1:8500 in a browser.

Consul also provides DNS, which can be tested with the dig command-line tool; Consul's default DNS port is 8600.

dig @192.168.1.103 -p 8600 consul.service.consul SRV

DNS

On Windows, domain-to-IP mappings can be configured in C:\Windows\System32\drivers\etc\hosts

For microservices: a service registers itself with the registry, which records its IP address; the gateway then queries the registry for that IP.

 

register

[PUT] 192.168.2.112:8500/v1/agent/service/register 

headers:  Content-Type application/json 

JSON:

{
    "Name":"mxshop-web",
    "ID":"mxshop-web",
    "Tags":["mxshop","bobby","imooc","web"],
    "Address":"127.0.0.1",
    "Port":50051
}

deregister

  [PUT] 192.168.2.112:8500/v1/agent/service/deregister/mxshop-web 
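The two agent endpoints above can also be called from Go with the standard library. Below is a minimal sketch: `ServiceRegistration` mirrors the JSON body from the example, and the agent address is the one used throughout these notes. The request is only built and printed here; actually sending it (the commented-out `Do` call) requires a running Consul agent.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// ServiceRegistration mirrors the JSON body accepted by
// Consul's /v1/agent/service/register endpoint.
type ServiceRegistration struct {
	Name    string   `json:"Name"`
	ID      string   `json:"ID"`
	Tags    []string `json:"Tags"`
	Address string   `json:"Address"`
	Port    int      `json:"Port"`
}

// buildRegisterRequest prepares the PUT request shown above.
func buildRegisterRequest(agent string, reg ServiceRegistration) (*http.Request, error) {
	body, err := json.Marshal(reg)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPut,
		fmt.Sprintf("http://%s/v1/agent/service/register", agent),
		bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildRegisterRequest("192.168.2.112:8500", ServiceRegistration{
		Name:    "mxshop-web",
		ID:      "mxshop-web",
		Tags:    []string{"mxshop", "web"},
		Address: "127.0.0.1",
		Port:    50051,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL)
	// To actually register, send it to a running agent:
	// resp, err := http.DefaultClient.Do(req)
}
```

Deregistration is the same pattern with no body: a PUT to /v1/agent/service/deregister/<service-id>.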

go - consul

HTTP health check

Start Consul on the VM. The health check hits "http://192.168.2.178:8021/u/v1/base/health", and the service must respond with http.StatusOK.

(Do not use 127.0.0.1 here; it must be a LAN IP of this machine that the Consul agent can reach — check with cmd -> ipconfig.)

package main

import "github.com/hashicorp/consul/api"

func Register(address string, port int, name string, tags []string, id string) error {
    cfg := api.DefaultConfig()
    cfg.Address = "192.168.2.112:8500"
    client, err := api.NewClient(cfg)
    if err != nil {
        return err
    }
    // health check instance
    check := &api.AgentServiceCheck{
        HTTP:                           "http://192.168.2.178:8021/u/v1/base/health",
        Timeout:                        "5s",
        Interval:                       "5s",
        DeregisterCriticalServiceAfter: "10s",
    }
    // registration instance
    regis := &api.AgentServiceRegistration{
        Name:    name,
        ID:      id,
        Port:    port,
        Tags:    tags,
        Address: address,
        Check:   check,
    }
    // return the error so the caller decides how to handle failure
    return client.Agent().ServiceRegister(regis)
}

func main() {
    _ = Register("192.168.2.112", 8021, "user-web", []string{"mxshop", "bobby"}, "user-web")

}

List all services

func Services() {
    cfg := api.DefaultConfig()
    cfg.Address = "192.168.2.112:8500"
    client, err := api.NewClient(cfg)
    if err != nil {
        panic(err)
    }
    data, err := client.Agent().Services()
    if err != nil {
        panic(err)
    }
    for key := range data {
        fmt.Println(key)
    }
}

filter service name

data, err := client.Agent().ServicesWithFilter(`Service == "user-web"`)

grpc health check 

import (
    "google.golang.org/grpc/health"
    "google.golang.org/grpc/health/grpc_health_v1"
)

grpc_health_v1.RegisterHealthServer(server, health.NewServer())

Plus the same Register() as above; the only difference is the gRPC-style check:

    check := &api.AgentServiceCheck{
        GRPC:                           fmt.Sprintf("%s:%d", grpcHost, port),
        Timeout:                        "5s",
        Interval:                       "5s",
        DeregisterCriticalServiceAfter: "10s",
    }

Load Balancer

get a free port 

func GetFreePort() (int, error) {
    addr, err := net.ResolveTCPAddr("tcp", "localhost:0")
    if err != nil {
        return 0, err
    }
    l, err := net.ListenTCP("tcp", addr)
    if err != nil {
        return 0, err
    }
    defer l.Close()
    return l.Addr().(*net.TCPAddr).Port, nil
}

In Go, net.ResolveTCPAddr parses a TCP address and returns a *net.TCPAddr. The first argument is the network type, here "tcp"; the second is the address string.

In this particular case, "localhost:0" is a local address: localhost normally resolves to the loopback address 127.0.0.1, and port 0 is a special value that tells the operating system to pick a free port automatically.

When net.ListenTCP is called with the address returned by net.ResolveTCPAddr, it asks the OS to listen on a free TCP port. The OS finds and assigns a port number that is currently unused, which the application can then use to accept incoming TCP connections.

l.Addr().(*net.TCPAddr).Port takes the listener's address, asserts it to *net.TCPAddr (safe, since l is a TCP listener), and reads the port that was assigned.

In short, this function lets the OS choose an unoccupied port and returns its number so the application can listen on it.

Refactor the configuration to use a dynamic port:
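A minimal sketch of that refactor is below. GetFreePort is repeated so the snippet is self-contained; ServerConfig and the debug flag are hypothetical stand-ins for the real config struct — in debug mode keep the fixed port from the config file, otherwise take a dynamic one and register that port with Consul.

```go
package main

import (
	"fmt"
	"net"
)

// GetFreePort asks the OS for a currently unused TCP port.
func GetFreePort() (int, error) {
	addr, err := net.ResolveTCPAddr("tcp", "localhost:0")
	if err != nil {
		return 0, err
	}
	l, err := net.ListenTCP("tcp", addr)
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

// ServerConfig is a hypothetical config struct; only Port matters here.
type ServerConfig struct {
	Port int
}

func main() {
	debug := false // in debug mode, keep the fixed port for convenience
	cfg := ServerConfig{Port: 8021}
	if !debug {
		if port, err := GetFreePort(); err == nil {
			cfg.Port = port // this is the port to register with Consul
		}
	}
	fmt.Println(cfg.Port > 0)
}
```

Because the listener in GetFreePort is closed before returning, there is a small race window before the server binds the port; for dev-time multi-instance testing this is acceptable.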

 

There are three implementation approaches:

1. centralized LB

2. in-process LB:

a background goroutine:

  pulls the service list regularly

  keeps a long-lived connection to each service

 

3. independent process LB

common LB strategies

1. Round Robin method
Round robin is easy to implement: requests are allocated to the backend servers in sequence, treating every server equally regardless of its actual connection count or current load.
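A minimal round-robin picker can be sketched like this (server names are placeholders); a mutex keeps the rotation safe under concurrent requests:

```go
package main

import (
	"fmt"
	"sync"
)

// RoundRobin hands out servers in a fixed rotation, ignoring load.
type RoundRobin struct {
	mu      sync.Mutex
	servers []string
	next    int
}

// Pick returns the next server in sequence, wrapping around at the end.
func (r *RoundRobin) Pick() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	s := r.servers[r.next]
	r.next = (r.next + 1) % len(r.servers)
	return s
}

func main() {
	lb := &RoundRobin{servers: []string{"a", "b", "c"}}
	for i := 0; i < 4; i++ {
		fmt.Print(lb.Pick(), " ")
	}
	// prints: a b c a
}
```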


2. Random method
A server is selected at random from the backend list. By probability theory, as the number of calls increases the actual traffic distribution approaches an even spread across the backend servers — the same effect as round robin.


3. Source address hashing method
Source address hashing computes a hash of the requesting client's IP address and takes it modulo the size of the server list; the result is the index of the backend server to use. As long as the server list stays unchanged, requests from the same client IP are always mapped to the same backend server.

adv: each server can maintain its own local state (e.g. its own DB) — the same users always land on the same DB

disadv: changing the number of servers changes the mapping, so in practice the server list cannot change
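The hash-and-mod step can be sketched in a few lines; FNV is used here only as an example hash, and the IPs and server names are placeholders:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickByIP maps a client IP to a fixed server index: hash(ip) mod len(servers).
// The same client always lands on the same server while the list is unchanged.
func pickByIP(ip string, servers []string) string {
	h := fnv.New32a()
	h.Write([]byte(ip))
	return servers[h.Sum32()%uint32(len(servers))]
}

func main() {
	servers := []string{"s1", "s2", "s3"}
	// Repeated calls with the same IP pick the same server.
	fmt.Println(pickByIP("10.0.0.7", servers) == pickByIP("10.0.0.7", servers))
	// prints: true
}
```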


4. Weighted Round Robin method
Backend servers differ in hardware configuration and current load, so their capacity to handle pressure differs too. Machines with high specs and low load are given higher weights so they handle more requests; machines with low specs and high load get lower weights to reduce their load. Weighted round robin handles this well, distributing requests to the backend in order and in proportion to weight.


5. Weighted Random method

Weighted random is similar to weighted round robin: weights are assigned according to each backend server's configuration and load. The difference is that servers are selected at random in proportion to weight, not in order.
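A simple weighted-random pick: draw a uniform number in [0, totalWeight) and walk the weight list until it falls into a server's slot (weights are assumed positive; names are placeholders):

```go
package main

import (
	"fmt"
	"math/rand"
)

// weightedPick selects a server with probability proportional to its weight.
// Weights must be positive.
func weightedPick(servers []string, weights []int) string {
	total := 0
	for _, w := range weights {
		total += w
	}
	n := rand.Intn(total) // uniform in [0, total)
	for i, w := range weights {
		if n < w {
			return servers[i]
		}
		n -= w
	}
	return servers[len(servers)-1]
}

func main() {
	servers := []string{"big", "small"}
	weights := []int{3, 1} // "big" should receive ~75% of picks
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[weightedPick(servers, weights)]++
	}
	fmt.Println(counts["big"] > counts["small"])
}
```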


6. Least Connections method
The methods above try to balance the number of requests each backend receives, which is reasonable: work is spread evenly across servers and utilization is maximized. In reality, though, a balanced request count does not imply a balanced load, because servers differ in configuration and process requests at different speeds. The least-connections method is more flexible and adaptive: based on each backend's current connection state, it dynamically selects the server with the smallest backlog of connections to handle the request, improving backend utilization and distributing load sensibly.
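A sketch of the idea: track in-flight connections per server and pick the least loaded one (server names are placeholders; a real balancer would also handle server removal and health):

```go
package main

import (
	"fmt"
	"sync"
)

// LeastConn tracks active connections per server and picks the least loaded.
type LeastConn struct {
	mu     sync.Mutex
	active map[string]int
}

func NewLeastConn(servers []string) *LeastConn {
	active := make(map[string]int, len(servers))
	for _, s := range servers {
		active[s] = 0
	}
	return &LeastConn{active: active}
}

// Acquire picks the server with the fewest in-flight connections and
// increments its count; call Release when the request finishes.
func (l *LeastConn) Acquire() string {
	l.mu.Lock()
	defer l.mu.Unlock()
	best, bestN := "", -1
	for s, n := range l.active {
		if bestN == -1 || n < bestN {
			best, bestN = s, n
		}
	}
	l.active[best]++
	return best
}

func (l *LeastConn) Release(server string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.active[server]--
}

func main() {
	lb := NewLeastConn([]string{"a", "b"})
	s1 := lb.Acquire() // both idle: either may be chosen
	s2 := lb.Acquire() // must be the other one
	fmt.Println(s1 != s2)
	lb.Release(s1)
	fmt.Println(lb.Acquire() == s1) // s1 is now the least loaded
}
```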

GRPC - LB

package main

import (
    "context"
    "fmt"
    "log"
    "google.golang.org/grpc"
    _ "github.com/mbobakov/grpc-consul-resolver" // important: blank import for side effects
    // All target URLs like 'consul://.../...' will be resolved by this resolver.
    // The package's init() runs resolver.Register(&builder{}),
    // so importing it is enough to register the resolver.
    "GoProjects/grpc_lb_test/proto"
)

func main() {
    conn, err := grpc.Dial(
        "consul://192.168.2.112:8500/mxshop_srvs?wait=14s&tag=mxshop",  // tag is necessary!!
        grpc.WithInsecure(),
        grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy": "round_robin"}`),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    userSrvClient := proto.NewUserClient(conn)
    rsp, err := userSrvClient.GetUserList(context.Background(), &proto.PageInfo{
        Pn:    1,
        PSize: 2,
    })
    if err != nil {
        panic(err)
    }
    for index, data := range rsp.Data {
        fmt.Println(index, data)
    }

}

1. How do you test this? Start multiple service instances (with different IDs) under the same name!

--> UUID

import "github.com/satori/go.uuid"

Replace the ID with fmt.Sprintf("%s", uuid.NewV4())

Running from the IDE only allows one instance at a time,

so use a terminal command instead: go run <dir>/main.go (cd to the directory matching the workspace first, otherwise errors are likely).

This way multiple instances can be started.

2. How to quit the process gracefully

Ctrl+C kills the process, but the service is not deregistered from Consul.

Optimization: Ctrl+C -> deregister first.

SIGINT  2  Term  sent when the user types the INTR character (Ctrl+C)
SIGTERM 15 Term  terminates the program (can be caught, blocked, or ignored)

    quit := make(chan os.Signal, 1) // signal.Notify requires a buffered channel
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    <-quit

3. Test

The client sends 10 userlist requests to the service; each of the two instances handles 5, showing that round robin splits the traffic.

Configuration center

pain point: 

 --->

nacos

install

docker run --name nacos-standalone -e MODE=standalone -e JVM_XMS=512m -e JVM_XMX=512m -e JVM_XMN=256m -p 8848:8848 -d nacos/nacos-server:latest

Open: http://192.168.1.103:8848/nacos/index.html
Username/password: nacos/nacos

Namespace: isolates microservices

Group: differentiates prod/test/local environments

DataId: one config file

Requirements for config center: 1. get the config 2. listen for config changes and pull the latest

 go - nacos - test 

package main

import (
    "fmt"
    "github.com/nacos-group/nacos-sdk-go/clients"
    "github.com/nacos-group/nacos-sdk-go/common/constant"
    "github.com/nacos-group/nacos-sdk-go/vo"
    "time"
)

func main() {
    //create ServerConfig
    sc := []constant.ServerConfig{
        *constant.NewServerConfig("192.168.2.112", 8848, constant.WithContextPath("/nacos")),
    }

    //create ClientConfig
    cc := *constant.NewClientConfig(
        constant.WithNamespaceId("09d9e80b-6932-4a58-ad14-4fec10bcef59"),
        constant.WithTimeoutMs(5000),
        constant.WithNotLoadCacheAtStart(true),
        constant.WithLogDir("tmp/nacos/log"),  // important !!! 
        constant.WithCacheDir("tmp/nacos/cache"),
        constant.WithLogLevel("debug"),
    )

    // create config client
    client, err := clients.NewConfigClient(
        vo.NacosClientParam{
            ClientConfig:  &cc,
            ServerConfigs: sc,
        },
    )
    if err != nil {
        panic(err.Error())
    }
    content, err := client.GetConfig(vo.ConfigParam{
        DataId: "usr-web.yaml",
        Group:  "dev",
    })
    if err != nil { // check the error before using content
        panic(err.Error())
    }
    fmt.Println("GetConfig, config: " + content)
    err = client.ListenConfig(vo.ConfigParam{
        DataId: "usr-web.yaml",
        Group:  "dev",
        OnChange: func(namespace, group, dataId, data string) {
            fmt.Println("group:" + group + ", dataId:" + dataId + ", data:" + data)
        },
    })
    if err != nil {
        panic(err.Error())
    }
    time.Sleep(300 * time.Second)
}

 

posted @ 2023-11-06 00:40  PEAR2020