
Writing the operator-demo


Table of Contents
  • Writing the operator-demo
      • Components and versions
      • Development setup
      • Preliminary configuration
      • Creating the project
      • Modifying the code
      • Modifying the YAML file
      • Running


Components and versions

  • operator-sdk v1.7.2
  • go 1.15 linux/amd64
  • git 1.8.3.1
  • k8s 1.17.5
    yum install -y kubelet-1.17.5 kubeadm-1.17.5 kubectl-1.17.5 --disableexcludes=kubernetes
  • docker 20.10.5

Development setup

  • Tool: Visual Studio Code (installed on Windows; the code and runtime environment both live on a Linux machine)

  • Mode: remote development

    The configuration details are as follows:
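
For VS Code's Remote-SSH workflow, a host entry in ~/.ssh/config on the Windows side is usually the only editor-side setup needed. A sketch (the host alias, address, and user below are placeholders, not from the original setup):

```
# ~/.ssh/config on the Windows machine (example values)
Host dev-linux
    HostName 192.168.1.100   # address of the Linux dev machine
    User root
```

The Remote-SSH extension then offers dev-linux as a connection target, and the project folder is opened directly on the Linux machine.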

Preliminary configuration

All of the following installs are performed on the Linux machine.

  • Install Git
yum install git
  • Install Docker
yum install docker-ce
  • Install Go
curl -LO https://studygolang.com/dl/golang/go1.15.linux-amd64.tar.gz
rm -rf /usr/local/go  # only needed if Go was installed previously
tar -C /usr/local/ -xvf go1.15.linux-amd64.tar.gz
  • Configure the Go environment variables
# Add the following to /etc/profile
export GOROOT=/usr/local/go
export GOPATH=/data/gopath  # pick any path you like
export PATH=$GOROOT/bin:$PATH
source /etc/profile

# Point the module proxy at a mirror inside China
# Temporary (current shell only)
export GO111MODULE=on
export GOPROXY=https://goproxy.cn

# Permanent
echo "export GO111MODULE=on" >> /etc/profile
echo "export GOPROXY=https://goproxy.cn" >> /etc/profile
source /etc/profile

# Alternatively, use go env
# Recommended for Go 1.13 and later; switches to the mirror proxy
go env -w GO111MODULE=on  # usually already the default
go env -w GOPROXY=https://goproxy.cn,direct
  • GO111MODULE=off: no module support; go looks for packages in GOPATH and the vendor folder.
  • GO111MODULE=on: module support; go ignores GOPATH and the vendor folder and resolves dependencies from go.mod only.
  • GO111MODULE=auto: module support is enabled when the project is outside $GOPATH/src and its root contains a go.mod file.
  • Install operator-sdk (download the prebuilt binary directly)
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/v1.7.2/operator-sdk_linux_amd64
chmod +x operator-sdk_linux_amd64
mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk

Creating the project

  1. init

    Create a project directory and initialize it with operator-sdk:

mkdir redis-operator
cd redis-operator
operator-sdk init --domain=example.com --repo=paas.cvicse.com/redis/app
  2. create api

    Create the API.

    (Note: with an older version such as v1.1.0 you must also pass --make=false or the command fails. With the version used here, the command below is sufficient.)

operator-sdk create api --group redis --version v1 --kind Redis --resource=true --controller=true
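
After these two commands, the scaffold contains (among other files) the pieces edited in the rest of this post. A partial sketch; the exact layout can vary by operator-sdk version:

```
redis-operator/
├── Dockerfile
├── Makefile
├── main.go
├── api/
│   └── v1/
│       └── redis_types.go        # Spec/Status definitions (modified below)
├── controllers/
│   └── redis_controller.go       # Reconcile logic (modified below)
└── config/
    ├── default/                  # kustomize patches, incl. manager_auth_proxy_patch.yaml
    └── samples/                  # sample custom resource: redis_v1_redis.yaml
```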

Modifying the code

  1. Modify redis_types.go

    First import two packages, appsv1 and corev1 (the aliases can be anything you like):

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

Modify two parts: the spec and the status (the status reuses the StatefulSet status directly instead of defining its own fields):

type RedisSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Foo is an example field of Redis. Edit redis_types.go to remove/update
	// Foo string `json:"foo,omitempty"`

	// Custom fields
	Replicas  *int32                      `json:"replicas"`            // replica count
	Image     string                      `json:"image"`               // container image
	Resources corev1.ResourceRequirements `json:"resources,omitempty"` // resource limits/requests
	Envs      []corev1.EnvVar             `json:"envs,omitempty"`      // environment variables
	Ports     []corev1.ServicePort        `json:"ports,omitempty"`     // service ports
	Type      corev1.ServiceType          `json:"type"`                // Service type
}

// RedisStatus defines the observed state of Redis
type RedisStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	// Reuse the StatefulSet status directly
	appsv1.StatefulSetStatus `json:",inline"`
}

After the changes, run the following command to regenerate zz_generated.deepcopy.go:

make generate
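
Besides the //+kubebuilder:rbac markers already present in the generated controller, kubebuilder validation markers can constrain spec fields in the CRD schema that `make manifests` produces. A minimal, self-contained sketch (only a subset of the fields above; the marker values are illustrative, not from the original project):

```go
package main

import "fmt"

// RedisSpecSketch mirrors a subset of RedisSpec. The +kubebuilder
// markers are plain comments at compile time; `make manifests` reads
// them when generating the CRD's OpenAPI validation schema.
type RedisSpecSketch struct {
	// +kubebuilder:validation:Minimum=1
	Replicas *int32 `json:"replicas"` // replica count, must be >= 1

	// +kubebuilder:validation:MinLength=1
	Image string `json:"image"` // container image, must be non-empty
}

func main() {
	replicas := int32(1)
	spec := RedisSpecSketch{Replicas: &replicas, Image: "redis:6.2.6"}
	fmt.Printf("replicas=%d image=%s\n", *spec.Replicas, spec.Image)
}
```

With such markers in place, the API server rejects a Redis object whose replicas is 0 or whose image is empty, before the controller ever sees it.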
  2. In the project root, create a resource directory, and inside it two subdirectories: statefulset and service

Create statefulset.go in the statefulset directory with the following content:

package statefulset

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	appv1 "paas.cvicse.com/redis/app/api/v1"
)

func New(redis *appv1.Redis) *appsv1.StatefulSet {
	labels := map[string]string{"redis.example.com/v1": redis.Name}
	selector := &metav1.LabelSelector{MatchLabels: labels}
	return &appsv1.StatefulSet{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "apps/v1",
			Kind:       "StatefulSet",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      redis.Name,
			Namespace: redis.Namespace,
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(redis, schema.GroupVersionKind{
					Group:   appv1.GroupVersion.Group,
					Version: appv1.GroupVersion.Version,
					Kind:    "Redis",
				}),
			},
		},
		Spec: appsv1.StatefulSetSpec{
			Replicas: redis.Spec.Replicas,
			Selector: selector,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: labels,
				},
				Spec: corev1.PodSpec{
					Containers: newContainers(redis),
				},
			},
		},
	}
}

func newContainers(redis *appv1.Redis) []corev1.Container {
	var containerPorts []corev1.ContainerPort
	for _, servicePort := range redis.Spec.Ports {
		var cport corev1.ContainerPort
		cport.ContainerPort = servicePort.TargetPort.IntVal
		containerPorts = append(containerPorts, cport)
	}
	return []corev1.Container{
		{
			Name:            redis.Name,
			Image:           redis.Spec.Image,
			Ports:           containerPorts,
			Env:             redis.Spec.Envs,
			Resources:       redis.Spec.Resources,
			ImagePullPolicy: corev1.PullIfNotPresent,
		},
	}
}
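
newContainers maps each ServicePort's TargetPort onto a ContainerPort on the pod template. With the Kubernetes types stripped away, the mapping is just the following (plain int32 stand-ins for illustration):

```go
package main

import "fmt"

// servicePort / containerPort are plain stand-ins for the corev1 types
// used in newContainers above.
type servicePort struct {
	Port       int32 // port exposed by the Service
	TargetPort int32 // port the container actually listens on
}

type containerPort struct {
	ContainerPort int32
}

// toContainerPorts mirrors the loop in newContainers: every Service
// target port becomes a container port on the pod template.
func toContainerPorts(sps []servicePort) []containerPort {
	var cps []containerPort
	for _, sp := range sps {
		cps = append(cps, containerPort{ContainerPort: sp.TargetPort})
	}
	return cps
}

func main() {
	ports := []servicePort{{Port: 6379, TargetPort: 6379}}
	fmt.Println(toContainerPorts(ports)) // [{6379}]
}
```

This is why the sample YAML later sets both port and targetPort: the Service listens on port, while targetPort drives the container port exposed by the StatefulSet's pods.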

Create service.go in the service directory with the following content:

package service

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	appv1 "paas.cvicse.com/redis/app/api/v1"
)

func New(redis *appv1.Redis) *corev1.Service {
	return &corev1.Service{
		TypeMeta: metav1.TypeMeta{
			Kind:       "Service",
			APIVersion: "v1",
		},
		ObjectMeta: metav1.ObjectMeta{
			Name:      redis.Name,
			Namespace: redis.Namespace,
			OwnerReferences: []metav1.OwnerReference{
				*metav1.NewControllerRef(redis, schema.GroupVersionKind{
					Group:   appv1.GroupVersion.Group,
					Version: appv1.GroupVersion.Version,
					Kind:    "Redis",
				}),
			},
		},
		Spec: corev1.ServiceSpec{
			Ports: redis.Spec.Ports,
			Selector: map[string]string{
				"redis.example.com/v1": redis.Name,
			},
			Type: redis.Spec.Type,
		},
	}
}

  3. Modify redis_controller.go; the main change is the Reconcile method:
/*
Copyright 2022.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package controllers

import (
	"context"
	"encoding/json"
	"reflect"

	"k8s.io/apimachinery/pkg/api/errors"

	"github.com/go-logr/logr"
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"paas.cvicse.com/redis/app/resource/service"
	"paas.cvicse.com/redis/app/resource/statefulset"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	redisv1 "paas.cvicse.com/redis/app/api/v1"
)

// RedisReconciler reconciles a Redis object
type RedisReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

//+kubebuilder:rbac:groups=redis.example.com,resources=redis,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=redis.example.com,resources=redis/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=redis.example.com,resources=redis/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the Redis object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.8.3/pkg/reconcile
func (r *RedisReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = r.Log.WithValues("redis", req.NamespacedName)

	// Step 1: fetch the Redis resource
	instance := &redisv1.Redis{}
	if err := r.Client.Get(ctx, req.NamespacedName, instance); err != nil {
		if errors.IsNotFound(err) {
			// r.Log.Info("Redis resource not found")
			return ctrl.Result{}, nil
		}
		// Any other error: return it so the request is requeued
		return ctrl.Result{}, err
	}

	// Step 2: if the Redis resource is being deleted, return immediately
	if instance.DeletionTimestamp != nil {
		return ctrl.Result{}, nil
	}
	// Step 3: fetch the matching StatefulSet
	oldStatefulset := &appsv1.StatefulSet{}
	// If the StatefulSet does not exist, create it
	if err := r.Client.Get(ctx, req.NamespacedName, oldStatefulset); err != nil {
		if errors.IsNotFound(err) {
			// r.Log.Info("StatefulSet for this Redis not found; creating it")

			// Create the StatefulSet
			if err := r.Client.Create(ctx, statefulset.New(instance)); err != nil {
				return ctrl.Result{}, err
			}

			// Create the Service
			if err := r.Client.Create(ctx, service.New(instance)); err != nil {
				return ctrl.Result{}, err
			}

			// Save the spec into the resource's annotations
			data, _ := json.Marshal(instance.Spec)
			if instance.Annotations != nil {
				instance.Annotations["spec"] = string(data)
			} else {
				instance.Annotations = map[string]string{"spec": string(data)}
			}
			if err := r.Client.Update(ctx, instance); err != nil {
				return ctrl.Result{}, err
			}

		} else {
			return ctrl.Result{}, err
		}
	} else {
		// The StatefulSet exists: update it if the spec changed
		oldSpec := redisv1.RedisSpec{}
		if err := json.Unmarshal([]byte(instance.Annotations["spec"]), &oldSpec); err != nil {
			return ctrl.Result{}, err
		}

		// Compare the current spec with the saved one; update on any difference
		if !reflect.DeepEqual(oldSpec, instance.Spec) {
			// Update the StatefulSet, replacing only the Spec
			newStatefulSet := statefulset.New(instance)
			oldStatefulset.Spec = newStatefulSet.Spec
			if err := r.Client.Update(ctx, oldStatefulset); err != nil {
				return ctrl.Result{}, err
			}

			// Update the Service
			newService := service.New(instance)
			oldService := &corev1.Service{}
			if err := r.Client.Get(ctx, req.NamespacedName, oldService); err != nil {
				return ctrl.Result{}, err
			}
			// The live Service's Spec carries server-populated fields,
			// so preserve them when swapping in the new Spec
			clusterIP := oldService.Spec.ClusterIP
			oldService.Spec = newService.Spec
			oldService.Spec.ClusterIP = clusterIP // keep the assigned ClusterIP, e.g. 10.254.x.x
			if err := r.Client.Update(ctx, oldService); err != nil {
				return ctrl.Result{}, err
			}

			// Refresh the spec annotation
			data, _ := json.Marshal(instance.Spec)
			if instance.Annotations != nil {
				instance.Annotations["spec"] = string(data)
			} else {
				instance.Annotations = map[string]string{"spec": string(data)}
			}
			if err := r.Client.Update(ctx, instance); err != nil {
				return ctrl.Result{}, err
			}
		}
	}

	return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *RedisReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&redisv1.Redis{}).
		Complete(r)
}
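
The update path above detects drift by comparing the current Spec against the copy serialized into the "spec" annotation. Stripped of the Kubernetes types, the technique looks like this (plain structs used purely for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

// spec is a stand-in for RedisSpec; any JSON-serializable struct works.
type spec struct {
	Replicas int32  `json:"replicas"`
	Image    string `json:"image"`
}

// specChanged reports whether the desired spec differs from the copy
// previously saved in the object's annotations, mirroring the
// json.Unmarshal + reflect.DeepEqual check in Reconcile.
func specChanged(annotations map[string]string, desired spec) (bool, error) {
	var old spec
	if err := json.Unmarshal([]byte(annotations["spec"]), &old); err != nil {
		return false, err
	}
	return !reflect.DeepEqual(old, desired), nil
}

// saveSpec serializes the spec back into the annotations after a
// successful create/update, as the controller does.
func saveSpec(annotations map[string]string, s spec) {
	data, _ := json.Marshal(s)
	annotations["spec"] = string(data)
}

func main() {
	ann := map[string]string{}
	saveSpec(ann, spec{Replicas: 1, Image: "redis:6.2.6"})

	changed, _ := specChanged(ann, spec{Replicas: 1, Image: "redis:6.2.6"})
	fmt.Println(changed) // false: nothing to update

	changed, _ = specChanged(ann, spec{Replicas: 3, Image: "redis:6.2.6"})
	fmt.Println(changed) // true: scale change triggers an update
}
```

Storing the last-applied spec in an annotation is one simple way to detect changes; comparing the Redis Spec directly against the live StatefulSet/Service fields would work too, at the cost of more field-by-field logic.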

Modifying the YAML file

The YAML for the Redis custom resource lives under config/samples in the project; its content is as follows:

apiVersion: redis.example.com/v1
kind: Redis
metadata:
  name: redis-sample # resource name
  namespace: test # namespace
spec:
  # Add fields here
  replicas: 1 # replica count
  image: 124.223.82.79:5000/redis:6.2.6 # container image
  type: NodePort # Service type: ClusterIP or NodePort
  ports: # port configuration
  - nodePort: 30000
    protocol: TCP
    targetPort: 6379
    port: 6379
  envs:
  - name: DEMO
    value: redis
  - name: GOPATH
    value: gopath
  resources: # resource limits and requests
    limits:
      cpu: 100m
      memory: 100Mi
    requests:
      cpu: 100m
      memory: 100Mi

Running

There are several ways to run the operator.

  • Running the code locally

    1. With the code on a k8s node, run the following from the project root (mainly used for development and testing):

    make generate && make manifests && make install && make run
    

Notes:

  1. The machine must have kubectl installed and a valid kubeconfig at ~/.kube/config (with cluster-admin privileges).
  2. When testing is done, stop the program with Ctrl+C, then run make uninstall to remove the CRD definitions.
  3. If the dependencies have not changed, you can simply run the code with go run main.go.

make generate: generates code containing the DeepCopy, DeepCopyInto, and DeepCopyObject method implementations

make manifests: generates the WebhookConfiguration, ClusterRole, and CustomResourceDefinition objects

make install: installs the CRDs into the K8s cluster specified in ~/.kube/config

make run: runs the code

make uninstall: removes the CRDs from the K8s cluster specified in ~/.kube/config

  • Running inside the k8s cluster

    1. Modify the Dockerfile (in the project root) as follows:
    # Build the manager binary
    FROM golang:1.15 as builder
    
    WORKDIR /workspace
    # Copy the Go Modules manifests
    COPY go.mod go.mod
    COPY go.sum go.sum
    # cache deps before building and copying source so that we don't need to re-download as much
    # and so that source changes don't invalidate our downloaded layer
    
    ENV GOPROXY https://goproxy.cn,direct
    
    RUN go mod download
    
    # Copy the go source
    COPY main.go main.go
    COPY api/ api/
    COPY controllers/ controllers/
    COPY resource/ resource/
    
    # Build
    RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -a -o manager main.go
    
    # Use distroless as minimal base image to package the manager binary
    # Refer to https://github.com/GoogleContainerTools/distroless for more details
    
    # FROM gcr.io/distroless/static:nonroot
    FROM kubeimages/distroless-static:latest
    WORKDIR /
    COPY --from=builder /workspace/manager .
    USER 65532:65532
    
    ENTRYPOINT ["/manager"]
    
    • Added the environment variable: ENV GOPROXY https://goproxy.cn,direct
    • Added: COPY resource/ resource/
    • Changed the base image: FROM kubeimages/distroless-static:latest
    2. Build the image with the following command:
    make docker-build IMG=124.223.82.79:5000/redis-operator:v1.0
    
    3. Run operator-controller-manager

      Once operator-controller-manager is up, its pod runs two containers: [kube-rbac-proxy manager]

      The manager container uses the image built in step 2.

      The kube-rbac-proxy image must be changed. Path: config/default/manager_auth_proxy_patch.yaml under the project root; content as follows:

      # This patch injects a sidecar container which is an HTTP proxy for the
      # controller manager, it performs RBAC authorization against the Kubernetes API using SubjectAccessReviews.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: controller-manager
        namespace: system
      spec:
        template:
          spec:
            containers:
            - name: kube-rbac-proxy
              image: 124.223.82.79:5000/kube-rbac-proxy:v0.11.0  # the default image gcr.io/kubesphere/kube-rbac-proxy:v0.8.0 cannot be pulled; use a Docker Hub mirror instead. This image must be changed!
              args:
              - "--secure-listen-address=0.0.0.0:8443"
              - "--upstream=http://127.0.0.1:8080/"
              - "--logtostderr=true"
              - "--v=10"
              ports:
              - containerPort: 8443
                name: https
            - name: manager
              args:
              - "--health-probe-bind-address=:8081"
              - "--metrics-bind-address=127.0.0.1:8080"
              - "--leader-elect"
      

      Run the following command to create operator-controller-manager:

      make deploy IMG=124.223.82.79:5000/redis-operator:v1.0
      
    4. Create a ClusterRoleBinding

      After creating operator-controller-manager, if you create the custom resource right away, the controller logs show permission errors like the following:

      E0210 05:45:33.131287       1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.20.2/tools/cache/reflector.go:167: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:serviceaccount:redis-operator-system:redis-operator-controller-manager" cannot list resource "statefulsets" in API group "apps" at the cluster scope
      
      • Option 1: bind controller-manager directly to the cluster-admin role

      Create a cluster-admin.yaml file with the following content:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: cluster-admin-rolebinding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: redis-operator-controller-manager
        namespace: redis-operator-system
      

      Apply it with:

      kubectl apply -f cluster-admin.yaml
      
      • Option 2: modify role.yaml under the rbac directory
    5. Create the custom resource

      File path: config/samples/redis_v1_redis.yaml under the project root

      Create it by running the following from the project root:

      kubectl apply -f ./config/samples/redis_v1_redis.yaml
      
    6. Delete the CRDs

      make uninstall
      
    7. Delete the controller-manager

      make undeploy
      

    make undeploy: removes the controller from the K8s cluster specified in ~/.kube/config

posted @ 2025-04-09 14:55 by 刺猬多看看