I. Shenyang project: standard offline deployment steps
1. Initialize the data disk; the docker27 / mysql8 / redis 6.2-7.2 / minio / mongodb data all live under the /秦瑞 data-disk mount point:
mkfs.xfs /dev/vg1/text1
mkdir -p /秦瑞 && mount /dev/vg1/text1 /秦瑞
2. Install Docker 27:
cp docker/* /usr/bin/
cat > /etc/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=65535
LimitNPROC=65535
LimitCORE=65535
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
mkdir -p /秦瑞/docker_data && mkdir /etc/docker
Contents of /etc/docker/daemon.json:
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "data-root": "/秦瑞/docker_data"
}
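Before reloading Docker it is worth syntax-checking the JSON, since a malformed daemon.json prevents dockerd from starting. A minimal sketch, assuming python3 is available on the host and using /tmp/daemon.json as a stand-in for /etc/docker/daemon.json:

```shell
# Write the config to a scratch path and validate it before installing;
# /tmp/daemon.json is a stand-in for /etc/docker/daemon.json.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m", "max-file": "3" },
  "data-root": "/秦瑞/docker_data"
}
EOF
# json.tool exits non-zero on any syntax error (trailing comma, missing quote)
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```

Only after the check passes copy the file into /etc/docker/ and run systemctl daemon-reload.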
systemctl daemon-reload
systemctl enable --now docker
II. Redis 6.2/7.2 deployment
Pull the ARM64 image:
docker pull --platform linux/arm64  redis:6.2-bookworm
Verify that the pulled image's architecture is arm64:
docker inspect --format '{{.Architecture}}' redis:6.2-bookworm
Export and compress the image for offline transfer:
docker save redis:6.2-bookworm | gzip > redis-6.2-bookworm-arm64.tar.gz
Import the image on the Kunpeng 920 host:
gunzip -c redis-6.2-bookworm-arm64.tar.gz | docker load
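Since the archive is carried to the offline host by hand, a checksum guards against a truncated copy before `docker load`. A sketch of the round trip, using a stand-in file in /tmp instead of the real archive:

```shell
cd /tmp
echo "stand-in payload" > redis-6.2-bookworm-arm64.tar.gz   # placeholder for the real archive
# On the export side, record the checksum next to the archive:
sha256sum redis-6.2-bookworm-arm64.tar.gz > redis-6.2-bookworm-arm64.tar.gz.sha256
# On the Kunpeng 920 side, verify before importing:
sha256sum -c redis-6.2-bookworm-arm64.tar.gz.sha256
```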
Prepare redis.conf, the persistence directory, and the password (edit the copy that will be bind-mounted into the container):
sed -i 's/^bind 127.0.0.1/# bind 127.0.0.1/'        /秦瑞/redis_data/redis.conf
sed -i 's/^protected-mode yes/protected-mode no/'   /秦瑞/redis_data/redis.conf
sed -i 's/^# requirepass.*/requirepass sy_123456/'  /秦瑞/redis_data/redis.conf
sed -i 's/^appendonly no/appendonly yes/'           /秦瑞/redis_data/redis.conf
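The sed edits can be rehearsed on a scratch copy first; the sample below fabricates only the four relevant default lines (a real redis.conf has many more):

```shell
# Scratch copy with the stock default values of the four directives we touch.
cat > /tmp/redis-test.conf <<'EOF'
bind 127.0.0.1
protected-mode yes
# requirepass foobared
appendonly no
EOF
sed -i 's/^bind 127.0.0.1/# bind 127.0.0.1/'        /tmp/redis-test.conf
sed -i 's/^protected-mode yes/protected-mode no/'   /tmp/redis-test.conf
# .* is needed: the stock line is "# requirepass foobared", and replacing only
# the "# requirepass" prefix would leave a stray second argument behind.
sed -i 's/^# requirepass.*/requirepass sy_123456/'  /tmp/redis-test.conf
sed -i 's/^appendonly no/appendonly yes/'           /tmp/redis-test.conf
cat /tmp/redis-test.conf
```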
docker run -d --restart=always --name redis62   -p 6379:6379   -v /秦瑞/redis_data/redis.conf:/usr/local/etc/redis/redis.conf:ro   -v /秦瑞/redis_data:/data   redis:6.2-bookworm redis-server /usr/local/etc/redis/redis.conf
Errors encountered:
A config file that works on Redis 7.2 fails on 6.2:
*** FATAL CONFIG FILE ERROR (Redis 6.2.18) ***
Reading the configuration file, at line 422
>>> 'locale-collate ""'
Bad directive or wrong number of arguments
The 6.2 branch simply does not know this directive: locale-collate is a localized-sorting parameter introduced only in Redis 7.0. As soon as the config contains any unrecognized keyword, Redis exits with a fatal error.
Redis 6.2.18 was then forced out again by the ARM64 + 64 KB page-size kernel copy-on-write (COW) defect.
Temporary workaround: add the officially suggested ignore-warnings ARM64-COW-BUG to the config and it starts immediately; the proper production fix is still a kernel upgrade to >= 5.10.110 (or the vendor's latest stable).
Add to redis.conf:
ignore-warnings ARM64-COW-BUG
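When scripting this fix, the directive should be appended only once so the script stays safe to re-run. A guarded append, using /tmp/redis62.conf as an illustrative path:

```shell
CONF=/tmp/redis62.conf
touch "$CONF"
# Append the COW workaround only if it is not already present.
add_cow_workaround() {
  grep -q '^ignore-warnings ARM64-COW-BUG' "$CONF" || \
    echo 'ignore-warnings ARM64-COW-BUG' >> "$CONF"
}
add_cow_workaround
add_cow_workaround   # second call is a no-op, so re-running is harmless
grep -c 'ARM64-COW-BUG' "$CONF"   # prints 1
```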
III. MinIO object storage, RELEASE.2025-04-22T22-12-26Z
Check the MinIO version:
docker inspect 8efaa72525a9 | grep version
  		"com.docker.compose.version": "2.35.0",
                "io.buildah.version": "1.39.0-dev",
                "version": "RELEASE.2025-04-22T22-12-26Z"
docker pull --platform linux/arm64 swr.cn-north-4.myhuaweicloud.com/ddn-k8s/quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z
docker save swr.cn-north-4.myhuaweicloud.com/ddn-k8s/quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z | gzip > minio-20250422-arm64.tar.gz
gunzip -c minio-20250422-arm64.tar.gz | docker load
Verify that the MinIO image pulled on x86 with --platform linux/arm64 really is the arm64 build:
docker inspect --format '{{.Architecture}}' swr.cn-north-4.myhuaweicloud.com/ddn-k8s/quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z
docker run -d -p 9010:9000 -p 9020:9020 \
--name minio  \
-e "MINIO_ROOT_USER=admin" \
-e "MINIO_ROOT_PASSWORD=admin123456" \
-v /秦瑞/minio_data:/data \
quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z server /data --console-address ":9020"
The pull actually used was the following:
docker pull --platform linux/arm64 quay.io/minio/minio:RELEASE.2025-04-22T22-12-26Z
IV. Confirm the Java version
openjdk 17.0.2 2022-01-18
Deployment node layout notes
V. Initialize data, directories, and accounts
docker run -d --name mysql8   --restart=always   -p 3306:3306   -e MYSQL_ROOT_PASSWORD=秦瑞@123   -v /秦瑞/mysql_data:/var/lib/mysql   mysql:8.0-arm64
mysql, minio
Create the 3 databases:
CREATE DATABASE `sy_uac` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `sy_server_cloud` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
CREATE DATABASE `nacos_config` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
Create a user for each database, scoped to that database:
CREATE USER 'dev'@'%' IDENTIFIED BY 'pnh8M7gRt%6E';
GRANT ALL PRIVILEGES ON nacos_config.* TO 'dev'@'%';
FLUSH PRIVILEGES;
CREATE USER 'sy_dev'@'%' IDENTIFIED BY 'qvBYHowz_e36';
GRANT ALL PRIVILEGES ON sy_uac.* TO 'sy_dev'@'%';
GRANT ALL PRIVILEGES ON sy_server_cloud.* TO 'sy_dev'@'%';
FLUSH PRIVILEGES;
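The account steps can be collected into a single idempotent script; CREATE USER IF NOT EXISTS avoids the "user already exists" error when 'sy_dev' is granted on both databases. Passwords are copied from the statements above; the apply command in the comment assumes the mysql8 container name used earlier:

```shell
cat > /tmp/init_users.sql <<'EOF'
CREATE USER IF NOT EXISTS 'dev'@'%' IDENTIFIED BY 'pnh8M7gRt%6E';
GRANT ALL PRIVILEGES ON nacos_config.* TO 'dev'@'%';
CREATE USER IF NOT EXISTS 'sy_dev'@'%' IDENTIFIED BY 'qvBYHowz_e36';
GRANT ALL PRIVILEGES ON sy_uac.* TO 'sy_dev'@'%';
GRANT ALL PRIVILEGES ON sy_server_cloud.* TO 'sy_dev'@'%';
FLUSH PRIVILEGES;
EOF
# Apply inside the container:
#   docker exec -i mysql8 mysql -uroot -p'秦瑞@123' < /tmp/init_users.sql
grep -c 'GRANT ALL' /tmp/init_users.sql   # prints 3
```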
VI. Nacos 2.3.2
Verify whether a jar built on x86 can run on Kunpeng 920 (i.e. that it bundles no native libraries):
jar tf coolguard-admin.jar | grep -E '\.(so|dll)$'
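The reasoning behind the jar tf check above: a jar whose entry list contains no .so/.dll files is pure bytecode and runs unchanged on aarch64. A self-contained illustration that builds a stand-in jar (a jar is just a zip) and inspects its entry list, for hosts where the jdk is not yet installed:

```shell
# Build a tiny stand-in jar with python's zipfile module.
python3 - <<'EOF'
import zipfile
with zipfile.ZipFile('/tmp/demo.jar', 'w') as z:
    z.writestr('META-INF/MANIFEST.MF', 'Manifest-Version: 1.0\n')
    z.writestr('com/example/App.class', b'\xca\xfe\xba\xbe')
EOF
# List the entries and scan for native libraries, like `jar tf ... | grep` above.
python3 -c 'import zipfile; print("\n".join(zipfile.ZipFile("/tmp/demo.jar").namelist()))' \
  | grep -E '\.(so|dll)$' && echo "native libs found: check for aarch64 builds" \
  || echo "pure Java jar: portable to Kunpeng 920"
```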
A VPN must be configured before these can be pulled:
docker pull --platform linux/arm64 pkyit/nacos:2.3.2-dm8
docker pull --platform=linux/arm64 harbor.evermodel.ai:8721/infra/pkyit/nacos:2.3.2-dm8-arm64
Offline deployment: export and import the image:
docker save harbor.evermodel.ai:8721/infra/pkyit/nacos:2.3.2-dm8-arm64 | gzip > nacos-2.3.2-dm8-arm64.tar.gz
gunzip -c nacos-2.3.2-dm8-arm64.tar.gz | docker load
docker run -d --name nacos-server \
  -p 8848:8848 \
  -p 9848:9848 \
  --privileged=true \
  --restart=always \
  -e MODE=standalone \
  -e JVM_XMS=4096m \
  -e JVM_XMX=4096m \
  -e JVM_XMN=2048m \
  -e MYSQL_SERVICE_HOST=10.41.46.2 \
  -e MYSQL_SERVICE_PORT=3306 \
  -e MYSQL_SERVICE_DB_NAME=nacos_config \
  -e MYSQL_SERVICE_USER=dev \
  -e MYSQL_SERVICE_PASSWORD=pnh8M7gRt%6E \
  -e NACOS_AUTH_ENABLE=true \
  -e NACOS_AUTH_IDENTITY_KEY=admin \
  -e NACOS_AUTH_IDENTITY_VALUE=admin \
  -e NACOS_AUTH_TOKEN=SecretKey10012345678901234567qwertyuioplkjhgfd8999987654901234567890123456789 \
  -v /秦瑞/nacos_data/application.properties:/home/nacos/conf/application.properties \
  harbor.evermodel.ai:8721/infra/pkyit/nacos:2.3.2-dm8-arm64
Note: the default Nacos credentials (nacos/nacos) were not carried into the container.
VII. OpenJDK 17.0.2 (2022-01-18) on Kunpeng 920
Confirm the Java version:
openjdk 17.0.2 2022-01-18
Temporary addition: MongoDB 8.0.13
docker run -d --name mongodb8 \
    -e MONGO_INITDB_ROOT_USERNAME=root \
    -e MONGO_INITDB_ROOT_PASSWORD=123456 \
    -p 27017:27017 \
    -v /秦瑞/mongodb_data/db:/data/db \
    mongodb/mongodb-community-server:8.0-ubuntu2204
VIII. Bringing the business services online
Middleware deployment: node layout
On each node below, first mount the data disk and install Docker 27:
10.41.46.2		mysql8.0.39
10.41.45.2		redis7.2 + mongoDb8.0.13
10.41.45.3		redis6.2 + Nginx1.28
10.41.46.3		minio 2025-04 release
10.41.45.5		nacos2.3.2
10.41.45.6		es8.11.3 + ragflow intelligent Q&A
Application deployment:
Backend: openjdk17
10.41.45.8
To do:
aicsg-data-pedestal-service must be deployed; the data-set platform it depends on does not need deployment [the key pairs serve the data-interface platform and its four roles: data provider, data consumer, administrator, reviewer]
url: http://192.168.200.31
    port: 31003
 MongoDB is used by the rule engine; the rule engine is not being deployed yet, still undecided
 uri: mongodb://root:123456@192.168.200.162:27017/aicsg_alert?authSource=admin
aicsg-alert-service will be deployed; the poc service it depends on does not need deployment
 http://192.168.200.31:30004/poc/bailian/kb-retrieve   requires the poc server side to change the langflow address
chat-model-service: Qiang Ge to confirm whether deployment is needed; the Shenyang co-governance early-warning analysis depends on:
  url: "http://192.168.200.252/api/v1/chats/257f70c85e2c11f08d85924017c13f33"
ragflow-minio: confirm whether a single MinIO deployment can be shared (version question)
  endpoint: http://192.168.200.252:9000
es: not deployed for now
  server: https://192.168.200.171:9200
lang-flow-service: just open the port; not actually used and not deployed (ragflow is used instead)
 url: http://192.168.200.22:3002
  apiKey: sk-n_kra-4Fp2SvgkWUL-SMHFu9pVKAWlR_RkzIIqyNTpg
	
aicsg-customdata-service.yaml: no need to deploy
aicsg-rule-engine-service.yaml: already deployed in Shenyang, not deploying
nc -zv 10.100.54.2 8002  
Unconfirmed: whether co-governance can directly reuse the services the housing-and-construction side has already deployed (langflow, rule engine)
Confirm whether that deployment is needed; in phase two the front end and back end will all move to HTTPS
Big-screen project: https://workspace.easyv.cloud/shareScreen/eyJzY3JlZW5JZCI6MzUxODQzOX0=
Open ports to the already-deployed langflow:
nc -zv 10.100.54.2 3000
nc -zv 10.100.54.2 17860
tar -zxvf /tmp/openjdk-17.0.2_linux-aarch64_bin.tar.gz -C /usr/local/
Add the following to ~/.bashrc or /etc/profile:
export JAVA_HOME=/usr/local/jdk-17.0.2
export PATH=$JAVA_HOME/bin:$PATH
Verify inter-node port connectivity with nc/telnet:
nc -vz 10.41.46.2 3306
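The spot checks can be batched; the host:port pairs below are taken from the middleware node list above, and nc -z only tests TCP reachability:

```shell
# FAIL lines mean the port is unreachable from this node (or nc is missing).
for hp in 10.41.46.2:3306 10.41.45.2:6379 10.41.45.3:6379 \
          10.41.46.3:9010 10.41.45.5:8848 10.41.45.6:9200; do
  host=${hp%:*}; port=${hp#*:}
  nc -z -w 2 "$host" "$port" && echo "OK   $hp" || echo "FAIL $hp"
done
```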
Initialize the Nacos configuration:
Backend application deployment details:
export SPRING_CLOUD_NACOS_CONFIG_SERVER_ADDR=10.41.45.5:8848
export SPRING_CLOUD_NACOS_DISCOVERY_SERVER_ADDR=10.41.45.5:8848
kubectl cp uac-manager-deploy-6d854c87f4-5z6k7:/uac-manager-1.2.0.jar -n sy-dev /opt/qinrui/uac-manager-1.2.0.jar
java -Dspring.application.name=uac-manager -jar uac-manager-1.2.0.jar   # -D options must come before -jar, or they are passed as program arguments
Backend service list:
alert-service-deploy-66c855b8cc-r4rgw
gateway-deploy-677bc6d545-xvzpv		--- not yet used
pedestal-service-deploy-8d784655-fghww
portal-service-deploy-5c477f6b4b-cgdcj
rule-service-deploy-56d79cbf6b-bqqsl	--- not deployed for now
supervise-service-deploy-6b5b9b6f6d-8hd96
uac-manager-deploy-6d854c87f4-5z6k7
uac-server-deploy-5f9479b88f-dht9r
uac-service-deploy-d76c6fc4b-jhcf2
Phase one deploys 7 backend services in total:
java       1739  root   78u  IPv6 239538647      0t0  TCP *:9002 (LISTEN)
java       7466  root  184u  IPv6 240741188      0t0  TCP *:8001 (LISTEN)
java      14678  root  184u  IPv6 240364837      0t0  TCP *:8002 (LISTEN)
java      19482  root   78u  IPv6 239433037      0t0  TCP *:http-alt (LISTEN)
java      27660  root  145u  IPv6 242458989      0t0  TCP *:8101 (LISTEN)
java      28035  root   78u  IPv6 239505500      0t0  TCP *:9003 (LISTEN)
java      31817  root  184u  IPv6 240738400      0t0  TCP *:8003 (LISTEN)
nohup java -jar aicsg-portal-service.jar > output.log 2>&1 &
nohup java -jar aicsg-data-pedestal-service.jar > output.log 2>&1 &
nohup java -jar aicsg-alert-service.jar > output.log 2>&1 &
nohup java -jar aicsg-supervise-service.jar > output.log 2>&1 &
nohup java -jar uac-manager-1.2.0.jar > output.log 2>&1 &
nohup java -jar uac-service-1.2.0.jar > output.log 2>&1 &
nohup java -jar uac-server-1.2.0.jar > output.log 2>&1 &
Added later: rule-engine
nohup java -jar aicsg-rule-engine-service.jar > output.log 2>&1 &
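All eight nohup lines above redirect to the same output.log; if the jars share a working directory, the logs interleave. A small helper sketch that derives a per-service log and pid file from the jar name (function names are illustrative, not from the original scripts):

```shell
# log_for maps app.jar -> app.log; start_jar launches one service with its
# own log and pid file instead of a shared output.log.
log_for() { echo "${1%.jar}.log"; }
start_jar() {                                # usage: start_jar app.jar
  nohup java -jar "$1" > "$(log_for "$1")" 2>&1 &
  echo $! > "${1%.jar}.pid"
}
log_for uac-manager-1.2.0.jar   # prints uac-manager-1.2.0.log
```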
Front end: nginx
nginx 1.28 for Kunpeng 920
docker pull --platform=linux/arm64 linuxserver/nginx:1.28.0
docker save linuxserver/nginx:1.28.0 | gzip > nginx1.28-arm.tar.gz
Initial run command (the tag passed to docker run must match an image actually loaded on the host):
docker run -d --name nginx -p 80:80 \
    -v /秦瑞/nginx_data/conf.d:/etc/nginx/conf.d \
    -v /秦瑞/nginx_data/html:/usr/share/nginx/html \
    -v /秦瑞/nginx_data/nginx.conf:/etc/nginx/nginx.conf \
    -v /秦瑞/nginx_data/nginx_logs:/var/log/nginx \
    nginx:1.28-alpine
Final run command (host network):
docker run -itd \
--restart=always \
--privileged=true \
--name nginx \
--net host \
-v /秦瑞/nginx_data/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /秦瑞/nginx_data/conf.d:/etc/nginx/conf.d \
-v /秦瑞/nginx_data/html:/usr/share/nginx/html:ro \
-v /秦瑞/nginx_data/nginx_logs:/var/log/nginx/ \
nginx:stable-perl
Web front ends in detail:
kubectl cp portal-web-deploy-65bf4b4bf9-67p8z:/etc/nginx/conf.d/default.conf -n sy-dev /opt/qinrui/default.conf
kubectl cp manager-web-deploy-7cdb6968d7-z5k5m:/usr/share/nginx/manager-web.tgz -n sy-dev /opt/qinrui/manager-web.tgz
A single nginx hosts the different front ends, with standardized static-file and log layout.
Only the error log and the access log were kept; no separate per-vhost logs were generated.
Four steps per web front end:
1. Stage the static files
2. Configure the virtual host: kubectl cp manager-web-deploy-7cdb6968d7-z5k5m:/etc/nginx/conf.d/default.conf -n sy-dev /opt/qinrui/manager-web.conf
3. Find the matching backend service port from the manifests: find . -type f | xargs grep -l "32206"
4. Bind the domain name in the local hosts file
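The virtual-host configuration can be standardized as one conf file per front end under conf.d. A sketch of such a template; the server_name, root path, and proxied backend port are placeholders to be filled in from the copied default.conf files, not values from the real manifests:

```shell
# Illustrative per-front-end vhost; every value marked "placeholder" must be
# replaced with the real one recovered from the k8s default.conf.
cat > /tmp/manager-web.conf <<'EOF'
server {
    listen 80;
    server_name manager-web.example.local;           # placeholder
    root /usr/share/nginx/html/manager-web;
    access_log /var/log/nginx/manager-web.access.log;
    error_log  /var/log/nginx/manager-web.error.log;
    location /api/ {
        proxy_pass http://10.41.45.8:8001/;          # placeholder backend port
    }
}
EOF
grep -c 'server_name' /tmp/manager-web.conf   # prints 1
```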
Web front-end host: 10.41.45.3
Java backend host: 10.41.45.8
kubectl cp pedestal-web-deploy-797c46448d-8zqjj:/usr/share/nginx/pedestal-web.tgz -n sy-dev /opt/qinrui/pedestal-web.tgz
find . -type f | xargs grep -l "32206"
kubectl cp scenarios-web-deploy-56bbd98789-l8z2d:/usr/share/nginx/scenarios-web.tgz -n sy-dev /opt/qinrui/scenarios-web.tgz
kubectl cp scenarios-web-deploy-56bbd98789-l8z2d:/etc/nginx/conf.d/default.conf -n sy-dev /opt/qinrui/scenarios-web.conf
kubectl cp sharing-web-deploy-565cfb896-l47gq:/etc/nginx/conf.d/default.conf -n sy-dev /opt/qinrui/sharing-web.conf
kubectl cp sharing-web-deploy-565cfb896-l47gq:/usr/share/nginx/sharing-web.tgz -n sy-dev /opt/qinrui/sharing-web.tgz
kubectl cp supervise-web-deploy-6745bc9fcd-zs8w7:/etc/nginx/conf.d/default.conf -n sy-dev /opt/qinrui/supervise-web.conf
kubectl cp warning-web-deploy-5ddff4ff5c-wpnbv:/etc/nginx/conf.d/default.conf -n sy-dev /opt/qinrui/warning-web.conf
7 web front ends deployed in total
Issues from initial verification:
auth-manage already had problems in the original test environment
The newly deployed supervise-web in Shenyang showed "service unavailable", probably a backend issue -- resolved
rule-engine: to be confirmed whether it needs deployment (corresponds to the port-web intelligent engine; it has many Nacos dependencies)
Backend addition: rule-engine on port 8103
kubectl cp rule-service-deploy-56d79cbf6b-bqqsl:/aicsg-rule-engine-service.jar -n sy-dev /opt/qinrui/aicsg-rule-engine-service.jar
tar: Removing leading `/' from member names
Dropping out copy after 0 retries
error: unexpected EOF
Workaround: stream the file with kubectl exec instead of kubectl cp:
kubectl exec rule-service-deploy-56d79cbf6b-bqqsl -n sy-dev -- cat /aicsg-rule-engine-service.jar > /opt/qinrui/aicsg-rule-engine-service--.jar
port-web: verify the workbench-related feature APIs
ragflow deployment uses the ARM middleware already deployed locally
IX. Deploy ragflow and add the ES environment
ragflow uses the internally hosted minio, es, mysql, and redis:
minio	http://10.41.46.3:9020/   admin / admin123456   (API endpoint on port 9010)
es	Elasticsearch bound to 10.41.45.6:9200; Kibana listens on 10.41.45.6:5601; Kibana UI http://10.41.45.6:5601/   user: elastic   password: o9FlTCvaWeWcoWzWl29d
mysql	10.41.46.2	port 3306	user: root	password: 秦瑞@123
redis	10.41.45.2	port 6379	password: sy_123456
./bin/elasticsearch-users useradd rag -p 321abc