cgroups and systemd: obtaining the systemd source from an src rpm, adding log statements, and repackaging with rpmbuild

Origin of the problem

The service runs in a "rich container": an init process is PID 1 inside the container, and systemd manages all the services.
During an upgrade the nginx startup script changed: nginx used to be started as root, but after a de-root hardening it is now started as the nginx user.


During the upgrade, nginx failed to start with the following error:

"Refusing to accept PID outside of service control group, acquired through unsafe symlink chain: %s", s->pid_file);

Investigation showed that with the same systemd and nginx versions and the same change, things worked on a physical machine but failed in the k8s environment, so we suspected a containerization-related issue.
That raised two questions:
●What scenario triggers this problem?
●Why does it only show up in the containerized environment and not on physical machines?

Analysis

To pin down the problem we had to start from the systemd code and work backwards from the error log.
At first we browsed the repository on github, but it was hard to match it to the exact version in our environment, so we downloaded the src.rpm instead, unpacked it, and applied all of its patches to obtain the source of the exact rpm in use.
The systemd version in the field is systemd.x86_64 219-78.tl2.7.1 (its code is identical to 219-78.el7_9.7).
The corresponding src rpm: https://mirrors.tencent.com/tlinux/2.4/tlinux/SRPMS/systemd-219-78.tl2.7.1.src.rpm

Obtaining the source from the src rpm:

# Start a tlinux2.4 container as the build machine, with /data mounted:
IMAGE_ID='xxx'
NAME=tlinux2_compile
docker run --privileged -idt \
        --name $NAME \
        -v /data:/data \
        --net host \
        ${IMAGE_ID} \
        /usr/sbin/init
docker exec -it $NAME /bin/bash

# Set rpmbuild's topdir to /data/rpmbuild to make later steps easier:
bash-4.2# cat ~/.rpmmacros 
%_topdir /data/rpmbuild

cd /data/rpmbuild
# Place the src rpm in this directory and install it (this unpacks sources and spec into %_topdir):
rpm -ivh systemd-219-78.tl2.7.1.src.rpm
cd SOURCES
ls
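
The rebuild later on needs the build dependencies declared in the spec; one way to pull them in (assuming yum-utils is available in the image):

# Install the build dependencies declared in the spec:
yum install -y yum-utils rpm-build
yum-builddep -y /data/rpmbuild/SPECS/systemd.spec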

Use the following script to obtain the fully patched source tree:

#!/bin/bash

# Check that an SRPM file was provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 <path_to_srpm>"
    exit 1
fi

SRPM_FILE=$1

# Extract the SRPM (already installed above, so this stays commented out)
# rpm -ivh "$SRPM_FILE"

# Get the package name and version from the spec
SPEC_FILE=$(find /data/rpmbuild/SPECS -name "*.spec" | head -n 1)
PACKAGE_NAME=$(rpmspec -q --qf "%{NAME}\n" "$SPEC_FILE" | head -n 1)
VERSION=$(rpmspec -q --qf "%{VERSION}\n" "$SPEC_FILE" | head -n 1)

# Unpack the source tarball
SOURCE_TARBALL=$(find /data/rpmbuild/SOURCES -name "${PACKAGE_NAME}-${VERSION}*.tar.*" | head -n 1)
mkdir -p /tmp/${PACKAGE_NAME}
tar -xf "$SOURCE_TARBALL" -C /tmp/${PACKAGE_NAME}
cd /tmp/${PACKAGE_NAME}/${PACKAGE_NAME}-${VERSION} || exit 1

# Apply the patches in the order they appear in the spec
PATCHES=$(grep '^Patch[0-9]*:' "$SPEC_FILE" | awk '{print $2}')
for patch in $PATCHES; do
    patch -p1 < /data/rpmbuild/SOURCES/"$patch"
done

echo "All patches applied; the patched source tree is in /tmp/${PACKAGE_NAME}/${PACKAGE_NAME}-${VERSION}."

Adding log statements, rebuilding, inspecting logs, gdb

# Go to /data/rpmbuild:
cd /data/rpmbuild
# Modify the code and add log statements:
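
# Note: the tree produced by the extraction script is a plain unpacked tarball,
# not a git repository, so the format-patch step below will not work out of the
# box. A sketch of one way to set it up before editing (the /tmp path comes from
# the script above; the exact directory name is assumed):
cd /tmp/systemd/systemd-219
git init
git config user.name dev && git config user.email dev@example.com  # any identity works
git add -A
git commit -m "baseline: pristine patched source"
# ... edit the code here ...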

# Commit the change and generate a patch:
git commit -m "xxx"
git format-patch HEAD^

# Copy the patch to /data/rpmbuild/SOURCES:

# Edit /data/rpmbuild/SPECS/systemd.spec
## Register the new patch:
Patch0852: 0001-comment-to-test.patch

## Add -g -O0 so the binary can be debugged with gdb:
%configure "${CONFIGURE_OPTS[@]}"
export CFLAGS="-g -O0" #HERE
make %{?_smp_mflags} GCC_COLORS="" V=1

# Rebuild the rpm:
rpmbuild -ba SPECS/systemd.spec 
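
# The built packages land under %_topdir after a successful build:
ls /data/rpmbuild/RPMS/x86_64/systemd-*.rpm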

# Reinstall on the target machine:
rpm -e --nodeps systemd
rpm -e --nodeps systemd-debuginfo
rpm -ivh /data/rpmbuild/systemd-219-78.tl2.7.1.x86_64.rpm
rpm -ivh /data/rpmbuild/systemd-debuginfo-219-78.tl2.7.1.x86_64.rpm

# Follow the systemd journal:
journalctl -f

# Restart csp-nginx:
systemctl restart csp-nginx

# Attach gdb to systemd (PID 1):
gdb /usr/lib/systemd/systemd 1
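
# A sketch of a session with the rebuilt -g -O0 binary; service_load_pid_file is
# the function analyzed below (assumes the symbol exists in this version). Note
# that while PID 1 is stopped at a breakpoint, systemctl commands will hang:
(gdb) set pagination off
(gdb) break service_load_pid_file
(gdb) continue
# then, from another terminal: systemctl restart csp-nginx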

Reference: https://systemd-devel.freedesktop.narkive.com/bLn5kkmz/systemd-debugging-with-gdb

Minimal reproduction environment

1. tlinux2.4 OS with systemd 219-78.tl2.7.1, started as a docker container.
2. Configure nginx to run as the nginx user:

# Create the nginx user first (the chown calls below fail if it does not exist):
if ! grep -qw nginx /etc/passwd; then
  useradd -d /var/lib/nginx -s /sbin/nologin nginx
fi

chown -R nginx:nginx /var/log/nginx/
chown -R nginx:nginx /var/lib/nginx/

# Switch the worker user from root to nginx:
sed -i 's/user root;/user nginx;/g' /var/lib/nginx/nginx/conf/nginx.conf

Then restart nginx:
systemctl restart nginx
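
If the reproduction works, the restart fails and the journal carries the error quoted at the top of this article; a quick check (unit name as configured above):

systemctl status nginx
journalctl -u nginx --no-pager | grep "unsafe symlink chain"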

Code walkthrough

service_load_pid_file(Service *s, bool may_warn)

// Pass pid_file together with the CHASE_SAFE flag to check that permissions along the path are sane:
fd = chase_symlinks(s->pid_file, NULL, CHASE_OPEN|CHASE_SAFE, NULL);
if (fd == -EPERM) {
    questionable_pid_file = true; // mark the pid file as questionable: skip this check for now, keep checking the rest, and let a later check decide whether the pid file can be used
    fd = chase_symlinks(s->pid_file, NULL, CHASE_OPEN, NULL);
}

What chase_symlinks does here:

  1. Walk the pid path level by level. E.g. with the service's pid path defined as /var/lib/csp_nginx/nginx/sbin/nginx.pid, the walk visits:
    /var
    /var/lib
    /var/lib/csp_nginx
    /var/lib/csp_nginx/nginx
    /var/lib/csp_nginx/nginx/sbin
    /var/lib/csp_nginx/nginx/sbin/nginx.pid
    At each level it checks the uid that owns the directory or file. If the parent is at least as privileged as the child, traversal is allowed; if a child is more privileged than its parent, this is treated as a security risk and the flag questionable_pid_file=true is set for the further check shown in the code below.
    On the physical machine that further check returns r > 0, so the error path is never entered and startup continues.
    In the docker container it returns 0 while questionable_pid_file is true, so the error is logged and startup aborts.
// Further check whether the current pid matches expectations:
r = service_is_suitable_main_pid(s, pid, prio);
if (r == 0) {
    if (questionable_pid_file) {
        log_unit_error(UNIT(s)->id, "Refusing to accept PID outside of service control group, acquired through unsafe symlink chain: %s", s->pid_file);
        return -EPERM;
    }
}
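
The per-component owners that the CHASE_SAFE walk inspects can be listed with util-linux's namei; a sketch, using the example pid path from above:

namei -o /var/lib/csp_nginx/nginx/sbin/nginx.pid
# prints each path component with its owner and group; a less privileged owner
# partway down the chain leading to a more privileged child is what the walk
# flags as unsafe (-EPERM), which is how questionable_pid_file got set above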

The main job of static int service_is_suitable_main_pid(Service *s, pid_t pid, int prio) is:
1. Basic sanity checks (omitted);
2. Get the manager from this service, and look up the owning Unit for the pid;
3. Check whether that owner equals UNIT(s). If it does, return 1 (the physical-machine path); otherwise return 0 (the docker-container path).

static int service_is_suitable_main_pid(Service *s, pid_t pid, int prio) {
    Unit *owner;
    /* Checks whether the specified PID is suitable as main PID for this service. returns negative if not, 0 if the
     * PID is questionnable but should be accepted if the source of configuration is trusted. > 0 if the PID is
     * good */
    owner = manager_get_unit_by_pid(UNIT(s)->manager, pid);
    if (owner == UNIT(s)) {
        log_unit_debug(UNIT(s)->id, "New main PID "PID_FMT" belongs to service, we are happy.", pid);
        return 1; /* Yay, it's definitely a good PID */
    }
    return 0; /* Hmm it's a suspicious PID, let's accept it if configuration source is trusted */
}

Looking up the unit by pid:

Unit *manager_get_unit_by_pid(Manager *m, pid_t pid) {
    _cleanup_free_ char *cgroup = NULL;
    int r;
    // get the cgroup path for the pid in the name=systemd hierarchy
    r = cg_pid_get_path(SYSTEMD_CGROUP_CONTROLLER, pid, &cgroup);
    if (r < 0) {
        return NULL;
    }
    // map the cgroup path back to a unit:
    return manager_get_unit_by_cgroup(m, cgroup);
}

At this point the cgroup obtained inside docker is /docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice/nginx.service (note the doubled prefix).
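
Where that string comes from can be checked directly inside the container; a sketch (finding the nginx master pid via pgrep is an assumption about the setup):

grep name=systemd "/proc/$(pgrep -o nginx)/cgroup"
# → 1:name=systemd:/docker/<id>/docker/<id>/system.slice/nginx.service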
Looking up the unit by cgroup:

Unit* manager_get_unit_by_cgroup(Manager *m, const char *cgroup) {
    char *p;
    Unit *u;
    //1
    u = hashmap_get(m->cgroup_unit, cgroup);
    if (u){
        return u;
    }
    p = strdupa(cgroup);
    for (;;) {
        char *e;
        // find the position where '/' last appears, set it to e:
        e = strrchr(p, '/');
        if (e == p || !e){
            return NULL;
        }
        // set *e to 0, so p can be stripped by position e:
        *e = 0;
        //2
        u = hashmap_get(m->cgroup_unit, p);
        if (u){
            return u;
        }
    }
}

The lookup at //1 does not hit directly. Instead the loop keeps cutting the last component off the cgroup path in p and probing m->cgroup_unit again at //2, producing these candidate keys:

/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d

The last truncation finally hits: that key exists in manager->cgroup_unit, and the unit it maps to is the root slice: p: '/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d', u->id: '-.slice', u->instance: '(null)'.
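
The probe order is easy to mimic in shell; a minimal sketch of the same strip-and-probe loop (the container id is shortened to CID for readability; systemd returns at the first hit, while the sketch just prints every candidate key):

p=/docker/CID/docker/CID/system.slice/nginx.service
echo "probe: $p"                  # //1: the first lookup uses the full path
while [ -n "${p%/*}" ] && [ "${p%/*}" != "$p" ]; do
    p=${p%/*}                     # drop the last component, like strrchr() + *e = 0
    echo "probe: $p"              # //2: each shortened key is tried in turn
done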

When the function returns, the owner found via the cgroup is compared with UNIT(s); the two differ, so 1 is not returned:

static int service_is_suitable_main_pid(Service *s, pid_t pid, int prio) {
    if (owner == UNIT(s)) {
        log_unit_debug(UNIT(s)->id, "New main PID "PID_FMT" belongs to service, we are happy.", pid);
        return 1; /* Yay, it's definitely a good PID */
    }
}

owner.id: '-.slice', UNIT(s).id: 'nginx.service'
Why does resolving pid -> cgroup -> unit give a different result from UNIT(s)?

UNIT(s) is defined as taking the unit through the service's meta member:

/* For casting the various unit types into a unit */
#define UNIT(u) (&(u)->meta)

The service-side initialization, for reference:

static void service_init(Unit *u) {
    Service *s = SERVICE(u);

    assert(u);
    assert(u->load_state == UNIT_STUB);

    s->timeout_start_usec = u->manager->default_timeout_start_usec;
    s->timeout_stop_usec = u->manager->default_timeout_stop_usec;
    s->restart_usec = u->manager->default_restart_usec;
    s->type = _SERVICE_TYPE_INVALID;
    s->socket_fd = -1;
    s->bus_endpoint_fd = -1;
    s->guess_main_pid = true;

    RATELIMIT_INIT(s->start_limit, u->manager->default_start_limit_interval, u->manager->default_start_limit_burst);

    s->control_command_id = _SERVICE_EXEC_COMMAND_INVALID;
}

In service_is_suitable_main_pid():
service->meta->cgroup_path is /docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice/nginx.service (single prefix, the correct value).
The pid's cgroup, on the other hand, is read from /proc/<PID>/cgroup:

int cg_pid_get_path(const char *controller, pid_t pid, char **path)
{

// read /proc/<PID>/cgroup
fs = procfs_file_alloca(pid, "cgroup");

The resulting cgroup is "/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice/nginx.service"

Note that manager->cgroup_root at this point is "/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d".

Truncating the cgroup path from right to left:

/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice/nginx.service
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/docker
/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d

The unit found has id = 0x55910789b670 "-.slice".
Printing owner in gdb (p owner) shows id "-.slice", description "Root Slice", and cgroup_path = 0x5591078b40d0 "/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d".

When units get added to manager->cgroup_unit

service_spawn
unit_realize_cgroup
unit_realize_cgroup_now
unit_create_cgroups

The value added to manager->cgroup_unit at that moment is
cgroup_path: "/docker/9fc2f4125a5a54bdc029dc4c4a9a73f6524c9aee41a09b56d6b4cbfd28b3179d/system.slice/nginx.service"
which is the correct value.

This can also be confirmed under /sys/fs/cgroup/systemd/docker/xxx/system.slice:
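
A sketch of that check (xxx stands for the container id, as elsewhere in this article):

ls /sys/fs/cgroup/systemd/docker/xxx/system.slice/
# → ... nginx.service ...   (the unit's cgroup sits at the single-prefix path)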

In other words, the unit derived from the service definition is still correct, but when resolving the unit via /proc/<PID>/cgroup, the cgroup path read back is wrong (e.g. /docker/xx/docker/xx/system.slice/nginx.service, with the prefix repeated twice). systemd then truncates it for the lookup and ends up finding the root slice, -.slice, under the key /docker/xxx.

So the root cause boils down to: when csp-nginx is started, where does the content of /proc/<PID>/cgroup come from, and why is it wrong?

(TODO)
