Invisible OOM in Kubernetes

I recently read an article, Tracking Down “Invisible” OOM Kills in Kubernetes, which describes a case where a process inside a Pod was killed because of insufficient memory, yet the Pod was not restarted and there were no logs or Kubernetes events, only an "Exit Code: 137". That made the problem hard to track down; in the end the cause was found only by checking the node's system logs, which contained messages like:

kernel: Memory cgroup out of memory: Killed process 18661 (helm) total-vm:748664kB, anon-rss:41748kB, file-rss:0kB, shmem-rss:0kB, UID:999 pgtables:244kB oom_score_adj:992

kernel: oom_reaper: reaped process 18661 (helm), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

In that article, the author gives this summary:

When the Linux OOM Killer activated, it selected a process within the container to be killed. Apparently only when OOM selects the container’s init process PID 1 will the container itself be killed and have a status of OOMKilled. If that container was marked as restartable, then Kubernetes would restart the container and then you would see an increase in restart count.

As I’ve seen, when PID 1 is not selected then some other process inside the container is killed. This makes it “invisible” to Kubernetes. If you are not watching the tty console device or scanning kernel logs, you may not know that part of your containers are being killed. Something to consider when you enable container memory limits.

The gist is that only when the Pod's PID 1 is OOM-killed does the container get the OOMKilled status and get restarted; in that case the OOM information is clearly visible.
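As a quick illustration of this "visible" case, here is a minimal sketch (assuming the official `kubernetes` Python client and a reachable kubeconfig; it is not from the original post) that lists containers whose last termination reason is OOMKilled, together with their restart counts:

```python
# Minimal sketch: report containers whose last termination was OOMKilled.
# Assumes the official `kubernetes` Python client and a reachable kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a Pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for cs in (pod.status.container_statuses or []):
        last = cs.last_state.terminated
        if last and last.reason == "OOMKilled":
            # Only OOM kills of the container's PID 1 show up here;
            # kills of other processes inside the container leave no trace in the API.
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container={cs.name} restarts={cs.restart_count} "
                  f"exit_code={last.exit_code}")
```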

In the scenario described, however, the killed process was not PID 1, so neither the container runtime nor Kubernetes recorded anything and the container was not restarted. In that situation the only way to find out what happened is to read the node's system logs.
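For the "invisible" case the only signal lives in the node's kernel log, so one workaround is to scan it yourself. The following is a rough sketch that parses dmesg output for the cgroup OOM-killer lines shown above; it has to run on the node itself (for example as a privileged DaemonSet), and the regex may need adjusting for your kernel version:

```python
# Minimal sketch: surface "invisible" OOM kills by scanning kernel messages on the node.
# The pattern matches the "Memory cgroup out of memory" lines shown above.
import re
import subprocess

OOM_PATTERN = re.compile(
    r"Memory cgroup out of memory: Killed process (?P<pid>\d+) \((?P<comm>[^)]+)\)"
)

def find_cgroup_oom_kills():
    """Return (pid, command) pairs for processes killed by the cgroup OOM killer."""
    out = subprocess.run(["dmesg", "-T"], capture_output=True, text=True, check=True)
    return [(m.group("pid"), m.group("comm"))
            for m in (OOM_PATTERN.search(line) for line in out.stdout.splitlines())
            if m]

if __name__ == "__main__":
    for pid, comm in find_cgroup_oom_kills():
        print(f"cgroup OOM kill: pid={pid} comm={comm}")
```

Running something like this periodically (or shipping the kernel log to your logging pipeline) at least makes these kills observable, even though Kubernetes itself stays silent.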

The article also proposes one way to mitigate the problem: VPA (Vertical Pod Autoscaler).
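The article itself only names VPA; as a hedged illustration of what that could look like, here is a sketch that creates a VerticalPodAutoscaler for a hypothetical Deployment named my-deployment via the CustomObjectsApi. It assumes the VPA CRDs and controller are already installed in the cluster:

```python
# Minimal sketch: create a VerticalPodAutoscaler so resource requests are adjusted
# automatically instead of processes being OOM-killed under a too-small limit.
# Assumes the VPA CRDs/controller are installed; "my-deployment" is a placeholder.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

vpa = {
    "apiVersion": "autoscaling.k8s.io/v1",
    "kind": "VerticalPodAutoscaler",
    "metadata": {"name": "my-deployment-vpa"},
    "spec": {
        "targetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "my-deployment"},
        "updatePolicy": {"updateMode": "Auto"},
    },
}

api.create_namespaced_custom_object(
    group="autoscaling.k8s.io",
    version="v1",
    namespace="default",
    plural="verticalpodautoscalers",
    body=vpa,
)
```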

PS

I have run into a similar problem before: when it happened there was nothing but an "Exit Code: 137". The Pod kept running, with no error logs or events, but in reality one of the processes inside it had been killed and it could no longer do its work.

出现"被隐藏的OOM"的原因可能是Pod中单独启动了多个独立的进程(进程间无父子关系),在我的场景中就是单独启动了一个脚本进程,当内存不足的时候会导致kill脚本进程。因此还有一种解决思路就是,如果要启动多个独立的进程,还可以将其作为sidecar方式,避免出现这种问题。

posted @ 2022-08-09 10:35  charlieroro