「Kubernetes Objects」- Managing Compute Resources (Study Notes)



Kubernetes v1.16/Managing Compute Resources for Containers
Kubernetes v1.16/Assign Memory Resources to Containers and Pods
Kubernetes v1.16/Assign CPU Resources to Containers and Pods


CPU and memory

CPU and memory are each a resource type. CPU and memory are collectively referred to as compute resources, or just resources.

CPU is specified in units of cores

memory is specified in units of bytes

If you’re using Kubernetes v1.14 or newer, you can specify huge page resources.

Local ephemeral storage

Kubernetes version 1.8 introduces a new resource, ephemeral-storage for managing local ephemeral storage.

In each Kubernetes node, kubelet’s root directory (/var/lib/kubelet by default) and log directory (/var/log) are stored on the root partition of the node. This partition is also shared and consumed by Pods via emptyDir volumes, container logs, image layers and container writable layers.

Local ephemeral storage management only applies for the root partition; the optional partition for image layer and writable layer is out of scope.

Resource requests and limits of Pod and Container

Each Container of a Pod can specify one or more of the following:

spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory

A Pod resource request/limit for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod. => There is no Pod-level resource field; requests and limits are defined on the Containers, and the Pod's request/limit is simply the sum of its Containers'.
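As a minimal sketch (the Pod name, container names, and images are illustrative), a two-container Pod whose effective Pod-level request and limit are the sums of both containers' values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend            # illustrative name
spec:
  containers:
  - name: app
    image: example.com/app:v1          # illustrative image
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
  - name: log-aggregator
    image: example.com/log-agg:v1      # illustrative image
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
      limits:
        cpu: "500m"
        memory: "128Mi"
# Effective Pod request: cpu 500m, memory 128Mi
# Effective Pod limit:   cpu 1,    memory 256Mi
```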

Meaning of CPU

Limits and requests for CPU resources are measured in cpu units.

One cpu, in Kubernetes, is equivalent to: 1 AWS vCPU / 1 GCP Core / 1 Azure vCore / 1 IBM vCPU / 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

A Container with spec.containers[].resources.requests.cpu of 0.5 is guaranteed half as much CPU as one that asks for 1 CPU. The expression 0.1 is equivalent to the expression 100m, which can be read as "one hundred millicpu".

Precision finer than 1m is not allowed. For this reason, the form 100m might be preferred.

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.
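A tiny sketch of this equivalence (a hypothetical helper, not part of any Kubernetes client library): both the decimal form and the millicpu form denote the same absolute quantity.

```python
from decimal import Decimal

def cpu_to_millicores(quantity: str) -> int:
    """Convert a CPU quantity string ("0.5", "1", "100m") to millicores."""
    if quantity.endswith("m"):
        # Already in millicpu; precision finer than 1m is not allowed.
        return int(quantity[:-1])
    # Plain or fractional core value; 1 core == 1000 millicores.
    return int(Decimal(quantity) * 1000)

# 0.1 CPU and 100m are the same amount on any machine.
assert cpu_to_millicores("0.1") == cpu_to_millicores("100m") == 100
```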

Meaning of memory

memory is specified in units of bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.
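The suffixes can be sketched as plain multipliers (a hypothetical helper for illustration; real clients use the Kubernetes resource.Quantity type): decimal suffixes are powers of ten, the "-i" suffixes are powers of two.

```python
# Decimal (power-of-ten) and binary (power-of-two) suffix multipliers.
SUFFIXES = {
    "E": 1000**6, "P": 1000**5, "T": 1000**4,
    "G": 1000**3, "M": 1000**2, "k": 1000,
    "Ei": 1024**6, "Pi": 1024**5, "Ti": 1024**4,
    "Gi": 1024**3, "Mi": 1024**2, "Ki": 1024,
}

def memory_to_bytes(quantity: str) -> int:
    """Convert a memory quantity string ("128Mi", "64M", "1024") to bytes."""
    # Try longer (two-character) suffixes before one-character ones.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * SUFFIXES[suffix]
    return int(quantity)  # plain integer means bytes

# 128Mi (2^20-based) is more bytes than 128M (10^6-based).
assert memory_to_bytes("128Mi") > memory_to_bytes("128M")
```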

How Pods with resource requests are scheduled

The scheduler ensures that, for each resource type, the sum of the resource requests of the scheduled Containers is less than the capacity of the node.

Note that even though actual memory or CPU usage on a node may be very low, the scheduler still refuses to place a Pod on the node if the capacity check fails.
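The capacity check can be sketched like this (illustrative only, not the real scheduler code): the decision uses requests, never observed usage, which is why a nearly idle node can still refuse a Pod.

```python
def fits_on_node(node_capacity: dict, scheduled_requests: list, pod_request: dict) -> bool:
    """True if, for every resource type, the existing requests plus the new
    Pod's request stay within the node's capacity."""
    for resource, capacity in node_capacity.items():
        used = sum(r.get(resource, 0) for r in scheduled_requests)
        if used + pod_request.get(resource, 0) > capacity:
            return False
    return True

node = {"cpu": 2000, "memory": 4 * 1024**3}      # 2 cores in millicores, 4Gi
running = [{"cpu": 1500, "memory": 1024**3}]     # requests already placed
assert fits_on_node(node, running, {"cpu": 400, "memory": 1024**3})
# Refused on the cpu check, regardless of how idle the node actually is:
assert not fits_on_node(node, running, {"cpu": 600, "memory": 1024**3})
```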

How Pods with resource limits are run

The spec.containers[].resources.requests.cpu is converted to its core value, which is potentially fractional, and multiplied by 1024. The greater of this number or 2 is used as the value of the --cpu-shares flag in the docker run command.

The spec.containers[].resources.limits.cpu is converted to its millicore value and multiplied by 100. The resulting value is the total amount of CPU time that a container can use every 100ms. A container cannot use more than its share of CPU time during this interval.

The spec.containers[].resources.limits.memory is converted to an integer, and used as the value of the --memory flag in the docker run command.
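The two CPU conversions above can be sketched as arithmetic (hypothetical helpers for illustration): requests map to relative CFS shares, limits map to an absolute CPU-time quota per 100ms period.

```python
def cpu_shares(request_cpu_cores: float) -> int:
    """--cpu-shares: the core value multiplied by 1024, floored at 2."""
    return max(int(request_cpu_cores * 1024), 2)

def cpu_quota_us(limit_cpu_millicores: int) -> int:
    """CPU time (in microseconds) the container may use per 100ms period:
    the millicore value multiplied by 100."""
    return limit_cpu_millicores * 100

assert cpu_shares(0.5) == 512
assert cpu_shares(0) == 2            # floor of 2 even with no request
assert cpu_quota_us(500) == 50_000   # half a core: 50ms of CPU per 100ms
```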

If a Container exceeds its memory limit, it might be terminated.

If a Container exceeds its memory request, it is likely that its Pod will be evicted whenever the node runs out of memory.

A Container might or might not be allowed to exceed its CPU limit for extended periods of time. However, it will not be killed for excessive CPU usage.

Monitoring compute resource usage

I did not find a built-in way to view usage directly; the Heapster add-on may need to be installed.

# 06/24/2020 update: iteration really is that fast. Per the Heapster Deprecation Timeline, metrics-server is now used instead.


My Pods are pending with event message failedScheduling


Use kubectl describe pod frontend and check the Events section; output like the following indicates insufficient resources:

  FirstSeen  LastSeen  Count  From         SubobjectPath  Reason            Message
  36s        5s        6      {scheduler }                FailedScheduling  Failed for reason PodExceedsFreeCPU and possibly others

Alternatively, run kubectl describe nodes <node-name>. In its output: Capacity is the node's total resources; Allocatable is what remains for Pods after system daemons take their share; Allocated resources is what has already been requested; so Allocatable minus Allocated resources is what is still available for new Pods.

Fixes: 1) add nodes to the cluster; 2) terminate Pods that do not need to run; 3) check the Pod's resource requests, e.g. a Pod requesting cpu: 1.1 can never fit on a node with cpu: 1.

1) Node Allocatable Resources
2) Resource Quotas => limit resource consumption within a namespace

My Container is terminated


Use kubectl describe pod to check the Pod's status.

Use kubectl get pod -o go-template= to inspect the state of the previously terminated container:

kubectl get pod -o go-template='{{range .status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' "<pod-name>"

Local ephemeral storage

Limits and requests for ephemeral-storage are measured in bytes. You can express storage as a plain integer or as a fixed-point integer using one of these suffixes: E, P, T, G, M, k. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki.

Requests and limits setting for local ephemeral storage

Each Container of a Pod can specify one or more of the following:

spec.containers[].resources.limits.ephemeral-storage
spec.containers[].resources.requests.ephemeral-storage
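For example (a minimal sketch; the Pod name and image are illustrative), a container requesting and capping local ephemeral storage:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend            # illustrative name
spec:
  containers:
  - name: app
    image: example.com/app:v1      # illustrative image
    resources:
      requests:
        ephemeral-storage: "2Gi"
      limits:
        ephemeral-storage: "4Gi"
```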

How Pods with ephemeral-storage requests are scheduled

Same as before: 1) the node must have enough ephemeral storage; 2) the sum of the ephemeral-storage requests of all scheduled Pods must be less than the node's capacity.

How Pods with ephemeral-storage limits run

For container-level isolation, if a Container’s writable layer and logs usage exceeds its storage limit, the Pod will be evicted.

For pod-level isolation, if the sum of the local ephemeral storage usage from all containers and also the Pod’s emptyDir volumes exceeds the limit, the Pod will be evicted.

Monitoring ephemeral-storage consumption

When local ephemeral storage is used, it is monitored on an ongoing basis by the kubelet. The monitoring is performed by scanning each emptyDir volume, log directories, and writable layers on a periodic basis.

Starting with Kubernetes 1.15, emptyDir volumes (but not log directories or writable layers) may, at the cluster operator’s option, be managed by use of project quotas. Quotas are faster and more accurate than directory scanning.


Extended resources



Kubernetes v1.16/Reserve Compute Resources for System Daemons


Kubernetes v1.16/Managing Compute Resources for Containers