k8s Getting Started Overview

Nodes in k8s fall into master and node (worker) roles; the master provides the API for operating the cluster.


Master:

  • API Server: interacts with the outside world
  • Scheduler: scheduling
  • Controller: control loops, e.g. keeping the actual number of deployed pods matched to the desired number
  • etcd: stores the cluster data


Node:

  • Pod: a group of containers sharing the same namespaces
  • Docker: the implementation of the container technology
  • kubelet: the bridge through which the Master controls the Node
  • kube-proxy: networking, load balancing for services
  • Fluentd: log collection, storage, and querying
  • Optional add-ons: DNS, UI, etc.
  • Image Registry: the image registry service


k8s Installation

Installing minikube

Official installation tools page

  1. A virtualization technology is required; VirtualBox can be used, so install VirtualBox first.

  2. Install kubectl. Note that the end of that page describes shell autocompletion for the kubectl command, which currently supports only bash and zsh.

  3. Install minikube

    • minikube is similar to vagrant and docker-machine: it relies on a virtualization backend to create a VM and then installs the Docker and k8s software inside it. Images are pulled during installation, and the official minikube may pull some images hosted on Google servers, which runs into network problems; this is the workaround provided by Alibaba Cloud.
  4. Create a k8s cluster with minikube

    minikube start --registry-mirror=https://8awf7z4n.mirror.aliyuncs.com
    
  5. Use kubectl to view the cluster's context information:

    View the detailed cluster configuration:

    D:\k8s>kubectl config view
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority: C:\Users\john\.minikube\ca.crt
        server: https://192.168.99.100:8443
      name: minikube
    contexts:
    - context:
        cluster: minikube
        user: minikube
      name: minikube
    current-context: minikube
    kind: Config
    preferences: {}
    users:
    - name: minikube
      user:
        client-certificate: C:\Users\john\.minikube\profiles\minikube\client.crt
        client-key: C:\Users\john\.minikube\profiles\minikube\client.key
    

    List all contexts:

    D:\k8s>kubectl config get-contexts
    CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
    *         minikube   minikube   minikube
    

    Check the cluster status:

    D:\k8s>kubectl cluster-info
    Kubernetes master is running at https://192.168.99.100:8443
    KubeDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

    Get brief and detailed information about the cluster nodes:

    D:\docker\docker\chapter9\labs\deployment>kubectl get node
    NAME       STATUS   ROLES    AGE   VERSION
    minikube   Ready    master   14h   v1.19.2
    
    D:\docker\docker\chapter9\labs\deployment>kubectl get node -o wide
    NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION   CONTAINER-RUNTIME
    minikube   Ready    master   14h   v1.19.2   192.168.99.100   <none>        Buildroot 2019.02.11   4.19.114         docker://19.3.12
    
  6. Enter the cluster VM with minikube ssh, similar to vagrant

    D:\k8s>minikube ssh
                             _             _
                _         _ ( )           ( )
      ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
    /' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
    | ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
    (_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
    
    $ docker version
    Client: Docker Engine - Community
     Version:           19.03.12
     API version:       1.40
     Go version:        go1.13.10
     Git commit:        48a66213fe
     Built:             Mon Jun 22 15:42:53 2020
     OS/Arch:           linux/amd64
     Experimental:      false
    
    Server: Docker Engine - Community
     Engine:
      Version:          19.03.12
      API version:      1.40 (minimum version 1.12)
      Go version:       go1.13.10
      Git commit:       48a66213fe
      Built:            Mon Jun 22 15:49:35 2020
      OS/Arch:          linux/amd64
      Experimental:     false
     containerd:
      Version:          v1.2.13
      GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
     runc:
      Version:          1.0.0-rc10
      GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
     docker-init:
      Version:          0.18.0
      GitCommit:        fec3683
    $
    

Installing with kubeadm

Creating VMs with vagrant

Create one master node and two worker nodes. Here is the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.require_version ">= 1.6.0"

boxes = [
    {
        :name => "k8s-master",
        :eth1 => "192.168.205.120",
        :mem => "2048",
        :cpu => "2",
        :ssh_port => "2213"
    },
    {
        :name => "k8s-node1",
        :eth1 => "192.168.205.11",
        :mem => "2048",
        :cpu => "1",
        :ssh_port => "2214"
    },
    {
        :name => "k8s-node2",
        :eth1 => "192.168.205.12",
        :mem => "2048",
        :cpu => "1",
        :ssh_port => "2215"
    }
]

Vagrant.configure(2) do |config|

  config.vm.box = "centos/7"

  boxes.each do |opts|
      config.vm.define opts[:name] do |config|
        config.vm.hostname = opts[:name]
        config.vm.provider "vmware_fusion" do |v|
          v.vmx["memsize"] = opts[:mem]
          v.vmx["numvcpus"] = opts[:cpu]
        end

        config.vm.provider "virtualbox" do |v|
          v.customize ["modifyvm", :id, "--memory", opts[:mem]]
          v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
        end
        
        config.vm.network :forwarded_port, guest: 22, host: opts[:ssh_port]
        config.vm.network :private_network, ip: opts[:eth1]
      end
  end

  config.vm.synced_folder "./labs", "/home/vagrant/labs"
  config.vm.provision "shell", privileged: true, path: "./setup.sh"

end

The setup.sh script referenced in the Vagrantfile:

#!/bin/sh

sudo yum install wget -y

sudo mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

sudo wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo

sudo yum makecache

# install some tools
sudo yum install -y git vim gcc glibc-static telnet bridge-utils brctl bind-utils

# install docker
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh

# start docker service
# sudo groupadd docker
sudo usermod -aG docker vagrant
sudo systemctl start docker

rm -rf get-docker.sh

sudo su
echo root| passwd root --stdin
sed "/^PasswordAuthentication no/c PasswordAuthentication yes" /etc/ssh/sshd_config > tmp_file
mv tmp_file /etc/ssh/sshd_config
rm -f tmp_file

systemctl restart sshd

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://8awf7z4n.mirror.aliyuncs.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'

sudo setenforce 0

sudo yum install -y kubelet kubeadm kubectl

sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
sudo sysctl --system

sudo systemctl enable kubelet && sudo systemctl start kubelet

Run vagrant to create the VMs:

vagrant up

Once it finishes, enter each of the three VMs and verify the installation with the following commands:

[root@k8s-master ~]# which kubelet
/usr/bin/kubelet
[root@k8s-master ~]# which kubeadm
/usr/bin/kubeadm
[root@k8s-master ~]# which kubectl
/usr/bin/kubectl
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:03:45 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.13
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       4484c46d9d
  Built:            Wed Sep 16 17:02:21 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.7
  GitCommit:        8fba4e9a7d01810a393d5d25a3621dc101981175
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Initializing the cluster with kubeadm

--pod-network-cidr specifies the network segment for the pods created in the cluster; --apiserver-advertise-address specifies the address the cluster advertises, i.e. the address at which this host, acting as master, can be reached by the other nodes and by the API, so it must be an address on the same network segment as the other nodes; --v=5 sets the log verbosity; --image-repository specifies where to pull images from, here the Alibaba Cloud registry, since by default it would pull from Google.

[root@k8s-master ~]# kubeadm init --pod-network-cidr 172.100.0.0/16 --apiserver-advertise-address 192.168.205.120 --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --v=5
I1023 04:32:50.960205     889 initconfiguration.go:103] detected and using CRI socket: /var/run/dockershim.sock
I1023 04:32:51.080327     889 version.go:183] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
W1023 04:32:52.697343     889 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
I1023 04:32:52.697935     889 checks.go:577] validating Kubernetes and kubeadm version

... ...

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.205.120:6443 --token eaxpot.07q5bn1h6etqawca \
    --discovery-token-ca-cert-hash sha256:7445ed26891dd725b679b837cd2752563672c12f671a61d23f8f513efd805877
[root@k8s-master ~]#   mkdir -p $HOME/.kube
[root@k8s-master ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master ~]#

Check the pods

[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-6c76c8bb89-c5dcd             0/1     Pending   0          12m
kube-system   coredns-6c76c8bb89-n5hqc             0/1     Pending   0          12m
kube-system   etcd-k8s-master                      1/1     Running   0          12m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          12m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          12m
kube-system   kube-proxy-ctz76                     1/1     Running   0          92s
kube-system   kube-proxy-qbjjx                     1/1     Running   0          12m
kube-system   kube-scheduler-k8s-master            1/1     Running   0          12m

Installing a network plugin

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Joining the worker nodes

Run the following command on each worker node to join the cluster:

  • node1

    [root@k8s-node1 ~]# kubeadm join 192.168.205.120:6443 --token eaxpot.07q5bn1h6etqawca \
    >     --discovery-token-ca-cert-hash sha256:7445ed26891dd725b679b837cd2752563672c12f671a61d23f8f513efd805877
    [preflight] Running pre-flight checks
    	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@k8s-node1 ~]#
    
  • node2

    [root@k8s-node2 ~]# kubeadm join 192.168.205.120:6443 --token eaxpot.07q5bn1h6etqawca \
    >     --discovery-token-ca-cert-hash sha256:7445ed26891dd725b679b837cd2752563672c12f671a61d23f8f513efd805877
    [preflight] Running pre-flight checks
    	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

Back on the master node, check the status of the cluster nodes:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   16m     v1.19.3
k8s-node1    Ready      <none>   5m10s   v1.19.3
k8s-node2    NotReady   <none>   20s     v1.19.3

Other reference installations

Note: the following commands need to be run as root.

Installing docker

First make sure Docker is installed; if not, the following script installs and configures it quickly:

curl -fsSL https://get.docker.com | sudo sh -s -- --mirror Aliyun
sudo usermod -aG docker $USER
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://t9ab0rkd.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

kubeadm

Installing the k8s packages

# add and trust the APT signing key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

# add the repository source
add-apt-repository "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"

# update the sources and install the latest kubernetes
sudo apt update && apt install -y kubelet kubeadm kubectl

# add shell completion; best appended to .bashrc
source <(kubectl completion bash)
source <(kubeadm completion bash)

Disabling swap

For performance reasons, k8s requires swap to be disabled; reboot the host afterwards.

Find the line containing swap in /etc/fstab and comment it out.

$ vim /etc/fstab
# UUID=9224d95f-cd87-4b56-b249-3dc7de4491d3 none            swap    sw              0       0

Start the master node:

kubeadm init --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'

--image-repository specifies the registry for the control-plane container images; here the Aliyun mirror is used instead of the default k8s.gcr.io, which avoids download failures.

If init fails, check whether swap is disabled, whether you are root, whether the core component images downloaded successfully (possibly a network problem), and whether the machine has at least 2G of memory; then run kubeadm reset followed by kubeadm init again.

Configuring the kubeconfig path

# append to .bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

Installing a network plugin

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Joining workers

To have a worker node join the master, run on the worker the command printed at the end of kubeadm init:

kubeadm join 192.168.199.117:6443 --token y8l6qv.oj2hxua9szguei23 \
--discovery-token-ca-cert-hash sha256:bae71d8fb4a26c5f29a6df2db037e08e581fcb344ff85089a603e3eeb9d6d26f

The --token is generated temporarily and can be retrieved with the following command:

$ kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
y8l6qv.oj2hxua9szguei23   23h      2019-09-09T12:04:27+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

--discovery-token-ca-cert-hash refers to the hash of the CA certificate, which can be obtained like this:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | sha256sum | awk '{print $1}'
3e77f845edf944d76234a6d78dde3e5bae3e50261362b1d8cc8d025ac97136b0

Viewing nodes

Run on the master node:

kubectl get nodes

minikube

Mirrors inside China

On the minikube documentation page, pick your operating system and download minikube; mind the version number.

Download kubectl from the k8s kubectl page and put it on your $PATH; mind the version number.

Download and install VirtualBox.

The start command:

minikube start \
--vm-driver=virtualbox \
--image-mirror-country=cn \
--registry-mirror='https://t9ab0rkd.mirror.aliyuncs.com' \
--image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' \
--iso-url='https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.5.1.iso'

  • --image-mirror-country=cn: use registry.cn-hangzhou.aliyuncs.com/google_containers as the default container image repository when installing Kubernetes
  • --iso-url=: download the .iso file from the Aliyun mirror address
  • --cpus=2: number of CPU cores to allocate to the minikube VM
  • --memory=2000mb: amount of memory to allocate to the minikube VM
  • --kubernetes-version=: the kubernetes version the minikube VM will use

Official sources

# k8s
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

# docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# install via apt
apt-get update && apt-get install docker-ce kubeadm

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

systemctl daemon-reload
systemctl restart docker

Further reading

https://juejin.im/post/6844903668269973511

https://cloud.tencent.com/developer/article/1525487

Configuring kubectl to operate multiple k8s clusters

See the official documentation.

kubectl reads a cluster's connection information from files in a format like the following:

contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: ramp
    user: developer
  name: dev-ramp-up
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch

  • After installing a cluster with minikube or with kubeadm, a .kube/config file is created under the current user's $HOME on the minikube host and on the kubeadm host respectively. This is the file created by default for kubectl to read, and it contains the cluster connection information. In other words, kubectl reads this path by default.

  • You can also change this default behavior with the KUBECONFIG environment variable by pointing it at the directories of your config files; multiple files are separated by colons on Linux and macOS and by semicolons on Windows.

  • The config file can also be generated through kubectl config commands.

Once kubectl can read the information of several clusters through the steps above, kubectl config use-context <context-name> switches between the contexts of the different clusters, as sketched below.
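
A minimal sketch of both mechanisms, assuming a second, hypothetical config file named config-dev alongside the default one:

# point kubectl at several config files (colon-separated on Linux/macOS)
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/config-dev

# list every context kubectl now knows about
kubectl config get-contexts

# switch to one of them (the context name here is illustrative)
kubectl config use-context dev-frontend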

Node

Each cluster has multiple contexts under it; a context may involve multiple Nodes, and a Node is a physical host or a VM. After kubectl switches to a context, you can list the nodes it involves:

[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   9h    v1.19.3
k8s-node1    NotReady   <none>   9h    v1.19.3
k8s-node2    NotReady   <none>   9h    v1.19.3
[root@k8s-master ~]# kubectl get node -o wide
NAME         STATUS     ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-master   Ready      master   20h   v1.19.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.13
k8s-node1    NotReady   <none>   20h   v1.19.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.13
k8s-node2    NotReady   <none>   20h   v1.19.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   docker://19.3.13

kubectl get node -o yaml prints more information in yaml format (the same information is also available as json with -o json):

[root@k8s-master ~]# kubectl get node -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Node
  metadata:
    annotations:
      kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
      node.alpha.kubernetes.io/ttl: "0"
      volumes.kubernetes.io/controller-managed-attach-detach: "true"
    creationTimestamp: "2020-10-23T04:33:49Z"
    labels:
      beta.kubernetes.io/arch: amd64
      beta.kubernetes.io/os: linux
      kubernetes.io/arch: amd64
      kubernetes.io/hostname: k8s-master
      kubernetes.io/os: linux
      node-role.kubernetes.io/master: ""
    managedFields:
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:kubeadm.alpha.kubernetes.io/cri-socket: {}
          f:labels:
            f:node-role.kubernetes.io/master: {}
      manager: kubeadm
      operation: Update
      time: "2020-10-23T04:33:52Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:node.alpha.kubernetes.io/ttl: {}
          f:labels:
            f:beta.kubernetes.io/arch: {}
            f:beta.kubernetes.io/os: {}
        f:spec:
          f:podCIDR: {}
          f:podCIDRs:
            .: {}
            v:"172.100.0.0/24": {}
          f:taints: {}
      manager: kube-controller-manager
      operation: Update
      time: "2020-10-23T04:48:39Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions:
            k:{"type":"NetworkUnavailable"}:
              .: {}
              f:lastHeartbeatTime: {}
              f:lastTransitionTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
              f:type: {}
      manager: kube-utils
      operation: Update
      time: "2020-10-23T14:12:48Z"
    - apiVersion: v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:volumes.kubernetes.io/controller-managed-attach-detach: {}
          f:labels:
            .: {}
            f:kubernetes.io/arch: {}
            f:kubernetes.io/hostname: {}
            f:kubernetes.io/os: {}
        f:status:
          f:addresses:
            .: {}
            k:{"type":"Hostname"}:
              .: {}
              f:address: {}
              f:type: {}
            k:{"type":"InternalIP"}:
              .: {}
              f:address: {}
              f:type: {}
          f:allocatable:
            .: {}
            f:cpu: {}
            f:ephemeral-storage: {}
            f:hugepages-2Mi: {}
            f:memory: {}
            f:pods: {}
          f:capacity:
            .: {}
            f:cpu: {}
            f:ephemeral-storage: {}
... ...

Use kubectl describe node <node-name> to view detailed information:

[root@k8s-master ~]# kubectl describe node k8s-master
Name:               k8s-master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 23 Oct 2020 04:33:49 +0000
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master
  AcquireTime:     <unset>
  RenewTime:       Sat, 24 Oct 2020 00:58:28 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Fri, 23 Oct 2020 14:12:48 +0000   Fri, 23 Oct 2020 14:12:48 +0000   WeaveIsUp                    Weave pod has set this
  MemoryPressure       False   Sat, 24 Oct 2020 00:55:03 +0000   Fri, 23 Oct 2020 04:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 24 Oct 2020 00:55:03 +0000   Fri, 23 Oct 2020 04:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sat, 24 Oct 2020 00:55:03 +0000   Fri, 23 Oct 2020 04:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sat, 24 Oct 2020 00:55:03 +0000   Fri, 23 Oct 2020 04:48:39 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    k8s-master
Capacity:
  cpu:                2
  ephemeral-storage:  41921540Ki
  hugepages-2Mi:      0
  memory:             1881936Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  38634891201
  hugepages-2Mi:      0
  memory:             1779536Ki
  pods:               110
System Info:
  Machine ID:                 db0b07db41e673489398574c36ad7aa0
  System UUID:                DB0B07DB-41E6-7348-9398-574C36AD7AA0
  Boot ID:                    814e4cf8-5526-42fd-ae20-1afe15ee387f
  Kernel Version:             3.10.0-1127.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.19.3
  Kube-Proxy Version:         v1.19.3
PodCIDR:                      172.100.0.0/24
PodCIDRs:                     172.100.0.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                  ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6c76c8bb89-c5dcd              100m (5%)     0 (0%)      70Mi (4%)        170Mi (9%)     20h
  kube-system                 coredns-6c76c8bb89-n5hqc              100m (5%)     0 (0%)      70Mi (4%)        170Mi (9%)     20h
  kube-system                 etcd-k8s-master                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
  kube-system                 kube-apiserver-k8s-master             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20h
  kube-system                 kube-controller-manager-k8s-master    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20h
  kube-system                 kube-proxy-qbjjx                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
  kube-system                 kube-scheduler-k8s-master             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20h
  kube-system                 weave-net-k777j                       100m (5%)     0 (0%)      200Mi (11%)      0 (0%)         20h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                850m (42%)   0 (0%)
  memory             340Mi (19%)  340Mi (19%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>

Labels

The describe output above shows the node's Labels property:

Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=

Labels exist as key-value pairs and can be used as filter conditions; kubectl get node --show-labels lists them:

[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS     ROLES    AGE   VERSION   LABELS
k8s-master   Ready      master   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux

Adding and removing labels

[root@k8s-master ~]# kubectl label node k8s-master env=test
node/k8s-master labeled
[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS     ROLES    AGE   VERSION   LABELS
k8s-master   Ready      master   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env=test,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]# kubectl label node k8s-master env-
node/k8s-master labeled
[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS     ROLES    AGE   VERSION   LABELS
k8s-master   Ready      master   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]#

node-role

ROLES comes from a special label, "node-role.kubernetes.io".

[root@k8s-master ~]# kubectl get node --show-labels
NAME         STATUS     ROLES    AGE   VERSION   LABELS
k8s-master   Ready      master   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node1    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    NotReady   <none>   20h   v1.19.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
[root@k8s-master ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=
node/k8s-node1 labeled
[root@k8s-master ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=
node/k8s-node2 labeled
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   20h   v1.19.3
k8s-node1    NotReady   worker   20h   v1.19.3
k8s-node2    NotReady   worker   20h   v1.19.3

The smallest unit of scheduling: Pod

A Pod holds one or more Containers; a Pod has exactly one namespace, and all of its Containers share it.


Creating a pod

Below is a k8s-format yml file from which a resource can be created:

apiVersion: v1
kind: Pod # the resource type is Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers: # may contain multiple containers
  - name: nginx # container name
    image: nginx # image
    ports: # ports exposed by the container
    - containerPort: 80

Create the resource with kubectl create, which creates whatever resource type is declared in the file passed via -f; the kubectl get pods command shows the pod status, here still being created:

D:\docker\docker\chapter9\labs\pod-basic>kubectl create -f pod_nginx.yml
pod/nginx created

D:\docker\docker\chapter9\labs\pod-basic>kubectl get pods
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          13s

A moment later, creation completes:

D:\docker\docker\chapter9\labs\pod-basic>kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          10m

More detailed output shows the IP address "172.17.0.2" of the pod named nginx and its host node "minikube":

D:\docker\docker\chapter9\labs\pod-basic>kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          12m   172.17.0.2   minikube   <none>           <none>

Two ways to get into the container

  • Enter the minikube host with minikube ssh and work from there:

    D:\docker\docker\chapter9\labs\pod-basic>minikube ssh
                             _             _
                _         _ ( )           ( )
      ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
    /' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
    | ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
    (_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
    
    $ docker ps | grep nginx # find the nginx container's id
    701409f023de        nginx                  "/docker-entrypoint.…"   7 minutes ago       Up 7 minutes                            k8s_nginx_nginx_default_d94d0dba-f5c4-42a6-ac43-ac58d1f9afe0_0
    $ docker exec -it 701409f023de sh # and we are inside
    # exit
    

    Inspecting the network shows the container's IP "172.17.0.2/16", exactly the IP reported by kubectl get pods -o wide above:

    $ docker network ls
    NETWORK ID          NAME                DRIVER              SCOPE
    f557efad83b5        bridge              bridge              local
    532631c87d8b        host                host                local
    66c4b8c8efa3        none                null                local
    $ docker network inspect f557efad83b5
    [
        {
            "Name": "bridge",
            "Id": "f557efad83b5947aa44cbe3989e93ded56da77ecddc42e98f3d469dea389c2a0",
            "Created": "2020-10-22T09:00:27.247244957Z",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": null,
                "Config": [
                    {
                        "Subnet": "172.17.0.0/16",
                        "Gateway": "172.17.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {
                "03a004aa1e27244812acedddb03dcbc471ff19472a05c53b292897cbaa704963": {
                    "Name": "k8s_POD_nginx_default_d94d0dba-f5c4-42a6-ac43-ac58d1f9afe0_1",
                    "EndpointID": "d5a75a4f94133680a249472e9389c7cb92108262ef9b13e4be0ee13c77bf14ec",
                    "MacAddress": "02:42:ac:11:00:02",
                    "IPv4Address": "172.17.0.2/16",
                    "IPv6Address": ""
                },
                "05839cdb2c6b26ca0865bed8cbbcdff77a2056d6972d25627e3b11b0a13e071e": {
                    "Name": "k8s_POD_coredns-f9fd979d6-gsjlm_kube-system_c2971377-7640-4c2d-bf5b-bbb3a2c4fa41_2",
                    "EndpointID": "0a8539bd0e679a9cc7fc1eef47c3c393031a3a042503d75bbce6ce27600e7c9a",
                    "MacAddress": "02:42:ac:11:00:03",
                    "IPv4Address": "172.17.0.3/16",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "com.docker.network.bridge.default_bridge": "true",
                "com.docker.network.bridge.enable_icc": "true",
                "com.docker.network.bridge.enable_ip_masquerade": "true",
                "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
                "com.docker.network.bridge.name": "docker0",
                "com.docker.network.driver.mtu": "1500"
            },
            "Labels": {}
        }
    ]
    
  • Enter the container directly with kubectl exec -it

    D:\docker\docker\chapter9\labs\pod-basic>kubectl exec -it nginx -- sh
    #
    

    Use -c <container-name> to specify which container of the pod to enter; if not specified, the first container is entered by default.

Viewing more detailed pod information

View it with the kubectl describe command:

D:\docker\docker\chapter9\labs\pod-basic>kubectl describe pods nginx
Name:         nginx
Namespace:    default
Priority:     0
Node:         minikube/192.168.99.100
Start Time:   Thu, 22 Oct 2020 19:22:57 +0800
Labels:       app=nginx
Annotations:  <none>
Status:       Running
IP:           172.17.0.2
IPs:
  IP:  172.17.0.2
Containers:
  nginx:
    Container ID:   docker://701409f023de62e508b79b1cd95a70e2ce96cc8e4ab084c4cc4f5cc0fe82ca98
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:ed7f815851b5299f616220a63edac69a4cc200e7f536a56e421988da82e44ed8
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 22 Oct 2020 19:30:47 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fc5qj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-fc5qj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fc5qj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       24m                default-scheduler  Successfully assigned default/nginx to minikube
  Warning  Failed          17m                kubelet, minikube  Failed to pull image "nginx": rpc error: code = Unknown desc = unexpected EOF
  Warning  Failed          17m                kubelet, minikube  Error: ErrImagePull
  Normal   SandboxChanged  17m                kubelet, minikube  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         17m (x3 over 17m)  kubelet, minikube  Back-off pulling image "nginx"
  Warning  Failed          17m (x3 over 17m)  kubelet, minikube  Error: ImagePullBackOff
  Normal   Pulling         17m (x2 over 24m)  kubelet, minikube  Pulling image "nginx"
  Normal   Pulled          17m                kubelet, minikube  Successfully pulled image "nginx" in 8.817994182s
  Normal   Created         17m                kubelet, minikube  Created container nginx
  Normal   Started         17m                kubelet, minikube  Started container nginx

Port mapping

Inside minikube, the nginx container can be reached directly by its IP address:

D:\docker\docker\chapter9\labs\pod-basic>minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.242 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.088 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.088/0.165/0.242 ms
$ curl 172.17.0.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
$

Next we want to reach this container from outside minikube. First check minikube's IPs; the host-only address can be reached from outside:

$ ip a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
    inet 192.168.99.100/24 brd 192.168.99.255 scope global dynamic eth1
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
$ exit
logout
ssh: exit status 127

D:\docker\docker\chapter9\labs\pod-basic>ping 192.168.99.100

正在 Ping 192.168.99.100 具有 32 字节的数据:
来自 192.168.99.100 的回复: 字节=32 时间<1ms TTL=64
来自 192.168.99.100 的回复: 字节=32 时间<1ms TTL=64

192.168.99.100 的 Ping 统计信息:
    数据包: 已发送 = 2,已接收 = 2,丢失 = 0 (0% 丢失),
往返行程的估计时间(以毫秒为单位):
    最短 = 0ms,最长 = 0ms,平均 = 0ms
Control-C
^C

The kubectl port-forward <pod-name> <local-port>:<container-port> command maps a port of the given pod; the command below maps nginx's port 80 to port 8080 on the host where kubectl runs (not the minikube host):

D:\docker\docker\chapter9\labs\pod-basic>kubectl port-forward nginx 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80

Visiting "127.0.0.1:8080" on the kubectl host now reaches nginx.


Note that this kind of mapping blocks the current terminal; interrupting it terminates the mapping.

Deleting a pod

D:\docker\docker\chapter9\labs\pod-basic>kubectl delete -f pod_nginx.yml
pod "nginx" deleted

D:\docker\docker\chapter9\labs\pod-basic>kubectl get pods
No resources found in default namespace.

Scaling pods horizontally

ReplicationController

Below is a resource of type ReplicationController; it creates 3 pods. In v1-version config files, ReplicationController is the resource type that supports scaling pod replicas.

apiVersion: v1
kind: ReplicationController 
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Create

Create a ReplicationController resource with kubectl create; kubectl get rc shows how the creation is going:

D:\docker\docker\chapter9\labs\replicas-set>kubectl create -f rc_nginx.yml
replicationcontroller/nginx created

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         3         1       12s

D:\docker\docker\chapter9\labs\replicas-set>kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-9m8br   1/1     Running   0          26s
nginx-h97jg   1/1     Running   0          26s
nginx-ntjcb   1/1     Running   0          26s

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         3         3       30s

Self-healing

Try deleting a pod; as with docker swarm, it recovers automatically:

D:\docker\docker\chapter9\labs\replicas-set>kubectl delete pods nginx-9m8br
pod "nginx-9m8br" deleted

D:\docker\docker\chapter9\labs\replicas-set>kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-h97jg   1/1     Running   0          3m18s
nginx-ntjcb   1/1     Running   0          3m18s
nginx-schc7   1/1     Running   0          8s

Scale

The kubectl scale rc command scales a ReplicationController horizontally:

D:\docker\docker\chapter9\labs\replicas-set>kubectl scale rc nginx --replicas=2
replicationcontroller/nginx scaled

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
nginx   2         2         2       6m35s

D:\docker\docker\chapter9\labs\replicas-set>kubectl scale rc nginx --replicas=4
replicationcontroller/nginx scaled

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
nginx   4         4         4       6m53s

Delete

D:\docker\docker\chapter9\labs\replicas-set>kubectl delete -f rc_nginx.yml
replicationcontroller "nginx" deleted

D:\docker\docker\chapter9\labs\replicas-set>kubectl get pods
No resources found in default namespace.

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rc
No resources found in default namespace.

ReplicaSet

ReplicaSet is the replacement for ReplicationController. It is supported starting from apps/v1-version config files and adds set-based selector requirements over what ReplicationController offers. Here is a yml configuration for one:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Create

Create a ReplicaSet with kubectl create and check it with kubectl get rs:

D:\docker\docker\chapter9\labs\replicas-set>kubectl create -f rs_nginx.yml
replicaset.apps/nginx created

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
nginx   3         3         3       15s

D:\docker\docker\chapter9\labs\replicas-set>kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-5ddz7   1/1     Running   0          21s
nginx-7gfjk   1/1     Running   0          21s
nginx-t4k5l   1/1     Running   0          21s

Self-healing

… …

Scale

Scale horizontally with kubectl scale rs:

D:\docker\docker\chapter9\labs\replicas-set>kubectl scale rs nginx --replicas=2
replicaset.apps/nginx scaled

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
nginx   2         2         2       42s

Delete

D:\docker\docker\chapter9\labs\replicas-set>kubectl delete -f rs_nginx.yml
replicaset.apps "nginx" deleted

D:\docker\docker\chapter9\labs\replicas-set>kubectl get rs
No resources found in default namespace.

Deployment

A deployment is a more abstract resource than a replica set: it creates a replica set and also lets you update it, for example changing images or ports, similar to docker service update and docker stack deploy in docker swarm. Below is a config file for one:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.12.2
        ports:
        - containerPort: 80

Create

Create the deployment with kubectl create and a deployment-type config file; kubectl get deployment shows its information:

D:\docker\docker\chapter9\labs\deployment>kubectl create -f deployment_nginx.yml
deployment.apps/nginx-deployment created

D:\docker\docker\chapter9\labs\deployment>kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0/3     3            0           9s

D:\docker\docker\chapter9\labs\deployment>kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           23s

D:\docker\docker\chapter9\labs\deployment>kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-84b8bdb667   3         3         3       32s

D:\docker\docker\chapter9\labs\deployment>kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-84b8bdb667-tfsfr   1/1     Running   0          39s
nginx-deployment-84b8bdb667-tr9dm   1/1     Running   0          39s
nginx-deployment-84b8bdb667-xp8pm   1/1     Running   0          39s

Viewing detailed information

D:\docker\docker\chapter9\labs\deployment>kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES         SELECTOR
nginx-deployment   3/3     3            3           3m36s   nginx        nginx:1.12.2   app=nginx

Updating the deployment

  • Update the image with kubectl set image deployment

    D:\docker\docker\chapter9\labs\deployment>kubectl set image deployment nginx-deployment nginx=nginx:1.13
    deployment.apps/nginx-deployment image updated
    
    D:\docker\docker\chapter9\labs\deployment>kubectl get deployment -o wide
    NAME               READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES       SELECTOR
    nginx-deployment   3/3     3            3           8m8s   nginx        nginx:1.13   app=nginx
    
    D:\docker\docker\chapter9\labs\deployment>kubectl get rs
    NAME                          DESIRED   CURRENT   READY   AGE
    nginx-deployment-687d765c64   3         3         3       29s
    nginx-deployment-84b8bdb667   0         0         0       8m18s
    
    D:\docker\docker\chapter9\labs\deployment>kubectl get pods
    NAME                                READY   STATUS    RESTARTS   AGE
    nginx-deployment-687d765c64-24s72   1/1     Running   0          36s
    nginx-deployment-687d765c64-pzz7w   1/1     Running   0          27s
    nginx-deployment-687d765c64-tm68b   1/1     Running   0          29s
    
  • kubectl apply -f <resource-yml>, given a yml file containing changes to an existing resource, also updates the running resource; apply is effectively create-or-update, whereas create only creates and errors out if the resource already exists.

  • kubectl edit deployment <deployment-name> also updates a resource: it opens an editor on the resource's yaml, and saving the edited yml applies the update.

Rolling updates

Deployment updates are rolling; kubectl describe deployment shows the strategy:

[root@k8s-master deployment]# kubectl describe deployment nginx-deployment
... ...
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
... ...

Rollback

kubectl rollout history shows a deployment's revision history:

D:\docker\docker\chapter9\labs\deployment>kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

View the details of revision 1 and revision 2:

[root@k8s-master deployment]# kubectl rollout history deployment nginx-deployment --revision 1
deployment.apps/nginx-deployment with revision #1
Pod Template:
  Labels:	app=nginx
	pod-template-hash=84b8bdb667
  Containers:
   nginx:
    Image:	nginx:1.12.2
    Port:	80/TCP
    Host Port:	0/TCP
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>

[root@k8s-master deployment]# kubectl rollout history deployment nginx-deployment --revision 2
deployment.apps/nginx-deployment with revision #2
Pod Template:
  Labels:	app=nginx
	pod-template-hash=687d765c64
  Containers:
   nginx:
    Image:	nginx:1.13
    Port:	80/TCP
    Host Port:	0/TCP
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>

kubectl rollout undo rolls the deployment back; adding --to-revision <revision-num> rolls back to a specific revision:

D:\docker\docker\chapter9\labs\deployment>kubectl rollout undo deployment nginx-deployment
deployment.apps/nginx-deployment rolled back

D:\docker\docker\chapter9\labs\deployment>kubectl get deployment -o wide
NAME               READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
nginx-deployment   3/3     3            3           10m   nginx        nginx:1.12.2   app=nginx

D:\docker\docker\chapter9\labs\deployment>kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION  CHANGE-CAUSE
2         <none>
3         <none>


D:\docker\docker\chapter9\labs\deployment>kubectl set image deployment nginx-deployment nginx=nginx:1.13
deployment.apps/nginx-deployment image updated

Self-healing

A deployment wraps a ReplicaSet and relies on the ReplicaSet for self-healing.

Namespace

Namespaces isolate resources: two resources of the same type and name can exist in different Namespaces, but not in the same one. Taking pods as the example, the k8s resource hierarchy is now: cluster - context - namespace - pod.

kubectl get namespaces lists the namespaces:

[root@k8s-master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   21h
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h

Get the pods under a given namespace; use the --all-namespaces flag to get resources across all namespaces:

[root@k8s-master ~]# kubectl get pod --namespace kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6c76c8bb89-c5dcd             1/1     Running   1          21h
coredns-6c76c8bb89-n5hqc             1/1     Running   1          21h
etcd-k8s-master                      1/1     Running   1          21h
kube-apiserver-k8s-master            1/1     Running   1          21h
kube-controller-manager-k8s-master   1/1     Running   2          21h
kube-proxy-ctz76                     1/1     Running   0          21h
kube-proxy-fn2fh                     1/1     Running   0          21h
kube-proxy-qbjjx                     1/1     Running   1          21h
kube-scheduler-k8s-master            1/1     Running   2          21h
weave-net-k777j                      2/2     Running   3          21h
weave-net-m88vt                      2/2     Running   0          21h
weave-net-w6krt                      2/2     Running   1          21h

Creating a namespace

[root@k8s-master ~]# kubectl create namespace demo
namespace/demo created
[root@k8s-master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   21h
demo              Active   3s
kube-node-lease   Active   21h
kube-public       Active   21h
kube-system       Active   21h

When creating a pod, specify the namespace in the yml file via the namespace child property of the top-level metadata property:

kind: Pod
metadata:
  name: nginx
  namespace: demo
  ... ...

Delete a namespace in the current context:

[root@k8s-master ~]# kubectl delete namespaces demo
namespace "demo" deleted

k8s built-in namespaces

[root@k8s-master ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   27h
kube-node-lease   Active   27h
kube-public       Active   27h
kube-system       Active   27h

kube-system runs the pods of the services kubernetes itself provides, such as etcd, controller, apiserver, and so on; among them is also a service providing DNS: kube-dns.

The built-in DNS service

Below is the built-in DNS service; its IP address is 10.96.0.10. kubernetes adds this IP as the DNS server address in each container's /etc/resolv.conf, so every DNS query a container makes goes to this IP.

[root@k8s-master ~]# kubectl get services -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   27h   k8s-app=kube-dns
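
To see this from inside a pod, you can read the file directly (using the nginx pod from earlier as an example); the output typically looks like the comments below:

kubectl exec nginx -- cat /etc/resolv.conf
# nameserver 10.96.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local
# options ndots:5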

Context

default is the fallback namespace for all contexts. If a context was created without specifying a namespace, then resources created later without a custom namespace, or without one specified, end up in this namespace:

[root@k8s-master ~]# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

We can also create our own context and give it a default namespace; as shown below, resources created in the demo context without an explicit namespace go into the demo namespace:

[root@k8s-master ~]# kubectl config set-context demo --user=john --cluster=kubernetes --namespace=demo
Context "demo" created.
[root@k8s-master ~]# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          demo                          kubernetes   john               demo
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
[root@k8s-master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.205.120:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: demo
    user: john
  name: demo
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Deleting a context:

[root@k8s-master ~]# kubectl config delete-context demo
deleted context demo from /root/.kube/config

Service

Reverse proxying and load balancing.

Selector and Label

A Service uses a Selector to match pods by their labels and directs traffic according to the label conditions in the Selector.

Service Type

ClusterIP

Exposes a service inside the k8s cluster

NodePort

Exposes a k8s service to the outside

LoadBalancer

Exposes the service to the public network (listens only on the public IP, while NodePort listens on internal IPs?)
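
As a sketch, a NodePort Service selecting the nginx pods used throughout this article could look like this (the name and node port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort        # also implies a ClusterIP inside the cluster
  selector:
    app: nginx          # routes traffic to pods labeled app=nginx
  ports:
  - port: 80            # the Service's cluster-internal port
    targetPort: 80      # the container port
    nodePort: 30080     # opened on every node (default range 30000-32767)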

Traffic forwarding modes

  • User-space proxy mode: routing information is stored in kube-proxy; forwarding has to switch into kube-proxy's user space and then back into kernel space.

  • iptables: one netfilter rule per pod (netfilter is the kernel code that the iptables tool operates on); load balancing is implemented by the forwarding table with fairly simple algorithms, and one rule per pod means that a large k8s cluster ends up with too many rules and frequent updates of the kernel's netfilter.

  • IPVS: one netfilter rule per service; the rule forwards traffic to LVS, which implements the load balancing, giving higher performance and more algorithms.

Blue-green release

First release the new-version pods so they run alongside the old-version pods, then change the Service's Selector labels to point at the new version; once the new version is stable, delete the old version. If the new version has problems, switch back to the old one.
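
A sketch of the switch-over, assuming the Service selects on a version label and that v1 and v2 pod sets both exist (all names are illustrative):

# point the Service at the new version's pods
kubectl patch service nginx-service -p '{"spec":{"selector":{"app":"nginx","version":"v2"}}}'

# if v2 misbehaves, point it back
kubectl patch service nginx-service -p '{"spec":{"selector":{"app":"nginx","version":"v1"}}}'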

k8s ingress

Essentially also a kind of service, comparable to a microservice gateway: it provides traffic routing (to different pods by domain, path, port, and so on), authentication, logging and monitoring, traffic management, and more. Like networking, ingress in k8s is a standard with multiple implementations; the popular one is nginx-ingress.
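
A minimal routing-rule sketch, assuming an ingress controller such as nginx-ingress is already installed and the nginx-service above exists (the host and names are illustrative; the networking.k8s.io/v1 API shown here is GA as of k8s v1.19):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com          # route by domain name
    http:
      paths:
      - path: /                     # route by path
        pathType: Prefix
        backend:
          service:
            name: nginx-service     # forward to this Service
            port:
              number: 80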

ConfigMap

Publish a ConfigMap object, then reference it from other objects such as Pods and Services, in one of two ways (sketched after this list):

  • Inject configuration into the container as environment variables: the configMapRef child of the envFrom property
  • Mount a volume into the container and place the configuration in the volume as files
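
A sketch combining both styles, with a hypothetical ConfigMap named app-config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "test"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:                  # every key becomes an environment variable
    - configMapRef:
        name: app-config
    volumeMounts:             # the same keys also appear as files
    - name: config-volume
      mountPath: /etc/app
  volumes:
  - name: config-volume
    configMap:
      name: app-config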

Propagating changes

Updating an already published ConfigMap does not propagate the update to running Pods that referenced it. One of the following is needed:

  • Restart the pods
  • Change the updated ConfigMap's name, or create a new ConfigMap with the new content and a new name, then change the ConfigMap name in the resources that reference it and re-publish them; thanks to the rolling-update mechanism this is smoother than the plain restart above

Secret

Secret, like ConfigMap, is a key-value configuration store and is likewise consumed via environment variable injection or volume mounts. The difference is that kubectl describe shows a ConfigMap's configuration but not a Secret's. With kubectl get secret <secret-name> -o yaml it does show, including the base64 encoding of the values, and if the yml file contained plain text, the plain text shows too. In any case base64 is not secure; it is trivially decoded.
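
A quick sketch with a hypothetical secret showing both behaviors:

# create a secret (values are stored base64-encoded, not encrypted)
kubectl create secret generic db-secret --from-literal=password=s3cret

# describe hides the value...
kubectl describe secret db-secret

# ...but -o yaml reveals the base64 form, which decodes trivially
kubectl get secret db-secret -o yaml
echo 'czNjcmV0' | base64 -d    # -> s3cret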

volume

k8s volumes come in many types; hostPath means a local file path. Under containers, the name child of volumeMounts references the name child of the volumes property (a sibling of containers) to mount a volume at a path in the pod. The mounted local file path needs to be shared.
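
A minimal hostPath sketch (the paths and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-volume
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: html                      # references the volume below by name
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    hostPath:                         # a directory on the node's filesystem
      path: /data/html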

PV and PVC

This decouples the concrete volume type from the volumes definition: the pod instead references a PVC object, and the PVC's yml definition references a PV. The PVC represents a claim action, while the PV defines the concrete volume type and persistence path.

kubectl get pv
kubectl get pvc
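
A sketch of the indirection, with illustrative names and a hostPath-backed PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:                    # the concrete volume type lives in the PV
    path: /data/pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo               # a pod's volumes section references this
spec:                          # claim via persistentVolumeClaim.claimName
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi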

resources

Set through the resources child property of containers.

request

The minimum requirement: if the cluster's resources cannot satisfy it, the pod release stays pending (if a deployment releases several pods, the ones whose resources can be satisfied are released first and the rest stay pending while resources are short), and the pod starts only once the resources are available.

limit

The maximum usable amount. If a pod's memory use exceeds the limit at runtime, it is OOM-killed and restarted (CrashLoopBackOff); after a certain number of failed restarts it is stopped. CPU, in contrast, is hard-capped.
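
A sketch of both settings on a container (values are illustrative; making requests equal to limits would give the Guaranteed class described below):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-limited
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:              # minimum the scheduler must find on a node
        memory: "128Mi"
        cpu: "250m"          # 0.25 CPU cores
      limits:                # ceiling enforced at runtime
        memory: "256Mi"      # exceeding this -> OOM kill
        cpu: "500m"          # exceeding this -> throttled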

Three configurations

  1. Guaranteed: request = limit; killed only when memory use exceeds the limit
  2. Burstable: request < limit; may be killed when the k8s cluster runs short of resources
  3. BestEffort: neither value is set; killed when k8s runs short of resources

Inspecting resource usage

metrics-server

Needs to be installed; afterwards the kubectl top command shows metrics for the various dimensions and resources.

kubernetes-dashboard

Also needs to be installed; it provides a graphical interface for viewing the various metrics.

Memory limits for Java pods in k8s

Container awareness must be enabled in the JVM so that k8s resource request/limit settings take effect, and the JVM's memory parameters set accordingly.
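
For example (an assumption here: these are the standard flags on JDK 10+, where container support exists and is on by default):

# size the heap from the container's memory limit instead of the host's RAM
java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar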

k8s readiness and liveness probes

They prevent traffic from being sent to a pod that is not actually usable, which would produce errors.

  • Readiness probe: checks during pod startup whether the pod is ready; traffic is sent only once it is ready.

  • Liveness probe: checks while the pod is running; if the pod is not alive, it is killed and restarted.

They should also be useful where containers depend on one another.
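
A sketch of both probes on the nginx container (paths and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-probed
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:           # no traffic is sent until this succeeds
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:            # a failure kills and restarts the container
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20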

helm/charts

Plugin and package management for k8s.
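
A minimal sketch, assuming helm 3 is installed and using the public bitnami repository as an example:

# register a chart repository and install a packaged application
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx

# list and remove releases
helm list
helm uninstall my-nginx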

