Kubernetes Basics



1. Kubernetes Deployment

1.1 Environment Overview

  • OS version: Ubuntu 18.04.4 LTS
Role          Hostname                        IP
k8s-master1   kubeadm-master1.haostack.com    172.16.62.18
k8s-master2   kubeadm-master2.haostack.com    172.16.62.19
k8s-master3   kubeadm-master3.haostack.com    172.16.62.20
ha1           ha1.haostack.com                172.16.62.21
ha2           ha2.haostack.com                172.16.62.22
node1         node1.haostack.com              172.16.62.31
node2         node2.haostack.com              172.16.62.32
node3         node3.haostack.com              172.16.62.33
harbor        harbor.haostack.com             172.16.62.25
dns           haostack.com                    172.16.62.24
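All of the hostnames above resolve through the dns host (172.16.62.24). Where a DNS server is not available, the same mapping can be kept in /etc/hosts on every machine; a sketch built from the table above (an addition, not part of the original setup):

cat >> /etc/hosts <<'EOF'
172.16.62.18 kubeadm-master1.haostack.com
172.16.62.19 kubeadm-master2.haostack.com
172.16.62.20 kubeadm-master3.haostack.com
172.16.62.21 ha1.haostack.com
172.16.62.22 ha2.haostack.com
172.16.62.31 node1.haostack.com
172.16.62.32 node2.haostack.com
172.16.62.33 node3.haostack.com
172.16.62.25 harbor.haostack.com
EOF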

1.2 System Preparation

  • Disable IPv6
sudo vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
# Update GRUB
update-grub
# Reboot
reboot
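After the reboot, IPv6 being fully off can be confirmed with a command that prints nothing when ipv6.disable=1 took effect (an addition, not in the original):

ip -6 addr show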

  • Disable the firewall
root@ha1:~# systemctl status firewalld
  • Disable ufw
root@ha1:~# ufw status
Status: inactive
  • Enable IP forwarding
sed -i '1i net.ipv4.ip_forward=1' /etc/sysctl.conf
sysctl -p
  • Set the timezone
sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 
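  • Disable swap (a hedged addition, not in the original steps: kubeadm's preflight checks fail while swap is enabled)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot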
  • Update the apt sources (sources.list)
# /etc/apt/sources.list
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse

apt-get update

1.3 Configure keepalived and haproxy on the HA Nodes

  • Both HA nodes need the packages installed
# Install keepalived + haproxy
apt install keepalived haproxy -y
# Start and enable the services
systemctl start haproxy
systemctl start keepalived
systemctl enable haproxy
systemctl enable keepalived

# keepalived configuration - copy the sample file and adjust the parameters
root@ha2:~# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
root@ha2:~# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_199 {
    state BACKUP
    interface ens160
    garp_master_delay 10
    smtp_alert
    virtual_router_id 22
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass
    }
    virtual_ipaddress {
        172.16.62.200 dev ens160 label ens160:1
    }
}

# haproxy configuration - the file must be identical on both nodes
root@ha1:/etc/haproxy# vim haproxy.cfg
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL). This list is from:
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
# An alternative list with additional directives can be obtained from
# https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
ssl-default-bind-options no-sslv3

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen k8s-api-6443
    bind 172.16.62.200:6443
    mode tcp
    balance roundrobin
    server master1 172.16.62.18:6443 check inter 3s fall 3 rise 5
    server master2 172.16.62.19:6443 check inter 3s fall 3 rise 5
    server master3 172.16.62.20:6443 check inter 3s fall 3 rise 5

# If the service reports errors, start it with an explicit config file
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg
/usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
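Once both services are running, exactly one HA node should hold the VIP; a quick check, assuming the ens160 interface name from the configuration above:

root@ha1:~# ip addr show ens160 | grep 172.16.62.200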

1.4 Harbor Configuration

  • Installation steps omitted


1.5 Docker Installation

  • All master and node hosts need this configuration; the Aliyun mirror is used here

1.5.1 Docker CE Mirror Repository

# Step 1: install prerequisite tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
# Step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Step 3: add the repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Step 4: update and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

# To install a specific Docker CE version instead:
# Step 1: list the available versions:
# apt-cache madison docker-ce
#   docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
#   docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.1~ce-0~ubuntu-xenial):
# sudo apt-get -y install docker-ce=[VERSION]

1.5.2 Installing Docker

  • Install Docker on all master and node hosts
# Install
sudo apt-get -y install docker-ce
# Check the version
root@kubeadm-master1:/# docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b7f0
 Built:             Wed Mar 11 01:25:46 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:44:07 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7adfa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

1.5.3 Docker Registry Mirror (Accelerator)

root@kubeadm-master2:/# more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"]
}
  • Restart the service
sudo systemctl daemon-reload
sudo systemctl restart docker
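The kubeadm preflight checks later in this guide warn that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. If desired, daemon.json can switch the driver while keeping the mirror configured above; a hedged sketch (optional, the walkthrough continues with the default):

cat <<'EOF' >/etc/docker/daemon.json
{
  "registry-mirrors": ["https://9916w1ow.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker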

1.6 Kubernetes Installation

1.6.1 Kubernetes Mirror Repository

apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
# Pin a specific version (see the next section)
apt-get install -y kubelet kubeadm kubectl

1.6.2 Check Available Versions

root@kubeadm-master1:/# apt-cache madison kubeadm
   kubeadm | 1.18.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.4-01 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.9-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.8-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.7-01 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages

1.6.3 Installing Kubernetes

  • Install kubeadm on all master and node hosts
# Install a lower version first, to leave room for the upgrade demo later
root@kubeadm-master1:/# apt install kubeadm=1.17.2-00 kubectl=1.17.2-00 kubelet=1.17.2-00
# Enable and start the service
root@kubeadm-master1:/var/log# systemctl enable kubelet.service
root@kubeadm-master1:/var/log# systemctl start kubelet.service
# Check the version
root@kubeadm-master1:/# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:27:49Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
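To keep a routine apt upgrade from silently replacing the pinned versions before the planned upgrade in section 3, the packages can be put on hold (an addition, not in the original):

apt-mark hold kubelet kubeadm kubectl     # freeze the versions
apt-mark unhold kubelet kubeadm kubectl   # release them again before upgrading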

1.6.4 kubeadm Shell Completion

mkdir -p /data/scripts
kubeadm completion bash > /data/scripts/kubeadm_completion.sh
# Effective for the current session only
source /data/scripts/kubeadm_completion.sh
# Add the source line to /etc/profile so it persists
vim /etc/profile
    source /data/scripts/kubeadm_completion.sh
source /etc/profile
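kubectl provides the same completion mechanism, so the analogous setup also works (a sketch following the pattern above):

kubectl completion bash > /data/scripts/kubectl_completion.sh
source /data/scripts/kubectl_completion.sh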

1.6.5 kubelet Error Logs

root@kubeadm-master1:/var/log/apt# tail -f /var/log/syslog
Jul 22 10:55:46 kubeadm-master1 kubelet[2278]: F0722 10:55:46.086882 2278 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Jul 22 10:55:46 kubeadm-master1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jul 22 10:55:46 kubeadm-master1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 22 10:55:56 kubeadm-master1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jul 22 10:55:56 kubeadm-master1 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4270.
Jul 22 10:55:56 kubeadm-master1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jul 22 10:55:56 kubeadm-master1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jul 22 10:55:56 kubeadm-master1 kubelet[2338]: F0722 10:55:56. 2338 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

Ignore these errors and simply leave the service enabled: the kubelet cannot start yet because its configuration has not been generated. It will come up on its own once kubeadm init succeeds, at which point the logs can be checked again.

1.7 kubeadm init Parameters

--apiserver-advertise-address string    # local IP address the API server will listen on
--apiserver-bind-port int32             # port the API server binds to, default 6443
--apiserver-cert-extra-sans stringSlice # optional extra Subject Alternative Names for the API server serving certificate; may be IP addresses or DNS names
--cert-dir string                       # path where certificates are stored, default /etc/kubernetes/pki
--certificate-key string                # key used to encrypt the control-plane certificates in the kubeadm-certs Secret
--config string                         # path to a kubeadm configuration file
--control-plane-endpoint string         # stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain; multi-master HA is built on this option
--cri-socket string                     # path of the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to auto-detect it; set it only when multiple CRIs are installed or the socket path is non-standard
--dry-run                               # apply nothing, only print what would be done; effectively a test run
--experimental-kustomize string         # path to the kustomize patches for the static pod manifests
--feature-gates string                  # key=value pairs describing feature gates, e.g. IPv6DualStack=true|false (ALPHA - default=false)
--ignore-preflight-errors strings       # preflight errors to ignore, e.g. swap; "all" ignores everything
--image-repository string               # image registry to pull from, default k8s.gcr.io
--kubernetes-version string             # Kubernetes version to install, default stable-1
--node-name string                      # node name
--pod-network-cidr                      # pod IP address range
--service-cidr                          # service network address range
--service-dns-domain string             # internal cluster domain, default cluster.local; the DNS service (kube-dns/coredns) resolves records under it
--skip-certificate-key-print            # do not print the certificate encryption key
--skip-phases strings                   # phases to skip
--skip-token-print                      # skip printing the token
--token                                 # token to use
--token-ttl                             # token lifetime, default 24h; 0 means never expire
--upload-certs                          # upload the control-plane certificates

# Global options:
--add-dir-header          # if true, add the file directory to log message headers
--log-file string         # if non-empty, write logs to this file
--log-file-max-size uint  # maximum size of the log file in MB, default 1800; 0 means no limit
--rootfs                  # absolute path to the host's root filesystem
--skip-headers            # if true, omit header prefixes in log messages
--skip-log-headers        # if true, omit headers when opening log files
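The deployment below drives kubeadm init from a config file, but the same cluster settings could be expressed directly with the flags above; a hedged sketch using this cluster's values (equivalent in spirit, not taken from the original):

kubeadm init \
    --control-plane-endpoint 172.16.62.200:6443 \
    --kubernetes-version v1.17.2 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --pod-network-cidr 10.10.0.0/16 \
    --service-cidr 172.26.0.0/16 \
    --service-dns-domain haostack.com \
    --upload-certs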

1.8 HA Master Initialization

kubeadm config print init-defaults                      # print the default init configuration
kubeadm config print init-defaults > kubeadm-init.yaml  # write the default configuration to a file

1.8.1 The kubeadm-init.yaml File

root@kubeadm-master1:/data# vim kubeadm-init.yaml
  ttl: 24h0m0s                                 # token expiry
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.62.18               # master1 IP address
  bindPort: 6443                               # API server port
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: kubeadm-master1.haostack.com           # master node name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki           # certificate path
clusterName: kubernetes                        # cluster name
controlPlaneEndpoint: 172.16.62.200:6443       # VIP-based endpoint
controllerManager: {}
dns:
  type: CoreDNS                                # use CoreDNS for resolution
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # image registry
kind: ClusterConfiguration
kubernetesVersion: v1.17.2                     # version to install
networking:
  dnsDomain: haostack.com                      # cluster domain
  podSubnet: 10.10.0.0/16                      # pod subnet; must match the flannel subnet
  serviceSubnet: 172.26.0.0/16                 # service subnet
scheduler: {}
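The control-plane images can optionally be pre-pulled before running the init, which the init output below also hints at ('kubeadm config images pull'):

kubeadm config images pull --config kubeadm-init.yaml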

1.8.2 Initialization Output

root@kubeadm-master1:/data# kubeadm init --config kubeadm-init.yaml
W0725 11:17:50. 21297 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0725 11:17:50. 21297 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master1.haostack.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.haostack.com] and IPs [172.26.0.1 172.16.62.18 172.16.62.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master1.haostack.com localhost] and IPs [172.16.62.18 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master1.haostack.com localhost] and IPs [172.16.62.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0725 11:18:02. 21297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0725 11:18:02. 21297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 38. seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubeadm-master1.haostack.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-master1.haostack.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[kubelet-check] Initial timeout of 40s passed.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
    --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
    --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb
root@kubeadm-master1:/data#

1.8.3 Initialization Complete


1.8.4 Create the kubeconfig File

  • Follow the instructions from the init output
root@kubeadm-master1:/data# mkdir -p $HOME/.kube
root@kubeadm-master1:/data# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kubeadm-master1:/data# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Check node status
root@kubeadm-master1:~/.kube# kubectl get node
NAME                           STATUS     ROLES    AGE     VERSION
kubeadm-master1.haostack.com   NotReady   master   4m20s   v1.17.2

1.9 Deploy the Pod Network Component

  • Download the network component manifest from its GitHub path (URL below)


  • https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# The subnet must match podSubnet in the kubeadm-init.yaml file
net-conf.json: |
  {
    "Network": "10.10.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
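A sketch of fetching the manifest and adjusting the subnet in one step (the sed pattern assumes flannel's stock default network of 10.244.0.0/16):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#10.244.0.0/16#10.10.0.0/16#' kube-flannel.yml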

1.9.1 Install the flannel Network Component

root@kubeadm-master1:/data# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
# Verify the master node status - it should now be Ready
root@kubeadm-master1:/data# kubectl get node
NAME                           STATUS   ROLES    AGE   VERSION
kubeadm-master1.haostack.com   Ready    master   43m   v1.17.2

1.9.2 Configure subnet.env

  • If flannel reports errors, create the subnet.env file by hand under /run/flannel as the error message indicates
root@node3:/run/flannel# more subnet.env
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

1.10 Generate the Certificate Key on the Current Master for Adding New Control-Plane Nodes

  • This key matters; save it somewhere safe
root@kubeadm-master1:/run/flannel# kubeadm init phase upload-certs --upload-certs
W0725 11:28:23. 26849 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
W0725 11:28:23. 26849 version.go:102] falling back to the local client version: v1.17.2
W0725 11:28:23. 26849 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0725 11:28:23. 26849 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa

1.10.1 Add New Master Nodes

  • Run the following on the other master nodes, which already have docker, kubeadm, and kubelet installed
# The join command
kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
    --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
    --control-plane --certificate-key 77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa

# On master2
root@kubeadm-master2:~/.kube# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
    --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
    --control-plane --certificate-key 77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa

# On master3
root@kubeadm-master3:/var/lib/etcd# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
>     --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
>     --control-plane --certificate-key 77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa

# Then follow the prompts to create the kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.10.2 Master Join Output

root@kubeadm-master3:/var/lib/etcd# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
>     --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
>     --control-plane --certificate-key 77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master3.haostack.com localhost] and IPs [172.16.62.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master3.haostack.com localhost] and IPs [172.16.62.20 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master3.haostack.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.haostack.com] and IPs [172.26.0.1 172.16.62.20 172.16.62.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0725 11:43:52. 20536 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0725 11:43:52. 20536 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0725 11:43:52. 20536 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-07-25T11:44:20.889+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.16.62.20:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node kubeadm-master3.haostack.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-master3.haostack.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster

1.10.3 Verify Master Status

root@kubeadm-master1:/data# kubectl get node
NAME                           STATUS   ROLES    AGE     VERSION
kubeadm-master1.haostack.com   Ready    master   28m     v1.17.2
kubeadm-master2.haostack.com   Ready    master   6m31s   v1.17.2
kubeadm-master3.haostack.com   Ready    master   3m7s    v1.17.2

1.10.4 Verify Cluster Component Health

root@kubeadm-master1:/data# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

1.10.5 Verify CSR Status

root@kubeadm-master1:/data# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-7chml   6m18s   system:bootstrap:abcdef   Approved,Issued
csr-n7mz2   2m55s   system:bootstrap:abcdef   Approved,Issued

1.11 Add Worker Nodes to the Cluster

  • Every node that is to join the cluster needs docker, kubeadm, and kubelet installed, so repeat the earlier installation steps on each of them: configure the apt repository, configure the Docker registry mirror, install the packages, and start the kubelet service

1.11.1 Join the Nodes

  • The join command is the one returned by kubeadm init on the master
# The join command
kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
    --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb

# node1
root@node1:/# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
>     --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb

# node2
root@node2:~# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
>     --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb

# node3
root@node3:/# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
>     --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb
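Bootstrap tokens expire after 24 hours by default (see --token-ttl in section 1.7). If the token has expired by the time a node joins, a fresh join command can be generated on any master:

root@kubeadm-master1:~# kubeadm token create --print-join-command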

1.11.2 Node Join Output

root@node3:/# kubeadm join 172.16.62.200:6443 --token abcdef.0abcdef \
>     --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb
W0725 11:51:59. 32501 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# The node's images were already pulled from Aliyun
root@node3:~# docker image ls
REPOSITORY                                                       TAG             IMAGE ID       CREATED        SIZE
quay.io/coreos/flannel                                           v0.12.0-amd64   4e9f801d2217   4 months ago   52.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.2         cba2a99699bd   6 months ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1             da86e6ba6ca1   2 years ago    742kB

1.11.3 Verify Node Status (all Ready)

root@kubeadm-master1:/data# kubectl get node
NAME                           STATUS   ROLES    AGE     VERSION
kubeadm-master1.haostack.com   Ready    master   36m     v1.17.2
kubeadm-master2.haostack.com   Ready    master   14m     v1.17.2
kubeadm-master3.haostack.com   Ready    master   11m     v1.17.2
node1.haostack.com             Ready    <none>   3m22s   v1.17.2
node2.haostack.com             Ready    <none>   3m21s   v1.17.2
node3.haostack.com             Ready    <none>   3m18s   v1.17.2

1.11.4 Verify Pod Status

root@kubeadm-master1:/data# kubectl get pod -A
NAMESPACE     NAME                                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f9c544f75-lspdp                               1/1     Running   0          52m
kube-system   coredns-7f9c544f75-mfbm6                               1/1     Running   0          52m
kube-system   etcd-kubeadm-master1.haostack.com                      1/1     Running   0          52m
kube-system   etcd-kubeadm-master2.haostack.com                      1/1     Running   2          30m
kube-system   etcd-kubeadm-master3.haostack.com                      1/1     Running   0          26m
kube-system   kube-apiserver-kubeadm-master1.haostack.com            1/1     Running   0          52m
kube-system   kube-apiserver-kubeadm-master2.haostack.com            1/1     Running   2          30m
kube-system   kube-apiserver-kubeadm-master3.haostack.com            1/1     Running   2          26m
kube-system   kube-controller-manager-kubeadm-master1.haostack.com   1/1     Running   1          52m
kube-system   kube-controller-manager-kubeadm-master2.haostack.com   1/1     Running   0          30m
kube-system   kube-controller-manager-kubeadm-master3.haostack.com   1/1     Running   0          26m
kube-system   kube-flannel-ds-amd64-5dt7f                            1/1     Running   7          18m
kube-system   kube-flannel-ds-amd64-dq6zv                            1/1     Running   7          18m
kube-system   kube-flannel-ds-amd64-jjptb                            1/1     Running   7          18m
kube-system   kube-flannel-ds-amd64-jvbj5                            1/1     Running   10         30m
kube-system   kube-flannel-ds-amd64-l6x5h                            1/1     Running   9          26m
kube-system   kube-flannel-ds-amd64-zb9xj                            1/1     Running   13         44m
kube-system   kube-proxy-b5mgq                                       1/1     Running   0          52m
kube-system   kube-proxy-c65kg                                       1/1     Running   0          26m
kube-system   kube-proxy-cptfw                                       1/1     Running   0          18m
kube-system   kube-proxy-gkknw                                       1/1     Running   0          18m
kube-system   kube-proxy-k95g7                                       1/1     Running   0          18m
kube-system   kube-proxy-wgs77                                       1/1     Running   0          30m
kube-system   kube-scheduler-kubeadm-master1.haostack.com            1/1     Running   1          52m
kube-system   kube-scheduler-kubeadm-master2.haostack.com            1/1     Running   3          30m
kube-system   kube-scheduler-kubeadm-master3.haostack.com            1/1     Running   2          26m

1.12 Create Test Containers and Verify Networking

  • Create test containers and check that pods can communicate with each other
# Create the test deployments
root@kubeadm-master1:/run/flannel# kubectl run net-test1 --image=alpine --replicas=2 sleep 
root@kubeadm-master1:/data# kubectl run net-test2 --image=alpine --replicas=3 sleep 
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/net-test2 created

# List the pods
root@kubeadm-master1:/data# kubectl get pod -A -o wide
NAMESPACE   NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE                 NOMINATED NODE   READINESS GATES
default     net-test1-5fcc69db59-2452n   1/1     Running   0          94s   10.10.4.2   node2.haostack.com   <none>           <none>
default     net-test1-5fcc69db59-fh4fb   1/1     Running   0          94s   10.10.3.2   node1.haostack.com   <none>           <none>
default     net-test2-8456fd74f7-5jx5f   1/1     Running   0          24s   10.10.4.3   node2.haostack.com   <none>           <none>
default     net-test2-8456fd74f7-8k9mn   1/1     Running   0          24s   10.10.3.3   node1.haostack.com   <none>           <none>
default     net-test2-8456fd74f7-r5kfh   1/1     Running   0          24s   10.10.5.2   node3.haostack.com   <none>           <none>

# Exec into a pod
root@kubeadm-master1:/data# kubectl exec -it net-test2-8456fd74f7-5jx5f sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 6A:5C:93:FA:AD:45
          inet addr:10.10.4.3  Bcast:10.10.4.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:42 (42.0 B)  TX bytes:42 (42.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

# Test the network: the node-side gateway
/ # ping 10.10.4.1
PING 10.10.4.1 (10.10.4.1): 56 data bytes
64 bytes from 10.10.4.1: seq=0 ttl=64 time=0.226 ms
^C
--- 10.10.4.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.226/0.226/0.226 ms
# External connectivity works, resolved through CoreDNS
/ # ping www.baidu.com
PING www.baidu.com (180.101.49.11): 56 data bytes
64 bytes from 180.101.49.11: seq=0 ttl=48 time=9.238 ms
^C
--- www.baidu.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 9.238/9.238/9.238 ms
# A pod on another node
/ # ping 10.10.3.2
PING 10.10.3.2 (10.10.3.2): 56 data bytes
64 bytes from 10.10.3.2: seq=0 ttl=62 time=2.059 ms
64 bytes from 10.10.3.2: seq=1 ttl=62 time=0.752 ms
^C
--- 10.10.3.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.752/1.405/2.059 ms
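Service DNS can be checked from inside the same pod to round out the test (an addition; alpine's busybox provides nslookup, and the cluster domain is haostack.com per kubeadm-init.yaml):

/ # nslookup kubernetes.default.svc.haostack.com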

2. Deploy the Dashboard

  • Follow the deployment steps on the project's GitHub page


2.1 Pull the Dashboard Images

docker pull kubernetesui/dashboard:v2.0.0-rc6
docker pull kubernetesui/metrics-scraper:v1.0.3
root@kubeadm-master1:/data/dashboard# docker image ls
REPOSITORY                                     TAG             IMAGE ID       CREATED        SIZE
harbor.haostack.com/k8s/jack_dashboard         v2.0.0-rc6      cdc71b5a8a0e   4 months ago   221MB
kubernetesui/dashboard                         v2.0.0-rc6      cdc71b5a8a0e   4 months ago   221MB
quay.io/coreos/flannel                         v0.12.0-amd64   4e9f801d2217   4 months ago   52.8MB
kubernetesui/metrics-scraper                   v1.0.3          3327f0dbcb4a   5 months ago   40.1MB
harbor.haostack.com/k8s/jack_metrics-scraper   v1.0.3          3327f0dbcb4a   5 months ago   40.1MB

2.2 Push to Harbor

docker push harbor.haostack.com/k8s/jack_dashboard:v2.0.0-rc6
docker push harbor.haostack.com/k8s/jack_metrics-scraper:v1.0.3
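The harbor.haostack.com/k8s/... entries in the image listing above imply the images were re-tagged before the push; a sketch of that step:

docker tag kubernetesui/dashboard:v2.0.0-rc6 harbor.haostack.com/k8s/jack_dashboard:v2.0.0-rc6
docker tag kubernetesui/metrics-scraper:v1.0.3 harbor.haostack.com/k8s/jack_metrics-scraper:v1.0.3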

2.4 Deploy Dashboard 2.0.0-rc6

  • Apply dashboard-2.0.0-rc6.yml and admin-user.yml
root@kubeadm-master1:/data/dashboard# kubectl apply -f dashboard-2.0.0-rc6.yml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
root@kubeadm-master1:/data/dashboard# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
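The admin-user.yml applied above is not reproduced in the original; a conventional manifest from the dashboard documentation, which matches the serviceaccount/admin-user and clusterrolebinding/admin-user objects created in the output, would look like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard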

2.5 Get the Login Token

  • The token matters; save it
# First locate the admin-user secret
root@kubeadm-master1:/data/dashboard# kubectl get secret -A | grep admin-user
kubernetes-dashboard   admin-user-token-zvzhc   kubernetes.io/service-account-token   3   11s
# Get the token
root@kubeadm-master1:/data/dashboard# kubectl describe secret admin-user-token-zvzhc -n kubernetes-dashboard
Name:         admin-user-token-zvzhc
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 0356b7eb-3caa-4f92-9093-cf17f013313a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InZUR2xzOHZNcTZIY1h0QVMyanN4bGhkY0JUNnpfY2RVSnh6UDdVanRQbHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2emhjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwMzU2YjdlYi0zY2FhLTRmOTItOTA5My1jZjE3ZjAxMzMxM2EiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Hm0V37rhde5hibDQKOs_a3y55ZCM_mcSFc3epwy1XiPQ61_a0xP_Wk1acYYGbdNyOEtJ63W0E_yCtbJIj2pLU_G9ZUlilPLE6DydTmTHBoe0yaYVUrBHvNbWZbBs6hu8V1DMSOhletNxBInKJ5eRKyvgV9LoriJRwO4K8U9Ce2sXWTubDyeGwBXhI2Qf_ecv2v5c89IHmYm3VG8HcZuqyIv3nFcxIpeADQd8mnrpY1AizhTDsyKVRbfpGirSkKC7AN4USo37z58sWDnMpDHG34IHMjOemPKx5d-uRrLAp6oCCsdpkaPeT4ulur8Px71FzZSRQ279SJR5aJyvpRR4sQ

2.6 Log in with a kubeconfig File

  • Edit the kubeconfig file
root@kubeadm-master1:/data/nginx# kubectl get secret -A | grep admin-user
kubernetes-dashboard   admin-user-token-zvzhc   kubernetes.io/service-account-token   3   19h
root@kubeadm-master1:/data/nginx# kubectl describe secret admin-user-token-zvzhc -n kubernetes-dashboard
Name:         admin-user-token-zvzhc
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 0356b7eb-3caa-4f92-9093-cf17f013313a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InZUR2xzOHZNcTZIY1h0QVMyanN4bGhkY0JUNnpfY2RVSnh6UDdVanRQbHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2emhjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwMzU2YjdlYi0zY2FhLTRmOTItOTA5My1jZjE3ZjAxMzMxM2EiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Hm0V37rhde5hibDQKOs_a3y55ZCM_mcSFc3epwy1XiPQ61_a0xP_Wk1acYYGbdNyOEtJ63W0E_yCtbJIj2pLU_G9ZUlilPLE6DydTmTHBoe0yaYVUrBHvNbWZbBs6hu8V1DMSOhletNxBInKJ5eRKyvgV9LoriJRwO4K8U9Ce2sXWTubDyeGwBXhI2Qf_ecv2v5c89IHmYm3VG8HcZuqyIv3nFcxIpeADQd8mnrpY1AizhTDsyKVRbfpGirSkKC7AN4USo37z58sWDnMpDHG34IHMjOemPKx5d-uRrLAp6oCCsdpkaPeT4ulur8Px71FzZSRQ279SJR5aJyvpRR4sQ

# Copy the config file
cp /root/.kube/config /data/kubeconfig
# Edit the kubeconfig
root@kubeadm-master1:/data# more kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EY3lOVEF6TVRjMU0xb1hEVE13TURjeU16QXpNVGMxTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkJyCmEyOFRMcFNLeTl1WXE2dWFMQWI1OUhaN1lTa1JZTXpJZ3oxcXQza3oyTDY5SU8rTVpoYjNXeVA4ZVJETVY1RW0KRExNMXhLZU1sNzdPKzNyNHdiZ0JBMXN1aU15Mzh0YUhibFUxblVrbWtETFIzaVNvcnpGaUV5UGFFSmJyaHNXSQpSMm1Sc1pBWTh1d3l4N2Y2bm9ROTJOenVnYWxwOXBMaGlsSDU4d2pldGJFYXErbVFKR092STdrNzNLMmtFMXJQCjArcDY0Ly9ZdHVweUtYWFBCVEI0ZzJOQUYramdxbWJFSHNMTFBBOUpaZUFzNXZzVmJhSVBnZFRwcGN0ZUFmUzYKS0VIUW1nc3NSMUx1aksrL0VSOXpWd0N3NWNWakFVWUlNdlMzU3BXQ2lzM0lGWHNaKytWRDM5R3QxVlpnZGVubwp6RlhhQ2dVTWRzaVRsZ0xaZUdjQ0F3RUFBYU1qTUNFd0RnWURWUjBVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLUlk0eDJoNWMzMFF6YXVpYmx1Tk5KajAyQVoKVjAvUTFDR3hGSHZFcjFUd21GdHphYisvS3BjRlV1YzhCUzY5TUM1OElsbXQ3b3diMDFpdnhiTkRIYXVSa2lLcwpmdy9jcldOS3VjSUZVZElDTytwTlJaRzJTUDhkbzBCbmVTR3Z4WUplTHd6a0NkbTBtYVdxZG9UZ0xTa0s4YXE4Ck1kaEtjY2F5czhaNFFvSnRTMTlwNWRsRlBBQUxrSFJQTm5YenRNSElDUmREZUZEQVNsN2VPYXArQWpUN0tod0QKSG55cWQ3TUYxT2JheWFpUTRHTEowRjhVS2NoVW9RQlNhRllIZlFDYkNIbnpKZlJVN1AzNVZzaHJrRjlpVFFkdQpucEkwZ0F3WEd5cGovSWRWWXVUUTZiTHB1d0kyM0QraXBiQnFPZHJpSE1XM3BFdHY2bmlxMXpvbEVOTT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.62.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJUGxRS0thWTMrY013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBM01qVXdNekUzTlROYUZ3MHlNVEEzTWpVd016RTRNREJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXQrTnJFY1BubEJFZHJLRUMKS3BOT0xsRnJQcnFQNDV4ZGlFSmNzUE5IcDc0STJ0TnRsVGlzSmcrY1JPTzg1R29mOW5QZkI3MVFjd05sSGZoZwpYZEZNeHNkRWhydDl2L0ZnQUdQbk9iRzRUNnUvL3ZKM2NlckYwQWRiZGtZcGVKUWY1aS9KK25nbXpwb0dnZ0YvClVYYkxLRWdncTdFbTJXbEQxNG1KWkZTRGRwaXFCQmhrdVhoenduR2VFT200aHpLOVFNblY1QU5US1J1Uk5rZXoKbjdYWjhPT1VYRXMzVjQ4RFN5bEFISzlZRHR5V1pTZzAxWG1nenE4TEZwaFIxcjBjaks0MmlxNytrVzF6cHF3OAoyWC9EN3c0UE5rOWNCVnJRaHVUeE5YWjhSdEhGQnBxMVhlU2VNWUxNWFpVd3ZQSWlxTnVhKzd1d2pzK1ZxUTl3CnNzVUZMUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBUTJUdnpGNjYvQ1dpNXBVaGhqbU1NcFhJQVkydERoT0dNWQpra0lnUkVOLzJLYzNnTUR4LzlYOFEybkZGYy9XQzFjbEdiSElHQUIxYlVUekR6T1laM1Zqc05uUFJ4V2J6MkNlCkVlSE0wUjd5V0tWbzZvWFhITDNubXJDYTFoVVFaUUpoeVlydkVONGZVMFYrY1YzNWlCbE8wQ0JnbWxjK1VrOHIKc1Q1OTJBMDFJLzRxRWdkWlllVlcrV0RVNVV6cGVjL3NzNFUwMElwVFNSVWJjQ25Fc1VNM2dXZ1NZL2dsVG5XdwpHOGYzNjVxM3BWUVJ6c2hDT0VuSE1Wak9tbEJvUTh5SkFsa0p2MDJldzhlcHRHNHB3dmVSYzJLTWoyYm1LUzdvCkg5Ukp1MHIrS3c2ZHd6QVVESkMvU216QXZUTHBkS0hJM0hhQjlYQW5DMDZDd2VWZGN0bz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBdCtOckVjUG5sQkVkcktFQ0twTk9MbEZyUHJxUDQ1eGRpRUpjc1BOSHA3NEkydE50CmxUaXNKZytjUk9PODVHb2Y5blBmQjcxUWN3TmxIZmhnWGRGTXhzZEVocnQ5di9GZ0FHUG5PYkc0VDZ1Ly92SjMKY2VyRjBBZGJka1lwZUpRZjVpL0orbmdtenBvR2dnRi9VWGJMS0VnZ3E3RW0yV2xEMTRtSlpGU0RkcGlxQkJoawp1WGh6d25HZUVPbTRoeks5UU1uVjVBTlRLUnVSTmtlem43WFo4T09VWEVzM1Y0OERTeWxBSEs5WUR0eVdaU2cwCjFYbWd6cThMRnBoUjFyMGNqSzQyaXE3K2tXMXpwcXc4MlgvRDd3NFBOazljQlZyUWh1VHhOWFo4UnRIRkJwcTEKWGVTZU1ZTE1YWlV3dlBJaXFOdWErN3V3anMrVnFROXdzc1VGTFFJREFRQUJBb0lCQUROQ3JRMGx2RDkxU2YxZQpZTGszbVBxbWJhdnQyOENLVFRSM3Mxa01hRFFsY0ZoM3lidG9NZXptT3h5bEUzbms3NFlISk93R1pRKzZxWXhpCk9aTE5qb1oyOCs1UEE2M20vbWo5Y0c2UDBSNDhkV2YvZFRhSFNKOUYvY1FKcVBQWTdzOS9FT0hHYnFMM0lzdEkKMlpIKytJRUJJa0phUHNjcVplUUdqZ3N1MS9yTitGdXBGemhrSCtiVFJ4T2xsdkU0OHlseS83TEJuZlJyVTNhbgpRZzVMdlEyYUlGaFdoRktyNEl4T3RjUnBaSkNhWWRIZ0NsM09rdXpTR2tvMm5DbjY3VjRNTm9kQk1IbkJGWTNaCkZmMFpWWWlQYkNITWd1WGE2QUdrVjJrNGxDa2FyLzdJVWNaTGxpNGVPZlhsNlhhQVhHaEFDa2Rad2M0RE4vblMKT0VRNlhHRUNnWUVBN1VlVFV2MDdRUFNUcC92T2taVWNia2ZIM2dMNkFqYU0wZjZXd0t6ZmxEc2NWLzVPNEhYYwpEcHZwVGI1eVZQc0dXclIzWVhEWEhtNHBrSkViaWtzUkh0YXVIZlZMdG84eDhpdnNSaGVHZkFvczIvMlBNeVZXCmcvY1hSUXBYNTNOOCtvOWJmQ0Y2anpBYWxlMVFsS1RzdXFTSWFnelErQjhtOFdpczFCdVVPMWtDZ1lFQXhtVjYKM3ArMFF0MFJoQWlEWXBuTzFuaVFaSFFuQndQeGpZcmNjdWZod2dibFdaNlBCQ1c2ZWtRZVJkNWxzUWhWUkhIUgpOTHBzQzVXOTNWWFlZOUo2TDBaTDBDdzlXalpQYWVXYVFmblRCRnFTNmpLbFlxdzZheUZIamlHUUdBRDhzT0xHCjM2OS9GYzVMOHAxa0d3Q0dLOW54cDVNQm9SUjl1SFVaUU43WDRmVUNnWUFpcm1LSEw4STRaVWNydDI5aThnTjgKenZzVXBTUzdyQk43SWhZUXhYUE1hN05oM1NiVVFnWFBFTlRSNnpNMDNwZjRMQWFDOUlaTXlWZEQ3U0cwWGZKNwpxbTg2cTc3TVNUUElyTWpWR2QwclJpVjJaaUpISEg3L3ZON20xWE14dmp5WE50cnRVc3RpSUdyU1hTUjVCWDRnCmJhb09yaDdoRlZTUTFuYmtiYitGeVFLQmdITEhKUFdFMlluUlVaL2NPUDZqVXlsN0tMWWxDS3NqV2V6MFNDTm0KQ1pMeDRHQWZ2a2U4K0F4aU9rMWJvK051bWIzMlJ2MUZXTnErNzlBTUtSdGZHbmNkS1NFdlp2TTQ5bXFpZmNMcgpvR3dsWmxkOW8zYlpneGFWYzB0RUdaUDVoamRqaTRDL2pEdDJWVFB3WUlqS25kVGl5czZTMnQ5dzltYnZ3QU5xCnFPUzlBb0dBRmgrOGRWTjQvSE5sMEpMeXRmdFJrckNmQnU5eGRkK1RDQ1IwYjVtV0YyQVprajU0UnBzcUdWK2IKZ1dZOWZZT1ZBaWg3blJaOTdyUnRkTHJRQlNjVDQ3VHhFY2VJQnB1cXgzYzBSclVRTXFVNXVUbStmdlZqVXZSYQpCSnFFU056VDlwS0lLaUJFallXd1ZLVzFKNnRMcmFRUUljZUMvNkJ5UC8yZWRiSmtkUjA9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    # Paste the admin-user token here; mind the indentation
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6InZUR2xzOHZNcTZIY1h0QVMyanN4bGhkY0JUNnpfY2RVSnh6UDdVanRQbHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXp2emhjIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIwMzU2YjdlYi0zY2FhLTRmOTItOTA5My1jZjE3ZjAxMzMxM2EiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Hm0V37rhde5hibDQKOs_a3y55ZCM_mcSFc3epwy1XiPQ61_a0xP_Wk1acYYGbdNyOEtJ63W0E_yCtbJIj2pLU_G9ZUlilPLE6DydTmTHBoe0yaYVUrBHvNbWZbBs6hu8V1DMSOhletNxBInKJ5eRKyvgV9LoriJRwO4K8U9Ce2sXWTubDyeGwBXhI2Qf_ecv2v5c89IHmYm3VG8HcZuqyIv3nFcxIpeADQd8mnrpY1AizhTDsyKVRbfpGirSkKC7AN4USo37z58sWDnMpDHG34IHMjOemPKx5d-uRrLAp6oCCsdpkaPeT4ulur8Px71FzZSRQ279SJR5aJyvpRR4sQ

2.7 Verify the Pods

  • Verify from a master node
root@kubeadm-master2:/etc/kubernetes# kubectl get pod -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper-6d656c9966-ltzq8   1/1   Running   0   13m
kubernetes-dashboard   kubernetes-dashboard-df-d56x7                1/1   Running   0   13m

2.8 Verify the NodePort

root@kubeadm-master1:/data/dashboard# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   172.26.0.1      <none>        443/TCP                  79m
kube-system            kube-dns                    ClusterIP   172.26.0.10     <none>        53/UDP,53/TCP,9153/TCP   79m
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   172.26.23.205   <none>        8000/TCP                 12m
kubernetes-dashboard   kubernetes-dashboard        NodePort    172.26.15.39    <none>        443:30002/TCP            12m

2.9 Log in to the Dashboard
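  • Given the NodePort from section 2.8, the dashboard should be reachable at https://<any-node-IP>:30002, e.g. https://172.16.62.31:30002 (a self-signed-certificate warning is expected; this access note is an addition, not in the original).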


2.9.1 Log in with a Token


2.9.2 Log in with a kubeconfig File


2.10 Dashboard Login Successful


2.11 View Pods


3. Kubernetes Cluster Upgrade

3.1 Upgrade Steps

1. Check available versions: apt-cache madison kubeadm
2. Check the upgrade plan: kubeadm upgrade plan
3. Adjust the keepalived + haproxy configuration: take whichever node is being upgraded out of the load-balancer pool
4. Upgrade the master nodes:
   # install the target kubeadm version
   apt-get install kubeadm=1.17.4-00
   # run the upgrade
   kubeadm upgrade apply v1.17.4
   # then upgrade kubelet and kubectl on the masters
   apt install kubelet=1.17.4-00 kubectl=1.17.4-00
5. Upgrade the worker nodes:
   # install the target versions
   apt install kubeadm=1.17.4-00 kubelet=1.17.4-00
   # run the node upgrade
   kubeadm upgrade node --kubelet-version 1.17.4
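The list above takes a node out of rotation via haproxy; the upstream kubeadm upgrade procedure additionally drains each node before upgrading it and uncordons it afterwards. A hedged sketch, using one of this cluster's nodes as an example:

kubectl drain node1.haostack.com --ignore-daemonsets   # evict regular pods before upgrading
# ... perform the upgrade on the node ...
kubectl uncordon node1.haostack.com                    # allow scheduling again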

3.2 Check kubeadm Versions

root@kubeadm-master1:/data/dashboard# apt-cache madison kubeadm
   kubeadm | 1.18.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.4-01 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.18.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.9-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.8-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.7-01 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.7-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm | 1.17.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages

3.3 Check the Upgrade Plan

  • The plan recommends upgrading with kubeadm upgrade apply v1.17.9
root@kubeadm-master1:/# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.2
[upgrade/versions] kubeadm version: v1.17.2
I0725 12:47:05. 27423 version.go:251] remote version is much newer: v1.18.6; falling back to: stable-1.17
[upgrade/versions] Latest stable version: v1.17.9
[upgrade/versions] Latest version in the v1.17 series: v1.17.9

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.17.2   v1.17.9

Upgrade to the latest version in the v1.17 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.2   v1.17.9
Controller Manager   v1.17.2   v1.17.9
Scheduler            v1.17.2   v1.17.9
Kube Proxy           v1.17.2   v1.17.9
CoreDNS              1.6.5     1.6.5
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.17.9

Note: Before you can perform this upgrade, you have to update kubeadm to v1.17.9.

_____________________________________________________________________

root@kubeadm-master1:/#

3.4 Upgrade kubeadm

  • All master and node hosts must be upgraded

3.4.1 Install

  • Install command: apt-get install kubeadm=1.17.4-00
# Install on all 3 master nodes
root@kubeadm-master2:/etc/kubernetes# apt-get install kubeadm=1.17.4-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be DOWNGRADED:
  kubeadm
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 58 not upgraded.
Need to get 8,064 kB of archives.
After this operation, 4,096 B disk space will be freed.
Do you want to continue? [Y/n] Y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.4-00 [8,064 kB]
Fetched 8,064 kB in 0s (20.7 MB/s)
dpkg: warning: downgrading kubeadm from 1.17.9-00 to 1.17.4-00
(Reading database ...  files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.4-00_amd64.deb ...
Unpacking kubeadm (1.17.4-00) over (1.17.9-00) ...
Setting up kubeadm (1.17.4-00) ...

3.4.2 Run the kubeadm Upgrade

  • Upgrade command: kubeadm upgrade apply v1.17.4
  • Because the upgrade to v1.17.9 kept timing out, this walkthrough upgrades to v1.17.4 instead
# All master nodes must be upgraded
# master1
root@kubeadm-master1:/data/scripts# kubeadm upgrade apply v1.17.4
# master2
root@kubeadm-master2:/etc/kubernetes# kubeadm upgrade apply v1.17.4
# master3
root@kubeadm-master3:/etc/kubernetes# kubeadm upgrade apply v1.17.4

3.4.3 Upgrade kubelet and kubectl

  • On all master nodes
apt install kubelet=1.17.4-00 kubectl=1.17.4-00      # install the target versions
systemctl daemon-reload && systemctl restart kubelet  # restart kubelet so the new version takes over

3.4.4 Upgrade the Worker Nodes

  • Worker nodes only run the kubeadm and kubelet components, so only these two need upgrading
apt install kubeadm=1.17.4-00 kubelet=1.17.4-00   # install
kubeadm upgrade node --kubelet-version 1.17.4     # upgrade

3.4.5 Upgrade Output

# master upgrade output
root@kubeadm-master1:/data/scripts# kubeadm upgrade apply v1.17.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.4"
[upgrade/versions] Cluster version: v1.17.4
[upgrade/versions] kubeadm version: v1.17.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.4"...
Static pod: kube-apiserver-kubeadm-master1.haostack.com hash: b5e3b0092320eb9795f14c21b091fb3f
Static pod: kube-controller-manager-kubeadm-master1.haostack.com hash: bd130f2488b3f5237c32145a1856eaa6
Static pod: kube-scheduler-kubeadm-master1.haostack.com hash: 92eb55a1e38e273dab9a560bc
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.17.4" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests"
W0725 17:37:49.346 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-25-17-37-47/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-kubeadm-master1.haostack.com hash: b5e3b0092320eb9795f14c21b091fb3f
Static pod: kube-apiserver-kubeadm-master1.haostack.com hash: 9a6e25f7f4f72df425fb7b61b7dc9c94
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-25-17-37-47/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-kubeadm-master1.haostack.com hash: bd130f2488b3f5237c32145a1856eaa6
Static pod: kube-controller-manager-kubeadm-master1.haostack.com hash: 0c38f39e720ba377cf3e5bb783dab2b8
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-25-17-37-47/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-kubeadm-master1.haostack.com hash: 92eb55a1e38e273dab9a560bc
Static pod: kube-scheduler-kubeadm-master1.haostack.com hash: 51b669bfc8f29eab8a4b7da81
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.4". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@kubeadm-master1:/data/scripts#

# node upgrade output
root@node3:/# kubeadm upgrade node --kubelet-version 1.17.4
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Using kubelet config version 1.17.4, while kubernetes-version is v1.17.4
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

3.5 Verifying versions

  • kubeadm on all nodes is now v1.17.4:
root@kubeadm-master1:/data/scripts# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aaad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:01:11Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
root@kubeadm-master1:/data/scripts#
  • Node versions as reported by kubectl:
root@kubeadm-master1:/data/scripts# kubectl get node
NAME                           STATUS   ROLES    AGE     VERSION
kubeadm-master1.haostack.com   Ready    master   6h53m   v1.17.4
kubeadm-master2.haostack.com   Ready    master   6h31m   v1.17.4
kubeadm-master3.haostack.com   Ready    master   6h28m   v1.17.4
node1.haostack.com             Ready    <none>   6h20m   v1.17.4
node2.haostack.com             Ready    <none>   6h20m   v1.17.4
node3.haostack.com             Ready    <none>   6h20m   v1.17.4
root@kubeadm-master1:/data/scripts#
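For a quick scripted check that every node reports the same kubelet version, a jsonpath one-liner works as well:

# prints one "<node> <kubeletVersion>" pair per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'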

4. Running Nginx + Tomcat

4.1 The nginx.yml file

root@kubeadm-master1:/data/nginx# more nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.haostack.com/baseimages/jack_nginx_yum:v1   # internal registry
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: app-nginx-service-label
  name: app-nginx-service        # service name
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80                     # service port
    protocol: TCP
    targetPort: 80               # nginx container port
    nodePort: 30007              # NodePort exposed on every node
  selector:
    app: nginx
root@kubeadm-master1:/data/nginx#

4.2 The tomcat.yml file

root@kubeadm-master1:/data/tomcat# more tomcat.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: harbor.haostack.com/baseimages/jack_tomcat_app1:v1
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: app-tomcat-service-label
  name: app-tomcat-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30006
  selector:
    app: tomcat
root@kubeadm-master1:/data/tomcat#

4.3 Creating the pods

root@kubeadm-master1:/data/nginx# kubectl apply -f nginx.yml
root@kubeadm-master1:/data/tomcat# kubectl apply -f tomcat.yml
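kubectl apply returns as soon as the objects are accepted; to block until the deployments have actually finished rolling out, rollout status can be used:

kubectl rollout status deployment/nginx-deployment
kubectl rollout status deployment/tomcat-deployment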

4.4 Viewing the pods

  • Every pod is in the Running state:
root@kubeadm-master1:~# kubectl get pod -o wide
NAME                                 READY   STATUS    RESTARTS   AGE    IP          NODE                 NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-2452n           1/1     Running   0          9h     10.10.4.2   node2.haostack.com   <none>           <none>
net-test1-5fcc69db59-fh4fb           1/1     Running   0          9h     10.10.3.2   node1.haostack.com   <none>           <none>
net-test2-8456fd74f7-5jx5f           1/1     Running   0          9h     10.10.4.3   node2.haostack.com   <none>           <none>
net-test2-8456fd74f7-8k9mn           1/1     Running   0          9h     10.10.3.3   node1.haostack.com   <none>           <none>
net-test2-8456fd74f7-r5kfh           1/1     Running   0          9h     10.10.5.2   node3.haostack.com   <none>           <none>
nginx-deployment-555f58994f-dpjk7    1/1     Running   0          149m   10.10.3.5   node1.haostack.com   <none>           <none>
tomcat-deployment-6f566f49f8-6z2h7   1/1     Running   0          27m    10.10.4.6   node2.haostack.com   <none>           <none>

4.5 Viewing the services

  • app-nginx-service and app-tomcat-service:
root@kubeadm-master1:~# kubectl get service -o wide
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE   SELECTOR
app-nginx-service    NodePort    172.26.183.218   <none>        80:30007/TCP   15m   app=nginx
app-tomcat-service   NodePort    172.26.160.109   <none>        80:30006/TCP   17m   app=tomcat
kubernetes           ClusterIP   172.26.0.1       <none>        443/TCP        10h   <none>
root@kubeadm-master1:~#

# the service forwards requests to the backend tomcat on port 8080
root@kubeadm-master1:~# kubectl describe service app-tomcat-service
Name:                     app-tomcat-service
Namespace:                default
Labels:                   app=app-tomcat-service-label
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"app-tomcat-service-label"},"name":"app-tomcat-service","...
Selector:                 app=tomcat
Type:                     NodePort
IP:                       172.26.160.109
Port:                     http  80/TCP
TargetPort:               8080/TCP
NodePort:                 http  30006/TCP
Endpoints:                10.10.4.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

root@kubeadm-master1:~# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-2452n           1/1     Running   0          9h
net-test1-5fcc69db59-fh4fb           1/1     Running   0          9h
net-test2-8456fd74f7-5jx5f           1/1     Running   0          9h
net-test2-8456fd74f7-8k9mn           1/1     Running   0          9h
net-test2-8456fd74f7-r5kfh           1/1     Running   0          9h
nginx-deployment-555f58994f-dpjk7    1/1     Running   0          158m
tomcat-deployment-6f566f49f8-6z2h7   1/1     Running   0          36m
root@kubeadm-master1:~#
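Because both services are of type NodePort, they are reachable on any node's IP at the mapped port even before the haproxy layer is added; a quick sanity check from any host that can reach the nodes:

# nginx via its NodePort
curl http://172.16.62.31:30007/
# tomcat via its NodePort
curl http://172.16.62.31:30006/app1/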

4.6 Configuring nginx

  • Add the following to the nginx configuration file:
server {
    listen       80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location /app1 {
        proxy_pass http://app-tomcat-service;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
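The proxy_pass target app-tomcat-service is resolved through the cluster DNS (CoreDNS), which is why the bare service name works from inside the nginx pod. After editing the file, the configuration can be validated and reloaded without restarting the container (a sketch, assuming nginx is on the pod's PATH):

# open a shell in the nginx pod (pod name from section 4.4)
kubectl exec -it nginx-deployment-555f58994f-dpjk7 -- sh

# then, inside the pod:
nginx -t          # validate the edited configuration
nginx -s reload   # reload the workers in place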

4.7 Reverse proxying with haproxy

  • haproxy and keepalived provide a highly available reverse proxy in front of the business Pods running in the kubernetes cluster.
  • haproxy configuration:
listen k8s-pod-nginx
    bind 172.16.62.201:80
    mode tcp
    balance roundrobin
    server node1 172.16.62.31:30007 check inter 3s fall 3 rise 5
    server node2 172.16.62.32:30007 check inter 3s fall 3 rise 5
    server node3 172.16.62.33:30007 check inter 3s fall 3 rise 5
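As with the API-server frontend, the file can be syntax-checked before reloading, which avoids taking the proxy down with a typo:

# validate, then reload without dropping established connections
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl reload haproxy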
  • keepalived VIP configuration (a sketch follows below)
  • www.haostack.com already resolves to 172.16.62.201 in DNS
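The keepalived stanza for this second VIP is not shown in the original; a minimal sketch, assuming the same ens160 interface as the API-server VIP (virtual_router_id, priority, and the password are illustrative, and must match on both ha nodes apart from priority):

vrrp_instance VI_201 {
    state BACKUP
    interface ens160
    virtual_router_id 23            # must differ from the 6443 VIP instance
    priority 80                     # higher on the node that should own the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111              # placeholder secret
    }
    virtual_ipaddress {
        172.16.62.201 dev ens160 label ens160:2
    }
}

With the VIP up and DNS pointing at it, repeated requests round-robin across the nodes: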
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com
docker nginx pod2
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com
docker nginx pod3
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com
docker nginx pod2
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com
docker nginx pod1
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com
docker nginx pod3
root@kubeadm-master3:/etc/kubernetes#

4.8 Testing Nginx + Tomcat

  • Enter the nginx pod and test access to tomcat:
root@kubeadm-master1:/data/nginx# docker exec -it nginx-deployment-555f58994f-dpjk7 sh
Error: No such container: nginx-deployment-555f58994f-dpjk7
root@kubeadm-master1:/data/nginx# kubectl  exec -it nginx-deployment-555f58994f-dpjk7 sh
sh-4.2# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.10.3.5  netmask 255.255.255.0  broadcast 10.10.3.255
        ether f2:6d:71:27:f3:57  txqueuelen 0  (Ethernet)
        RX packets 4486  bytes 312171 (304.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2426  bytes 176207 (172.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 24  bytes 1964 (1.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24  bytes 1964 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sh-4.2# ping 10.10.4.6
PING 10.10.4.6 (10.10.4.6) 56(84) bytes of data.
64 bytes from 10.10.4.6: icmp_seq=1 ttl=62 time=0.960 ms
64 bytes from 10.10.4.6: icmp_seq=2 ttl=62 time=0.710 ms
# test tomcat
sh-4.2# curl 10.10.4.6:8080/app1/index.html
tomcat app1 web server 
docker
sh-4.2#
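The initial docker exec above fails because docker only sees containers on the local node, and this pod is scheduled on node1 rather than on the master where the command was issued; kubectl exec works from anywhere with cluster access. To find which node actually hosts a pod:

kubectl get pod nginx-deployment-555f58994f-dpjk7 -o jsonpath='{.spec.nodeName}{"\n"}'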


  • Access the nginx default page:
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com
docker nginx pod1
  • Access the tomcat page directly via its NodePort:
root@kubeadm-master3:/etc/kubernetes# curl 172.16.62.31:30006/app1/
tomcat app1 web server
docker
root@kubeadm-master3:/etc/kubernetes#
  • Access tomcat through the nginx proxy:
root@kubeadm-master3:/etc/kubernetes# curl www.haostack.com/app1/
tomcat app1 web server
docker
root@kubeadm-master3:/etc/kubernetes#