Setting up Kubernetes

I. Preparation

1. Check the OS version:

cat /etc/redhat-release

2. Update the system:

yum update -y

3. Set the hostname:

hostnamectl set-hostname master

4. Update the hosts file on every machine:

vi /etc/hosts

10.0.90.217 master
10.0.90.218 node01
10.0.90.219 node02
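The loop below is a sketch (not part of the original steps) that appends the three entries idempotently; HOSTS_FILE defaults to a scratch path here so it can be tried safely before touching the real /etc/hosts:

```shell
# Append each cluster entry only if it is not already present.
# HOSTS_FILE defaults to a scratch file (an assumption, for safe
# experimentation); on the real machines it would be /etc/hosts.
HOSTS_FILE=${HOSTS_FILE:-/tmp/hosts.k8s}
touch "$HOSTS_FILE"
for entry in "10.0.90.217 master" "10.0.90.218 node01" "10.0.90.219 node02"; do
  grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
```

Running it a second time adds nothing, so it is safe to re-run on a machine that already has the entries.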

5. Disable the firewall:

systemctl disable firewalld.service
systemctl stop firewalld.service

6. Disable NetworkManager:

systemctl disable NetworkManager.service
systemctl stop NetworkManager.service

7. Check the OS version:

cat /etc/redhat-release

8. Disable SELinux:

getenforce
setenforce 0

To disable it permanently:

vi /etc/selinux/config

SELINUX=disabled

9. Set the bridge netfilter kernel parameters:

  • Disabling SELinux by running setenforce 0 is required to allow containers to access the host filesystem, which is required by pod networks for example. You have to do this until SELinux support is improved in the kubelet.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

  • Load every sysctl configuration file on the system:

sysctl --system

10. Temporarily turn off all swap devices:

swapoff -a

  • To disable swap permanently, comment out the swap entry:

vi /etc/fstab

# /dev/mapper/sysvg-swaplv swap
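The fstab edit can also be done with sed instead of by hand. A sketch that works on a copy in /tmp first (the copy step is an extra precaution, not part of the original procedure):

```shell
# Work on a copy of /etc/fstab; fall back to a sample file when /etc/fstab
# is not readable (e.g. when trying this outside the target host).
cp /etc/fstab /tmp/fstab.new 2>/dev/null || \
  printf '/dev/mapper/sysvg-swaplv swap swap defaults 0 0\n' > /tmp/fstab.new
# Comment out every uncommented line that mentions swap.
sed -i '/swap/s/^[^#]/#&/' /tmp/fstab.new
# Inspect the result before copying it back over /etc/fstab.
cat /tmp/fstab.new
```

Only after checking the output would the file be copied back over /etc/fstab.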

II. Installing Docker via yum

1. Configure the yum repository:

vi /etc/yum.repos.d/docker-main.repo

[docker-main-repo]
name=Docker main Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/docker-engine/yum/gpg

2. List the versions available in the yum repo:

yum list docker-engine-selinux --showduplicates | sort -r

3. Install Docker:

yum install -y docker-engine-selinux-17.03.1.ce-1.el7.centos

List the docker-engine versions in the repo:

yum list docker-engine --showduplicates | sort -r

Install:

yum install -y docker-engine-17.03.1.ce-1.el7.centos

4. Enable and start Docker on boot:

systemctl enable docker.service
systemctl start docker.service

III. Installing Kubernetes

1. Configure the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Check the available Kubernetes versions:

yum list kubelet --showduplicates | sort -r
yum list kubeadm --showduplicates | sort -r
yum list kubectl --showduplicates | sort -r

3. Install Kubernetes:

yum install -y kubelet-1.11.0-0 kubeadm-1.11.0-0 kubectl-1.11.0-0

4. Enable and start kubelet on boot:

systemctl enable kubelet.service
systemctl start kubelet.service

5. Configure the master node:

(change advertiseAddress to the master's IP)

vi /kubeadm/kubeadm-master.config

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.0
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
api:
  advertiseAddress: 10.0.90.217

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

networking:
  podSubnet: 10.211.0.0/16
  serviceSubnet: 10.96.0.0/16

kubeProxy:
  config:
    mode: iptables

Check the kubelet drop-in, then set the extra kubelet arguments:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"

6. Restart the kubelet:

systemctl daemon-reload
systemctl restart kubelet.service

7. Pull the images:

kubeadm config images pull --config kubeadm-master.config

  • If that fails for whatever reason, pull the images manually (on every node):

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3

  • On every node:

docker pull quay.io/calico/typha:v0.7.4
docker pull quay.io/calico/node:v3.1.3
docker pull quay.io/calico/cni:v3.1.3

  • On the remaining masters as well.

8. Initialize the cluster:

kubeadm init --config kubeadm/kubeadm-master.config
  • Save the output of kubeadm init; the highlighted values (in particular the join command at the end) will be needed later:

I0910 02:21:32.639267  127417 feature_gate.go:230] feature gates: &{map[]}
[init] using Kubernetes version: v1.11.0
[preflight] running pre-flight checks
I0910 02:21:32.662821 127417 kernel_validator.go:81] Validating kernel version
I0910 02:21:32.662876 127417 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [server01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.170.128]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [server01 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [server01 localhost] and IPs [172.16.170.128 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 37.004312 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node server01 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node server01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server01" as an annotation
[bootstraptoken] using token: rbwvyz.pv9yna3slu6y0gg4
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.16.170.128:6443 --token rbwvyz.pv9yna3slu6y0gg4 --discovery-token-ca-cert-hash sha256:884bad27ceb666fdd0ee58dc291a7d361348b449e2d55a1d29ab09386df07265

9. Copy the Calico manifests to the master:

scp calico.yaml root@10.0.90.217:~
scp rbac-kdd.yaml root@10.0.90.217:~

10. Create the kubeconfig

  • (under the kubeadm directory):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

11. Create the pod network:

kubectl create -f kubeadm/calico/rbac-kdd.yaml
kubectl create -f kubeadm/calico/calico.yaml

12. Join the nodes:

kubeadm join 172.16.170.128:6443 --token rbwvyz.pv9yna3slu6y0gg4 --discovery-token-ca-cert-hash sha256:884bad27ceb666fdd0ee58dc291a7d361348b449e2d55a1d29ab09386df07265

Adding nodes after the kubeadm-generated token has expired

  • A token is valid for 24 hours by default; once expired it can no longer be used, so generate a new one:

kubeadm token create

  • Get the sha256 hash of the CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
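The openssl pipeline above can be wrapped in a small function so it is easy to reuse against any CA file; the function name is just for illustration, and on the master the argument would be /etc/kubernetes/pki/ca.crt:

```shell
# Print the sha256 hash of a CA certificate's public key, i.e. the value
# passed to kubeadm join as --discovery-token-ca-cert-hash (sha256:<hash>).
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

The result is a 64-character lowercase hex string, prefixed with `sha256:` when passed to kubeadm join.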

13. Install NFS (reusing the master node):

yum install -y nfs-utils rpcbind

  • vi /etc/exports

/data 10.0.90.217(rw,async,no_root_squash)

14. Enable and start the services:

systemctl start rpcbind
systemctl start nfs-server
systemctl enable nfs-server
systemctl enable rpcbind

On the nodes:

yum install -y nfs-utils

15. Test the mount:

showmount -e 10.0.90.217
mount -t nfs 10.0.90.217:/data /mnt

  • Check the mount:

df -h

  • Unmount:

umount 10.0.90.217:/data

16. Add NFS provisioner support

  • scp from the sources host (154):

scp external-storage.tar.gz root@10.0.90.217:~

(Kubernetes has no built-in NFS provisioner; external-storage.tar.gz provides one)

  • Unpack:

tar -zxvf external-storage.tar.gz

  • Enter the deploy directory:

cd external-storage/nfs-client/deploy

  • Apply the manifests:

kubectl create -f rbac.yaml
kubectl create -f deployment.yaml
kubectl create -f class.yaml
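Once class.yaml is applied, PVCs can reference the new StorageClass and the provisioner will carve volumes out of the NFS export. A hypothetical claim as illustration; the class name managed-nfs-storage is the default in the external-storage nfs-client deploy files and may differ in your copy:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim                        # hypothetical name, for illustration
spec:
  storageClassName: managed-nfs-storage   # must match the name in class.yaml
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

If the claim binds (kubectl get pvc shows Bound), the provisioner is working.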

17. Configure Docker:

  • cat /etc/docker/daemon.json
    (push it from 154 to every node)

scp /etc/docker/daemon.json root@10.0.90.218:/etc/docker/

  • On the master:

kubectl create -f class.yaml
kubectl create -f deployment.yaml

18. Set up cephfs:

cd /sources/external-storage/ceph/cephfs
kubectl create -f deploy/rbac
cd example
kubectl create -f class.yaml

VII. Cleaning up a broken Docker environment (updated 2018-11-06)

rm -rf /etc/yum.repos.d/docker-ce.repo
yum remove -y docker-ce docker-ce-cli container-selinux
yum clean all
rm -rf /var/cache/yum/
yum makecache fast
rm -rf /var/lib/docker/

VIII. Tuning host parameters for Kubernetes

1. Making the ulimit -n change permanent

cat <<EOF >  /etc/security/limits.d/k8s.conf
* hard nofile 65536
* soft nofile 65536
EOF
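As a quick sanity check, each non-comment line in a limits file should have the four fields pam_limits expects (domain, type, item, value). A sketch that validates the fragment above, written to /tmp so it can run anywhere; the real file is /etc/security/limits.d/k8s.conf:

```shell
# Write the fragment to a scratch path and verify every non-comment,
# non-empty line has exactly four fields.
cat <<EOF > /tmp/k8s-limits.conf
* hard nofile 65536
* soft nofile 65536
EOF
awk 'NF && $1 !~ /^#/ && NF != 4 { bad = 1 } END { exit bad }' /tmp/k8s-limits.conf \
  && echo "limits file OK"
```

The new limit only applies to sessions started after the file is in place (log out and back in to see it with `ulimit -n`).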

References:
https://www.aliyun.com/jiaocheng/208647.html
https://www.aliyun.com/jiaocheng/124954.html
https://www.cnblogs.com/xuexiaohun/articles/6233430.html
https://my.oschina.net/jxcdwangtao/blog/1621106

2. Kernel parameters changed via sysctl:

The complete parameter list:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

vm.swappiness = 0
net.ipv4.ip_forward = 1

net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_recycle = 1

net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_tw_reuse = 1
EOF

sysctl --system

Alternatively (listing only the parameters relevant to improving performance-test results):

sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1

IX. Time zone settings for Docker containers

Using CentOS 7.5.1804 as the container host, this section shows how to set the time zone correctly in plain Docker containers and in Kubernetes Pods.

Setting the correct time zone on the node (CentOS 7.5.1804 as an example)

timedatectl list-timezones                  # list available time zones
timedatectl set-timezone Asia/Shanghai      # set the time zone to Asia/Shanghai
timedatectl status | grep 'Time zone'       # show the current time zone

Note: this sets the correct time zone on the host for both Docker containers and Kubernetes Pods, and should be done first.

Syncing a Docker container's time zone with the node

docker run -it -v /etc/localtime:/etc/localtime --rm alpine:3.8 /bin/sh

Note the -v bind-mount in the command above.
Summary: verified to behave the same with alpine:3.8, centos:7.5.1804, and ubuntu:16.04.

Syncing a Kubernetes Pod's time zone with the node

apiVersion: v1
kind: Pod
metadata:
  name: alpine
  labels:
    app: alpine
spec:
  containers:
  - image: alpine:3.8
    imagePullPolicy: IfNotPresent
    name: alpine
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: tz-config
      mountPath: /etc/localtime
      readOnly: true
  restartPolicy: Always
  volumes:
  - name: tz-config
    hostPath:
      path: /etc/localtime
Note the volumes and volumeMounts sections in the YAML above.
Summary: verified to behave the same with alpine:3.8, centos:7.5.1804, and ubuntu:16.04.

Syncing the time zone with the node for both Docker containers and Kubernetes Pods

This approach is used when building the Docker image: it changes the image's default time zone.

Alpine Linux 3.8 Dockerfile example:

FROM alpine:3.8

MAINTAINER wangxin_0611@126.com

RUN apk --no-cache add tzdata && \
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
Ubuntu Linux 16.04 Dockerfile example:

FROM ubuntu:16.04

MAINTAINER wangxin_0611@126.com

RUN apt-get update && \
apt-get install -y tzdata && \
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
CentOS Linux 7.5.1804 Dockerfile example:

FROM centos:7.5.1804

MAINTAINER wangxin_0611@126.com

RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

References:

Docker修改默认时区 (Changing Docker's default time zone)

【Kubernetes】同步pod时区,与node主机保持一致 ([Kubernetes] Syncing the Pod time zone with the node host)