Setting Up a Kubernetes Cluster with kubeadm

1. Environment preparation

1.1. Cloud server preparation
| IP Address | Node Role       | CPU  | Memory | Hostname |
| ---------- | --------------- | ---- | ------ | -------- |
| 10.0.1.9   | master and etcd | >=2c | >=2G   | master   |
| 10.0.1.5   | node            | >=2c | >=2G   | node1    |
1.2.软件版本
系统类型
Kubernetes版本
docker版本
kubeadm版本
kubectl版本
kubelet版本
CentOS 7.6
v1.17.4
19.03.8-ce
v1.17.4
v1.17.4
v1.17.4
1.3. Initialize the cloud servers

1.3.1. Set the hostname

```bash
hostnamectl set-hostname master
```
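The worker node gets its own hostname the same way, matching the table in 1.1:

```bash
hostnamectl set-hostname node1
```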
1.3.2. Edit the hosts file

```bash
vim /etc/hosts

10.0.1.9 master
10.0.1.5 node1
```
1.3.3. Disable SELinux

```bash
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```
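A quick check that the change took effect (`setenforce 0` switches the running system to permissive mode; the `sed` edit keeps it disabled after a reboot):

```bash
getenforce
grep SELINUX= /etc/selinux/config
```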
1.3.4. Disable swap

```bash
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
```
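The kubelet refuses to start while swap is active, so it is worth verifying that none is left:

```bash
free -h     # the Swap line should show 0
swapon -s   # should print nothing
```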
1.3.5. Configure kernel network parameters

```bash
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
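To confirm the br_netfilter module is loaded and the settings were applied:

```bash
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables   # should report 1
```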
1.3.6. Configure the yum repositories

```bash
yum install wget -y
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache

# yum-config-manager is provided by yum-utils; install it first if it is missing
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
1.3.7. Install Docker

```bash
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce
systemctl start docker
systemctl enable docker
docker version
```

Switch Docker to the systemd cgroup driver and add a registry mirror:

```bash
vim /etc/docker/daemon.json
```

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://mj9kvemk.mirror.aliyuncs.com"]
}
```

```bash
systemctl daemon-reload
systemctl restart docker
```
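After the restart, confirm the cgroup driver actually changed; kubeadm warns during preflight when Docker and the kubelet use different cgroup drivers:

```bash
docker info | grep -i "cgroup driver"   # expect: Cgroup Driver: systemd
```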
1.3.8. Configure the Kubernetes yum repository

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
1.3.9. Install kubelet, kubeadm and kubectl

```bash
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet.service
```
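The command above pulls whatever is newest in the repository. To install exactly the versions listed in 1.2, the packages can be pinned instead (assuming the mirror still carries the 1.17.4 builds):

```bash
yum install -y kubelet-1.17.4 kubeadm-1.17.4 kubectl-1.17.4
systemctl enable kubelet.service
```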
2. Initialize the cluster (master node only)

2.1. Create the initialization configuration file

```bash
kubeadm config print init-defaults > kubeadm-config.yaml
```
2.2. Adjust the configuration for your environment

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.1.9
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: gcr.azk8s.cn/google-containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.3
networking:
  dnsDomain: cluster.local
  podSubnet: "192.168.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```
Configuration notes:

- imageRepository: point this at an image registry that is reachable from your environment.
- podSubnet: the CIDR must match the network add-on deployed later (10.244.0.0/16 for flannel, 192.168.0.0/16 for calico).
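Before running init, the required control-plane images can optionally be listed and pre-pulled with the same configuration file, which surfaces registry problems early and shortens the init step:

```bash
kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml
```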
2.3. Run the initialization

```bash
kubeadm init --config=kubeadm-config.yaml
```

On success the output ends with a join command for the worker nodes; save it for step 3:

```bash
kubeadm join 10.0.1.9:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:b617794af7644843a3dd1104d717686fb31b9c295c7636c2b664b253e0fa6128
```

Then configure kubectl for the current user:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
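The bootstrap token expires after the 24h ttl set in the configuration. If it has expired or the join command was lost, a fresh one can be generated on the master:

```bash
kubeadm token create --print-join-command
```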
3. Join the worker node (this assumes the preparation steps from section 1 have already been done on node1)

```bash
kubeadm join 10.0.1.9:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:b617794af7644843a3dd1104d717686fb31b9c295c7636c2b664b253e0fa6128
```

Expected output:

```
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
```
4. Verify the installation progress on the master node

```bash
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

kubectl get pod -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-j9g8d       1/1     Running   0          128m
coredns-86c58d9df4-pg45w       1/1     Running   0          128m
etcd-k8s1                      1/1     Running   0          127m
kube-apiserver-k8s1            1/1     Running   0          127m
kube-controller-manager-k8s1   1/1     Running   0          127m

kubectl get node
NAME     STATUS     ROLES    AGE    VERSION
master   NotReady   master   131m   v1.17.4
node1    NotReady   <none>   93m    v1.17.4
```
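Both nodes report NotReady at this stage simply because no pod network add-on has been installed yet; the reason is visible in each node's Ready condition and clears up after step 5:

```bash
kubectl describe node node1 | grep -i -A 3 ready
```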
5. Install a cluster network add-on (master node only)

Install only one of the two plugins below.
5.1. Install the Flannel network add-on

```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

If the host has more than one network interface, point flanneld at the right one in kube-flannel.yml:

```yaml
...
      command:
      - /opt/bin/flanneld
      args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=eth0
...
```

```bash
kubectl apply -f kube-flannel.yml

kubectl get pod -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-j9g8d       1/1     Running   0          128m
coredns-86c58d9df4-pg45w       1/1     Running   0          128m
etcd-k8s1                      1/1     Running   0          127m
kube-apiserver-k8s1            1/1     Running   0          127m
kube-controller-manager-k8s1   1/1     Running   0          127m
kube-flannel-ds-amd64-7btlw    1/1     Running   0          91m
kube-flannel-ds-amd64-9vq42    1/1     Running   0          106m
kube-flannel-ds-amd64-kdf42    1/1     Running   0          90m
kube-proxy-dtmfs               1/1     Running   0          128m
kube-proxy-p76tc               1/1     Running   0          90m
kube-proxy-xgw28               1/1     Running   0          91m
kube-scheduler-k8s1            1/1     Running   0          128m
```
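Once the kube-flannel-ds pods are Running on every node, master and node1 should move from NotReady to Ready, which can be watched with:

```bash
kubectl get node -w
```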
5.2. Install Calico

```bash
wget https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f calico.yaml

kubectl get pod -n kube-system
```
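As with Flannel, the nodes become Ready once the Calico pods are up. Assuming the manifest labels its DaemonSet pods with k8s-app=calico-node (recent calico.yaml versions do), progress can be checked with:

```bash
kubectl get pod -n kube-system -l k8s-app=calico-node
kubectl get node
```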