Configuration requirements
Role | Hostname | IP address |
---|---|---|
master | k8s-master1 | 192.168.106.10 |
node | k8s-node1 | 192.168.106.20 |
node | k8s-node2 | 192.168.106.30 |
Environment preparation (all nodes)
Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent, takes effect after reboot
setenforce 0 # temporary, takes effect immediately
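Note that the broad `s/enforcing/disabled/` pattern also rewrites the comment lines in /etc/selinux/config that happen to mention "enforcing"; anchoring the pattern to the setting line avoids that. A quick sketch on a throwaway copy (the `/tmp` path and file contents are illustrative):

```shell
# Demo on a disposable copy of a typical /etc/selinux/config
cat > /tmp/selinux-config.demo <<'EOF'
#     enforcing - SELinux security policy is enforced.
SELINUX=enforcing
SELINUXTYPE=targeted
EOF
# Anchored pattern: only the real setting line changes, the comment is left alone
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /tmp/selinux-config.demo
grep '^SELINUX=' /tmp/selinux-config.demo   # prints: SELINUX=disabled
```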
Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
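In the sed above, `&` re-inserts the whole matched line, so every /etc/fstab line containing "swap" ends up commented out. A sketch of the effect on a throwaway copy (the device names are illustrative):

```shell
# Demo the fstab edit on a disposable copy
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line is now prefixed with '#'
```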
Set the hostnames
[root@k8s-master1 ~]# hostnamectl set-hostname k8s-master1
--------------------------------
[root@k8s-node1 ~]# hostnamectl set-hostname k8s-node1
--------------------------------
[root@k8s-node2 ~]# hostnamectl set-hostname k8s-node2
Add hostname-to-IP mappings
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.106.10 k8s-master1
192.168.106.20 k8s-node1
192.168.106.30 k8s-node2
Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter # load the bridge netfilter module so the keys exist
sysctl --system # apply the new settings
Install Docker
yum install wget -y
rm -rf /etc/yum.repos.d/* # caution: removes the stock repo files; back them up first if you may need them
wget -O /etc/yum.repos.d/centos7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-20.10.11 -y
Add the Kubernetes yum repository and install kubeadm, kubelet, and kubectl
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2 -y
systemctl enable kubelet && systemctl start kubelet
# If the kubelet service fails to start on node1 and node2 at this point, that is expected: they have not joined the cluster yet, and kubelet will keep restarting until kubeadm join is run.
Cluster deployment (master node)
Adjust the Docker configuration
[root@k8s-master1 ~]# systemctl start docker
[root@k8s-master1 ~]# systemctl enable docker
[root@k8s-master1 ~]# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl restart docker.service
[root@k8s-master1 ~]# systemctl restart kubelet.service
[root@k8s-master1 ~]# systemctl status kubelet.service
# If the service fails to start, the configuration file above was written incorrectly
Configure a Docker registry mirror (image acceleration)
# Follow your mirror provider's instructions and add your own accelerator address to /etc/docker/daemon.json.
# Note that JSON entries are comma-separated: the "registry-mirrors" entry follows the
# "exec-opts" line added above, inside the same {}. For example (replace the placeholder
# with your own mirror URL):
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
}
[root@k8s-master1 ~]# sudo systemctl daemon-reload
[root@k8s-master1 ~]# sudo systemctl restart docker
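Since a missing comma or stray quote in daemon.json is the usual reason the Docker restart fails, it is worth validating the file's JSON syntax before restarting. A minimal sketch using Python's standard `json.tool` module (assumed available), demonstrated on a throwaway copy with a placeholder mirror URL; on the real host, point it at /etc/docker/daemon.json:

```shell
# Write a demo copy of the expected daemon.json
cat > /tmp/daemon.json.demo <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://example.mirror.aliyuncs.com"]
}
EOF
# json.tool exits non-zero on malformed JSON, so this catches typos early
python3 -m json.tool /tmp/daemon.json.demo > /dev/null \
  && echo "daemon.json: valid JSON"
```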
Deploy the master
[root@k8s-master1 ~]# kubeadm init \
--apiserver-advertise-address=192.168.106.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.2 \
--control-plane-endpoint k8s-master1 \
--service-cidr=172.16.0.0/16 \
--pod-network-cidr=10.244.0.0/16
.......
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-master1:6443 --token 0re1oq.he6o0ab4mqtjtg83 \
--discovery-token-ca-cert-hash sha256:460d740c21fa040f7f12e22cdd018aec8c903d4880f42b0f7edeb78a80241b56 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master1:6443 --token 0re1oq.he6o0ab4mqtjtg83 \
--discovery-token-ca-cert-hash sha256:460d740c21fa040f7f12e22cdd018aec8c903d4880f42b0f7edeb78a80241b56
......
Troubleshooting:
https://blog.csdn.net/qq_43580215/article/details/125153959
PS: the join command below (token and CA cert hash) is important -- save it, because the worker nodes need it to join:
kubeadm join k8s-master1:6443 --token 0re1oq.he6o0ab4mqtjtg83 \
--discovery-token-ca-cert-hash sha256:460d740c21fa040f7f12e22cdd018aec8c903d4880f42b0f7edeb78a80241b56
If the deployment goes wrong, you can run kubeadm reset and start over.
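If the saved join command is ever lost, a fresh one can be printed on the master with `kubeadm token create --print-join-command` (the default token also expires after 24 hours). The `--discovery-token-ca-cert-hash` value is just the SHA-256 of the cluster CA's DER-encoded public key, so it can also be recomputed from /etc/kubernetes/pki/ca.crt with openssl. The sketch below demonstrates the pipeline on a freshly generated self-signed certificate so it can run anywhere; on the real master, substitute the cluster CA path:

```shell
# Generate a throwaway certificate only to demonstrate the pipeline;
# on the master, use /etc/kubernetes/pki/ca.crt instead of /tmp/demo-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
# SHA-256 over the DER-encoded public key, formatted as kubeadm expects
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print "sha256:" $NF}'
```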
Create the kubeconfig directory
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf # root alternative; only lasts for the current shell session
At this point the node status is NotReady
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 3m10s v1.22.2
Install the network plugin (flannel)
Official docs: https://github.com/flannel-io/flannel
[root@k8s-master1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# It is best to pull the required image manually in advance
[root@k8s-master1 ~]# docker pull quay.io/coreos/flannel:v0.14.0
[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
Join the worker nodes (node1 and node2)
# Pull the network plugin image on each node
[root@k8s-node1 ~]# docker pull quay.io/coreos/flannel:v0.14.0
[root@k8s-node2 ~]# docker pull quay.io/coreos/flannel:v0.14.0
[root@k8s-node1 ~]# kubeadm join k8s-master1:6443 --token 0re1oq.he6o0ab4mqtjtg83 --discovery-token-ca-cert-hash sha256:460d740c21fa040f7f12e22cdd018aec8c903d4880f42b0f7edeb78a80241b56
[root@k8s-node2 ~]# kubeadm join k8s-master1:6443 --token 0re1oq.he6o0ab4mqtjtg83 --discovery-token-ca-cert-hash sha256:460d740c21fa040f7f12e22cdd018aec8c903d4880f42b0f7edeb78a80241b56
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 32m v1.22.2
k8s-node1 Ready <none> 4m34s v1.22.2
k8s-node2 Ready <none> 62s v1.22.2
Restart and check kubelet.service on the nodes
[root@k8s-node1 ~]# systemctl restart kubelet.service
[root@k8s-node1 ~]# systemctl status kubelet.service
[root@k8s-node2 ~]# systemctl restart kubelet.service
[root@k8s-node2 ~]# systemctl status kubelet.service