Building a Kubernetes Cluster
Building a cluster with a single control-plane node using the kubeadm tool
Architecture diagram
1. Prepare the environment
1.1 Requirements
Worker nodes:
- MEM: 2 GB
- CPU: 2 cores
Control-plane node:
- MEM: 4 GB
- CPU: 2 cores
1.2 IP address plan
IP address | OS / kernel | Role | Version |
---|---|---|---|
192.168.212.75 | Red Hat Enterprise Linux 8.8 <br/>Linux 4.18.0-477.10.1.el8_8.x86_64 | master | v1.28.0 |
192.168.212.76 | Red Hat Enterprise Linux 8.8 <br/>Linux 4.18.0-477.10.1.el8_8.x86_64 | node01 | v1.28.0 |
192.168.212.77 | Red Hat Enterprise Linux 8.8 <br/>Linux 4.18.0-477.10.1.el8_8.x86_64 | node02 | v1.28.0 |
192.168.212.78 | Red Hat Enterprise Linux 8.8 <br/>Linux 4.18.0-477.10.1.el8_8.x86_64 | node03 | v1.28.0 |
1.3 Pre-deployment preparation
Run the following on all nodes:
1. Configure host names (/etc/hosts)
2. Disable the firewall, SELinux, and swap
cp /etc/fstab /etc/fstab_bak && grep -v "swap" /etc/fstab_bak > /etc/fstab
3. Adjust the network configuration
4. Tune kernel parameters
5. Install the container runtime (Docker)
6. Install Docker's CRI plugin (cri-dockerd)
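Steps 1 through 4 can be sketched as one script to run on every node. The host names and IPs come from the plan in section 1.2; the module and sysctl settings are the standard kubeadm networking prerequisites, so treat this as a sketch to adapt rather than a finished script:

```shell
# 1. Host-name resolution for all cluster members
cat >> /etc/hosts <<'EOF'
192.168.212.75 kube-master
192.168.212.76 kube-node01
192.168.212.77 kube-node02
192.168.212.78 kube-node03
EOF

# 2. Disable the firewall, SELinux, and swap
systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
swapoff -a
cp /etc/fstab /etc/fstab_bak && grep -v "swap" /etc/fstab_bak > /etc/fstab

# 3-4. Kernel modules and sysctls required by Kubernetes networking
cat > /etc/modules-load.d/k8s.conf <<'EOF'
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
```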
Install a container runtime
This deployment uses Docker as the runtime and kubeadm as the deployment tool, so a container runtime must be installed on every node.
1. Install Docker
~]# dnf -y install vim yum-utils
~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
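Installing the packages does not start the Docker daemon, and cri-dockerd needs a running daemon to talk to, so enable it before continuing:

```shell
systemctl enable --now docker
```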
2. Install cri-dockerd, Docker's CRI plugin
Option 1: build from source
1. Pull the source code
2. Compile
3. Install
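The three source-build steps can be sketched as follows, assuming git and a Go toolchain are installed and following the layout of the upstream Mirantis/cri-dockerd repository:

```shell
# 1. Pull the source code
git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
# 2. Compile (produces a cri-dockerd binary in the working directory)
go build -o cri-dockerd .
# 3. Install the binary and the packaged systemd units
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
install packaging/systemd/* /etc/systemd/system
systemctl daemon-reload
```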
Option 2: install the RPM package
2.1 Download the package
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el8.x86_64.rpm
2.2 Install
dnf -y install cri-dockerd-0.3.14-3.el8.x86_64.rpm
2.3 Start the cri-dockerd process
cri-dockerd &
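Backgrounding the daemon with `&` works for a quick test but does not survive a reboot. The RPM also ships systemd units (named `cri-docker.service` and `cri-docker.socket` in the upstream packaging), so a more durable alternative is:

```shell
systemctl enable --now cri-docker.socket
systemctl enable --now cri-docker.service
```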
Install kubeadm, kubelet, and kubectl
You need to install the following packages on every machine:
- kubeadm: the command used to bootstrap the cluster.
- kubelet: the component that runs on every node in the cluster and starts Pods and containers.
- kubectl: the command-line tool for communicating with the cluster.
1. Add the Kubernetes package repository
Configure the repo:
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/rpm/repodata/repomd.xml.key
EOF
- List the available versions
yum list kubeadm --showduplicates | sort -r
- Install
yum install -y kubelet-1.28.0 kubeadm-1.28.0 kubectl-1.28.0
systemctl enable --now kubelet
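A quick sanity check that the pinned versions were installed and the kubelet is enabled:

```shell
kubeadm version -o short
kubectl version --client
systemctl is-enabled kubelet
```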
2. Initialize the Kubernetes cluster
- Initialize:
[root@kube-master cri-dockerd]# kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock
The initialization fails because the required images cannot be pulled over this network.
List the versions of the component images Kubernetes needs:
[root@kube-master cri-dockerd]# kubeadm config images list
I0301 09:54:25.716272 21904 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.7
registry.k8s.io/kube-controller-manager:v1.28.7
registry.k8s.io/kube-scheduler:v1.28.7
registry.k8s.io/kube-proxy:v1.28.7
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
Pull the images manually on a machine with registry access, save them as tarballs, and copy them to the cluster nodes:
➜ ~ docker pull registry.k8s.io/kube-apiserver:v1.28.7
➜ ~ docker pull registry.k8s.io/kube-controller-manager:v1.28.7
➜ ~ docker pull registry.k8s.io/kube-scheduler:v1.28.7
➜ ~ docker pull registry.k8s.io/kube-proxy:v1.28.7
➜ ~ docker pull registry.k8s.io/pause:3.9
➜ ~ docker pull registry.k8s.io/etcd:3.5.9-0
➜ ~ docker pull registry.k8s.io/coredns/coredns:v1.10.1
➜ ~ docker save -o kube_images/apiserver.tar registry.k8s.io/kube-apiserver:v1.28.7
➜ ~ docker save -o kube_images/controller-manager.tar registry.k8s.io/kube-controller-manager:v1.28.7
➜ ~ docker save -o kube_images/kube-scheduler.tar registry.k8s.io/kube-scheduler:v1.28.7
➜ ~ docker save -o kube_images/proxy.tar registry.k8s.io/kube-proxy:v1.28.7
➜ ~ docker save -o kube_images/pause.tar registry.k8s.io/pause:3.9
➜ ~ docker save -o kube_images/etcd.tar registry.k8s.io/etcd:3.5.9-0
➜ ~ docker save -o kube_images/coredns.tar registry.k8s.io/coredns/coredns:v1.10.1
➜ ~ scp -r kube_images [email protected]:~
[email protected]'s password:
coredns.tar 100% 51MB 74.0MB/s 00:00
proxy.tar 100% 79MB 68.5MB/s 00:01
controller-manager.tar 100% 117MB 61.1MB/s 00:01
pause.tar 100% 737KB 53.5MB/s 00:00
kube-scheduler.tar 100% 58MB 66.4MB/s 00:00
etcd.tar 100% 282MB 86.4MB/s 00:03
apiserver.tar
# Load the images
[root@kube-master ~]# docker load -i apiserver.tar
[root@kube-master ~]# docker load -i coredns.tar
[root@kube-master ~]# docker load -i controller-manager.tar
[root@kube-master ~]# docker load -i etcd.tar
[root@kube-master ~]# docker load -i kube-scheduler.tar
[root@kube-master ~]# docker load -i pause.tar
[root@kube-master ~]# docker load -i proxy.tar
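The pull/save sequence above is repetitive; on the machine with registry access it can be scripted over the image list reported by `kubeadm config images list`. Note the tarball names here are derived mechanically from the image names rather than the hand-typed ones above:

```shell
IMAGES="
registry.k8s.io/kube-apiserver:v1.28.7
registry.k8s.io/kube-controller-manager:v1.28.7
registry.k8s.io/kube-scheduler:v1.28.7
registry.k8s.io/kube-proxy:v1.28.7
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
"
mkdir -p kube_images
for img in $IMAGES; do
  docker pull "$img"
  # Derive a file name from the last path segment,
  # e.g. registry.k8s.io/pause:3.9 -> kube_images/pause_3.9.tar
  tar="kube_images/$(basename "$img" | tr ':' '_').tar"
  docker save -o "$tar" "$img"
done
```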
Run the initialization again; this time it completes successfully:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.212.75:6443 --token ah1gmy.c9ybyxahew5tou0o \
--discovery-token-ca-cert-hash sha256:316141c909cbe8569730d61ac039eff7055e772df26f5471c2995f82006f657d
Join the worker nodes to the control plane (note the extra --cri-socket flag needed for cri-dockerd):
kubeadm join 192.168.212.75:6443 --token ah1gmy.c9ybyxahew5tou0o --discovery-token-ca-cert-hash sha256:316141c909cbe8569730d61ac039eff7055e772df26f5471c2995f82006f657d --cri-socket unix:///var/run/cri-dockerd.sock
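The bootstrap token printed by `kubeadm init` expires after 24 hours by default. If a worker is added later, a fresh join command can be generated on the control plane; append the same `--cri-socket unix:///var/run/cri-dockerd.sock` flag when running it on the worker:

```shell
kubeadm token create --print-join-command
```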
Check that the nodes have joined the cluster:
[root@kube-master kube_images]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master NotReady control-plane 8m28s v1.28.0
kube-node01 NotReady <none> 37s v1.28.0
kube-node02 NotReady <none> 8s v1.28.0
kube-node03 NotReady <none> 6s v1.28.0
3. Install a network plugin
Options include:
- Flannel
- Calico
- Cilium
Here we use the Calico network plugin.
Download the manifest:
wget https://docs.projectcalico.org/manifests/calico.yaml
Apply it:
[root@kube-master ~]# kubectl apply -f calico.yaml
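Calico takes a minute or two to roll out; the nodes move from NotReady to Ready once its pods are Running, which can be checked with:

```shell
kubectl get pods -n kube-system
kubectl get nodes
```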
Two images are required on every worker node:
- kube-proxy
- pause
Load them from the tarballs copied earlier:
docker load -i proxy.tar
docker load -i pause.tar
Cluster build status
The cluster has been created successfully:
[root@kube-master ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
kube-master Ready control-plane 42m v1.28.0
kube-node01 Ready <none> 34m v1.28.0
kube-node02 Ready <none> 34m v1.28.0
kube-node03 Ready <none> 34m v1.28.0