
kubeadm2-new


Installation

On Debian/Ubuntu, install the GPG key, add the apt repository, and install the packages:


apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl


On CentOS/RHEL, add the yum repository instead:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl # latest version
yum install -y kubelet-1.26.6 kubeadm-1.26.6 kubectl-1.26.6 # or pin a specific version
systemctl enable kubelet && systemctl start kubelet





cat /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
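
kubeadm itself does not edit this unit; on deb/rpm installs it injects its flags through a systemd drop-in, typically /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of what that drop-in usually contains (from the upstream packaging; exact paths may differ on your distro):

```ini
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# the leading "-" tells systemd to ignore a missing file
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/default/kubelet
# the empty ExecStart= clears the one inherited from kubelet.service
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```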



Deployment

kubeadm reset # revert the changes kubeadm init or kubeadm join made to this node

kubeadm config print init-defaults # print the default init configuration

kubeadm config print init-defaults > config.yaml # write it to a file

Fields to edit in config.yaml:

advertiseAddress: change to the master's IP address
criSocket: point at the container runtime's socket
imageRepository: set a domestic mirror to speed up image pulls
podSubnet: the Pod network CIDR
serviceSubnet: the Service network CIDR
nodeRegistration.name: change to the current hostname
At the end of the file, append the blocks that select IPVS mode and the systemd cgroup driver.
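
As a hedged sketch, the edited fields might end up looking like this (v1beta3 API as printed by init-defaults for 1.26; the IP, hostname, and socket are illustrative values taken from elsewhere in these notes, adjust to your environment):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.70.200        # master IP
nodeRegistration:
  name: k8s-master01                      # current hostname (example)
  criSocket: unix:///run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.26.6
imageRepository: registry.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  podSubnet: 10.224.0.0/16
  serviceSubnet: 10.96.0.0/12
```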


# External etcd (optional)

etcd:
  external:
    endpoints:
      - "https://192.168.26.81:2379"
      - "https://192.168.26.82:2379"
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/client.pem
    keyFile: /etc/kubernetes/pki/etcd/client-key.pem

imageRepository: registry.aliyuncs.com/google_containers

networking:
  dnsDomain: cluster.local
  podSubnet: 10.224.0.0/16
  serviceSubnet: 10.96.0.0/12

Enable IPVS mode by appending the following at the end of the file:

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
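
IPVS mode only works if the IPVS kernel modules are loadable; a companion step often paired with this setting (an assumption here, not part of the original notes) is persisting them via systemd's modules-load.d:

```shell
# Assumed companion step: record the IPVS-related modules so they are
# loaded on every boot. On kernels >= 4.19 the conntrack module is
# nf_conntrack (older kernels used nf_conntrack_ipv4).
mkdir -p /etc/modules-load.d
cat <<'EOF' >/etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
cat /etc/modules-load.d/ipvs.conf
```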


Set the systemd cgroup driver:
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
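
When containerd is the runtime, the systemd cgroup driver must be enabled on the containerd side too, or the kubelet and the runtime will manage cgroups differently. The usual fragment in /etc/containerd/config.toml (a sketch; the section names are for containerd 1.x's CRI plugin, verify against your version):

```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```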




Pull the images (going through a proxy or mirror is most reliable):
kubeadm config images pull --kubernetes-version=v1.26.6 --image-repository=registry.aliyuncs.com/google_containers

Initialize the cluster with the config file:
kubeadm init --config=config.yaml
kubeadm init --config=config.yaml --ignore-preflight-errors=SystemVerification

10.224.0.0/16 is the Pod network CIDR used here for Calico. Alternatively, initialize with flags instead of a config file:

kubeadm init \
--apiserver-advertise-address=192.168.70.101 \
--kubernetes-version=1.26.6 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.224.0.0/16


kubeadm init --kubernetes-version=v1.26.6 --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.70.200

kubeadm init phase runs individual stages of the init workflow;
--skip-phases skips the named phases:

sudo kubeadm init phase control-plane all --config=configfile.yaml
sudo kubeadm init phase etcd local --config=configfile.yaml
sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml


The certificate-key can be obtained with:
kubeadm init phase upload-certs --upload-certs --config kubeadm-init.yaml



Specify the container runtime socket:
--cri-socket unix:///var/run/cri-dockerd.sock





Where kubeadm puts things:

Environment variables (env): /var/lib/kubelet/kubeadm-flags.env

kubelet configuration file: /var/lib/kubelet/config.yaml

All cluster certificates: /etc/kubernetes/pki

Component configuration files: /etc/kubernetes



To run kubectl as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf


## kubeadm token


kubeadm token create --print-join-command



kubeadm join 192.168.70.201:6443 --token 9atgvy.mtjy6r5tbemoh116 \
--discovery-token-ca-cert-hash sha256:bb8779348cce913989acf9e70ac39b36e28f4bc2ff5c373a1f37928c4e32ed1a
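
The --discovery-token-ca-cert-hash value is not secret state; it is just the SHA-256 of the cluster CA's public key (in DER form), so it can be recomputed from ca.crt at any time. A sketch using a throwaway CA so it runs anywhere; on a real master, point the commands at /etc/kubernetes/pki/ca.crt:

```shell
# Throwaway CA standing in for /etc/kubernetes/pki/ca.crt
# (an assumption so the example is self-contained)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# sha256 of the CA public key in DER form, formatted as kubeadm expects
hash="sha256:$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')"
echo "$hash"
```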


Deploying additional control-plane nodes. First create the following directory on the new (120) master node:

mkdir -p /etc/kubernetes/pki/etcd/

10.2 On the 119 master, run the following commands to copy the certificates to the 120 master:

scp /etc/kubernetes/pki/ca.crt root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/ca.key root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.key root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/sa.pub root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.crt root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/front-proxy-ca.key root@10.1.60.120:/etc/kubernetes/pki/

scp /etc/kubernetes/pki/etcd/ca.crt root@10.1.60.120:/etc/kubernetes/pki/etcd/

scp /etc/kubernetes/pki/etcd/ca.key root@10.1.60.120:/etc/kubernetes/pki/etcd/

10.3 On the 119 master, print the join token:

kubeadm token create --print-join-command

10.4 On the 120 master, run the join command printed above, adding a few flags, to join the cluster:

kubeadm join 10.1.60.124:16443 --token zj1hy1.ufpwaj7wxhymdw3a --discovery-token-ca-cert-hash sha256:9636d912ddb2a9b1bdae085906c11f6839bcf060f8b9924132f6d82b8aaefecd --control-plane --cri-socket unix:///var/run/cri-dockerd.sock

--control-plane: join as a control-plane node; without it the machine joins as a worker.
In other words, the only difference between a control-plane join and a worker join is the --control-plane flag.



kubeadm join bgp-k8s-api-server.tiga.cc:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:b0b51ba58c2d65463541b7dcbf63c78e95b1d9f1b349a698c4d00c54602569cc \
--control-plane --certificate-key a803128a1c14b8a64ad8146d19ca745c922fcafb56733595e632032b56bab198



--cri-socket: selects the container runtime


Using kubectl from any worker node



Solution:

Step 1. Copy /etc/kubernetes/admin.conf from the master into /etc/kubernetes on the target node:

scp /etc/kubernetes/admin.conf root@192.168.43.130:/etc/kubernetes
Step 2. Configure the environment variable on that node:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
Test that kubectl now works on the node, e.g. kubectl get nodes.




Configuring etcd for high availability

13.1 Edit the etcd manifest (on every master node):

vi /etc/kubernetes/manifests/etcd.yaml

Change this flag
- --initial-cluster=k8s-master01=https://10.1.60.119:2380
to list all three members:
- --initial-cluster=k8s-master01=https://10.1.60.119:2380,k8s-master02=https://10.1.60.120:2380,k8s-master03=https://10.1.60.121:2380
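
Editing the manifest by hand on every master is error-prone; the same change can be scripted with sed. A sketch, demonstrated on a throwaway copy so it runs anywhere (on a real node the file is /etc/kubernetes/manifests/etcd.yaml and the member list must match your nodes):

```shell
# Throwaway stand-in for /etc/kubernetes/manifests/etcd.yaml
manifest=$(mktemp)
printf '%s\n' '    - --initial-cluster=k8s-master01=https://10.1.60.119:2380' >"$manifest"

members='k8s-master01=https://10.1.60.119:2380,k8s-master02=https://10.1.60.120:2380,k8s-master03=https://10.1.60.121:2380'
# Rewrite the flag's value in place; the YAML list indentation is preserved
sed -i "s|--initial-cluster=.*|--initial-cluster=${members}|" "$manifest"
cat "$manifest"
```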
13.2 Restart the kubelet service (on every master node):

systemctl restart kubelet

13.3 Run an etcdctl container to list the cluster members (on any control-plane node):

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list

13.4 Check etcd cluster health and member status:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://10.1.60.119:2379,https://10.1.60.120:2379,https://10.1.60.121:2379 endpoint health --cluster

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://10.1.60.119:2379,https://10.1.60.120:2379,https://10.1.60.121:2379 endpoint status --cluster

At this point the highly available Kubernetes cluster is complete.




![[Pasted image 20230807192533.png]]