Setting Up a Kubernetes Cluster on CentOS 7
Installing CentOS 7
For detailed steps, see https://blog.csdn.net/WHQ556677/article/details/122283578
Note: CentOS does not bring the network up by default after installation; enable it with the following steps.
1. In VMware, set the network mode to NAT.
2. On CentOS 7, go to /etc/sysconfig/network-scripts/, edit ifcfg-ensxxx (ifcfg-ens33 by default), and change it to the following configuration:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=f5f97017-3e91-4d58-b30d-7ce3c711b636
DEVICE=ens33
ONBOOT=yes
GATEWAY=<gateway address>
IPADDR=<IP address>
DNS1=114.114.114.114
3. After changing the configuration file, restart the network service: systemctl restart network
If the IP address changes after the VM reboots, see: https://blog.csdn.net/qq_52103423/article/details/125292433
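After editing the file, a quick sanity check helps catch a missed key before restarting the network. This is an illustrative sketch (the function name and the key list are my own choices, not part of any CentOS tooling):

```shell
# Check that an ifcfg file defines the keys a static NAT setup needs.
# (check_ifcfg is a hypothetical helper, not a system command.)
check_ifcfg() {
    local file="$1" missing=0
    for key in BOOTPROTO ONBOOT IPADDR GATEWAY DNS1; do
        if ! grep -q "^${key}=" "$file"; then
            echo "missing: ${key}" >&2
            missing=1
        fi
    done
    return "$missing"
}

# Example: check_ifcfg /etc/sysconfig/network-scripts/ifcfg-ens33
```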
Installing Docker on CentOS 7
1. Remove any older Docker packages
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
2. Configure the yum repository
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Install Docker
sudo yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io-1.4.6
4. Start Docker
systemctl enable docker --now
5. Configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
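Before reloading and restarting Docker, it is worth confirming that daemon.json parses as JSON, since a stray comma will keep the daemon from starting. A minimal sketch, assuming python3 is available as a convenient JSON parser (the helper name is illustrative):

```shell
# Validate a daemon.json file; defaults to the real path, but accepts an
# argument so it can be pointed at a draft copy first.
check_daemon_json() {
    local conf="${1:-/etc/docker/daemon.json}"
    if python3 -m json.tool "$conf" > /dev/null 2>&1; then
        echo "OK: $conf is valid JSON"
    else
        echo "ERROR: $conf is not valid JSON" >&2
        return 1
    fi
}
```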
sudo systemctl daemon-reload
sudo systemctl restart docker
For other distributions and versions, see: https://docs.docker.com/engine/install/centos/
Installing Kubernetes on CentOS 7
0. Prerequisites
A compatible Linux host: the Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as for distributions without a package manager.
2 GB of RAM or more per machine (less leaves little room for your applications).
2 CPUs or more (the preflight checks refuse fewer than 2 cores).
Full network connectivity among all machines in the cluster (a public or private network is fine); set firewall rules to allow it, and set up mutual trust between the nodes on the internal network.
No duplicate hostnames, MAC addresses, or product_uuids among the nodes; give each machine a distinct hostname. See the kubeadm documentation for details.
Certain ports must be open on the machines; see the kubeadm documentation for details.
Swap must be disabled, permanently, for the kubelet to work properly.
1. Base environment
Run the following on all machines.
# Set the hostname on each machine
hostnamectl set-hostname master
hostnamectl set-hostname node
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
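The sed command above comments out the swap entries so the change survives a reboot. A small check to confirm no active swap entry remains in an fstab file (the helper name is my own, for illustration):

```shell
# Return success if the given fstab (default /etc/fstab) contains no
# uncommented swap entry. (swap_disabled_in_fstab is an illustrative name.)
swap_disabled_in_fstab() {
    ! grep -qE '^[^#].*\bswap\b' "${1:-/etc/fstab}"
}
```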
# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
2. Install kubelet, kubeadm, and kubectl
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
Building the Kubernetes Cluster
1. Download the images each node needs
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
  kube-apiserver:v1.20.9
  kube-proxy:v1.20.9
  kube-controller-manager:v1.20.9
  kube-scheduler:v1.20.9
  coredns:1.7.0
  etcd:3.4.13-0
  pause:3.2
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF
chmod +x ./images.sh && ./images.sh
2. Initialize the master node
(1) On all machines, add a hosts entry for the master. Replace (masterIp) below with your master's IP.
echo "masterIp cluster-endpoint" >> /etc/hosts
(2) Initialize the control plane. Run this on the master node only, with the master's own IP. Note that the pod network range must not overlap with the host network.
# Initialize the master node
kubeadm init --apiserver-advertise-address=192.168.75.129 --control-plane-endpoint=cluster-endpoint --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images --kubernetes-version v1.20.9 --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16
# None of the network ranges may overlap
On success you will see output like the following:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate
authorities and service account keys on each node and then running the
following as root:

  kubeadm join cluster-endpoint:6443 --token l7ove2.ighlmt9j61llegba \
    --discovery-token-ca-cert-hash sha256:bbcdbc496f3f45cb34041f0ffab632315529af3b71ed336596cad3f628465943 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join cluster-endpoint:6443 --token l7ove2.ighlmt9j61llegba \
    --discovery-token-ca-cert-hash sha256:bbcdbc496f3f45cb34041f0ffab632315529af3b71ed336596cad3f628465943

Then run the following commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are the root user, run instead (as in the kubeadm init output above):
export KUBECONFIG=/etc/kubernetes/admin.conf
Note: if you changed the VM's IP and then installed Kubernetes right away, and the coredns pod fails to start, the following commands may fix it:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
3. Deploy the network plugin (Calico)
The Calico version must match the installed Kubernetes version.
Run the following on the master node:
curl https://docs.projectcalico.org/archive/v3.20/manifests/calico.yaml -O
kubectl apply -f calico.yaml
Note: if the dashboard later becomes unreachable, and the master can neither ping nor curl the pods, re-deploying the network plugin may restore connectivity.
4. Join the worker node(s) to the cluster
Run the following on each node:
kubeadm join cluster-endpoint:6443 --token fg4w21.tm52v33d4yhlwas4 --discovery-token-ca-cert-hash sha256:e43b6abba3addde38c4e7b17a6c2e367623285deed5178c9bf233e807711a8a7
If the token has expired, regenerate the join command on the master node:
kubeadm token create --print-join-command
If joining fails with:
[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
then the following command resolves it:
sysctl -w net.ipv4.ip_forward=1
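When automating node joins, it can help to pull the token and CA cert hash out of the command printed by kubeadm token create --print-join-command. A hedged sketch (the parsing helper is my own, and assumes the standard output format of kubeadm 1.20):

```shell
# Extract "<token> <ca-cert-hash>" from a kubeadm join command string.
# (parse_join_cmd is a hypothetical helper for automation scripts.)
parse_join_cmd() {
    local cmd="$1"
    local token hash
    token=$(printf '%s' "$cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
    hash=$(printf '%s' "$cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
    printf '%s %s\n' "$token" "$hash"
}

# Example (on the master node):
# parse_join_cmd "$(kubeadm token create --print-join-command)"
```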
At this point the cluster is up; run kubectl get nodes to verify:
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   48m   v1.20.9
node1    Ready    ...
5. Deploy the dashboard
(1) Install it
Run on the master node:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
Note: if kubernetes-dashboard fails with a log like
Error from server: Get https://xx:10250/containerLogs ... connect: no route to host
a node's port is blocked by the firewall; either stop the firewall or open the ports (10250, 443).
Common commands for firewalld, the CentOS 7 firewall:
firewall-cmd --zone=public --add-port=xxx/tcp --permanent   # open a port
firewall-cmd --reload                                       # reload the firewall
systemctl stop firewalld.service                            # stop firewalld
firewall-cmd --list-port                                    # list all open ports
firewall-cmd --query-port=xxx/tcp                           # check whether a port is open
(2) Expose the access port
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then run
kubectl get svc -A | grep kubernetes-dashboard
to find the port assigned to kubernetes-dashboard:
[root@master ~]# kubectl get svc -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.96.145.255   ...
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.96.83.127    ...   443:32435/TCP   ...
Here the mapped port is 32435; use it to reach the dashboard at
https://<any node IP>:<port assigned to kubernetes-dashboard>
(3) Create an access account
Create a file dash_user.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Then apply it:
kubectl apply -f dash_user.yaml
(4) Log in with a token
# Fetch the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
Paste the resulting token into the dashboard's token field to log in. Done!
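To script the last step, the NodePort can be pulled out of the kubectl get svc line rather than read by eye. A sketch with an illustrative helper (it assumes the 443:<nodePort>/TCP format shown above):

```shell
# Extract the NodePort from a "kubectl get svc" output line such as
# "kubernetes-dashboard  kubernetes-dashboard  NodePort  10.96.83.127  <none>  443:32435/TCP  5m"
nodeport_of() {
    printf '%s\n' "$1" | grep -oE '[0-9]+:[0-9]+/TCP' | head -n1 | cut -d: -f2 | cut -d/ -f1
}

# Example:
# nodeport_of "$(kubectl get svc -A | grep 'kubernetes-dashboard .*NodePort')"
```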