# Installing k8s on CentOS

##### Set the hostname

```bash
hostnamectl set-hostname k8s-master
echo "192.168.119.151 k8s-master" >> /etc/hosts  # master

# Generate a key pair and copy the public key to the nodes
ssh-keygen
ssh-copy-id -i .ssh/id_rsa.pub root@ip
```

##### Disable swap

```bash
# Temporarily
swapoff -a
# Permanently: comment out the swap entry in /etc/fstab
vim /etc/fstab
#/dev/mapper/centos_hhdcloudrd6-swap swap
```

##### Disable SELinux

```bash
# Temporarily
setenforce 0
# Permanently
sudo sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
sudo sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
# Check SELinux status
sestatus
```

##### Disable the firewall

```bash
systemctl stop firewalld
systemctl disable firewalld
```

##### Enable bridge-nf-call-iptables to avoid network issues

```bash
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
```

OR

##### Configure routing and forwarding for the Kubernetes virtual network

```bash
# Use tee so the write works under sudo (a plain `sudo echo ... >` redirect
# runs without root privileges)
cat << EOF | sudo tee -a /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl -p
```

##### Set the bridge parameters

```bash
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
```

##### Install Docker

```bash
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
```

##### Edit Docker's /etc/docker/daemon.json

```bash
# Use a heredoc so the inner double quotes are not lost
sudo mkdir -p /etc/docker
cat << EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://r61ch9pn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat /etc/docker/daemon.json

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
```

##### Check the kubelet cgroup driver

```bash
cat /var/lib/kubelet/config.yaml | grep group
```

##### Configure the Kubernetes yum repository

```bash
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

##### List the installable kubelet (k8s) versions, then install

```bash
yum list kubelet --showduplicates | sort -r
yum install -y --nogpgcheck kubelet-1.23.13 kubeadm-1.23.13 kubectl-1.23.13
```
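Once the packages are installed, `kubeadm version` reports a version string that a later step of this article parses to drive `kubeadm init`. The extraction can be sketched offline against a sample line (the literal sample string is an assumption for illustration; a real run pipes `kubeadm version` itself):

```bash
#!/bin/bash
# Sample line in the shape printed by `kubeadm version`
# (assumed here for illustration).
sample='kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.13"}'

# Keep only the GitVersion field, then strip everything except digits and dots.
ver=$(echo "$sample" | grep -o 'GitVersion:"v[0-9.]*"' | sed 's/[^0-9.]//g')
echo "$ver"   # prints 1.23.13
```

The same idea, applied to live output, is what the `ver=` one-liner in the initialization section below does.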
##### Verify the installation

```bash
kubelet --version
kubectl version
kubeadm version
```

##### Start kubelet

```bash
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
```

##### Create the init configuration file

```bash
kubeadm config print init-defaults > init-config.yaml
vim init-config.yaml
```

###### Change three fields

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.119.155   # change to the master IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master                    # change to the hostname
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: 1.23.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
```

##### List the images that need to be pulled

```bash
kubeadm config images list --config init-config.yaml
```

###### pull.sh

```bash
#!/bin/bash
images=$(kubeadm config images list --config init-config.yaml)
if [[ -n ${images} ]]
then
  echo "Pulling images..."
  for i in ${images}; do
    echo $i
    docker pull $i
  done
else
  echo "No images to pull"
fi
```

##### Or use the approach below
##### The required images can be listed with

```bash
kubeadm config images list
```

```bash
# Pull these images first
images=(
  kube-apiserver:v1.23.13
  kube-controller-manager:v1.23.13
  kube-scheduler:v1.23.13
  kube-proxy:v1.23.13
  pause:3.6
  etcd:3.5.1-0
  coredns:v1.8.6
)
# Note: coredns/coredns:v1.8.6 must be retagged as k8s.gcr.io/coredns/coredns:v1.8.6
for i in ${images[@]}
do
  # Pull the image
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${i}
  # Retag it with the name kubeadm expects
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${i} k8s.gcr.io/${i}
  # Remove the original image
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${i}
done
```
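The pull/tag/rmi loop above only ever swaps the registry prefix of each image name. The transformation can be sketched with plain parameter expansion, with no Docker daemon involved (the image name is chosen for illustration):

```bash
#!/bin/bash
# Mirror registry actually reachable from China, and the registry kubeadm expects.
src_repo="registry.cn-hangzhou.aliyuncs.com/google_containers"
dst_repo="k8s.gcr.io"
img="kube-apiserver:v1.23.13"

pulled="${src_repo}/${img}"   # name used for `docker pull`
wanted="${dst_repo}/${img}"   # name used for `docker tag`

echo "$pulled"
echo "$wanted"
```

Because only the prefix differs, `docker tag` creates the expected name for free and the mirror copy can be deleted afterwards.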
##### Make sure swap and SELinux are off

```bash
setenforce 0
swapoff -a
```

##### Get the Kubernetes version

```bash
ver=$(kubeadm version | awk '{print $5}' | sed "s/[^0-9|.]//g" | awk 'NR==1{print}')
```

##### Initialize the cluster

```bash
# Get the master IP
ip=$(cat /etc/hosts | grep k8s-master | awk '{print $1}' | awk 'NR==1{print}')
# Double-check the values
echo -e "k8s version is v${ver} , master ip is ${ip}"
# Initialize
kubeadm init --apiserver-advertise-address=${ip} \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v${ver} \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

# Reset on failure
#kubeadm reset -f
```

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

# kube-flannel.yml is given at the end of this article
kubectl apply -f kube-flannel.yml
```

##### Join the other nodes

```bash
kubeadm join ip:port --token xxxxxx.xxxxxxxxxxxxxxxx \
  --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# List existing tokens
kubeadm token list

# Create a token that never expires
kubeadm token create --ttl 0

# Compute the sha256 hash of the master's CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
```

##### Other commands

```bash
# Print the full join command
kubeadm token create --print-join-command

# On the master, list the joined nodes
kubectl get nodes

# Delete a node
kubectl delete nodes <node-name>

# After the master deletes a node, the node must run this before rejoining
kubeadm reset

# View kubelet logs
journalctl -xefu kubelet

# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet

# If kubeadm join fails on a node
kubeadm reset -f
kubeadm join

# Delete resources
kubectl delete -f ***.yaml
# Apply resources
kubectl apply -f ***.yaml

# Network plugin: weave and kube-flannel are interchangeable, pick one
wget http://static.corecore.cn/weave.v2.8.1.yaml
kubectl apply -f weave.v2.8.1.yaml

# Uninstall
yum -y remove kubelet kubeadm kubectl
sudo kubeadm reset -f
sudo rm -rvf $HOME/.kube
sudo rm -rvf ~/.kube/
sudo rm -rvf /etc/kubernetes/
sudo rm -rvf /etc/systemd/system/kubelet.service.d
sudo rm -rvf /etc/systemd/system/kubelet.service
sudo rm -rvf /usr/bin/kube*
sudo rm -rvf /etc/cni
sudo rm -rvf /opt/cni
sudo rm -rvf /var/lib/etcd
sudo rm -rvf /var/etcd
```
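The `--discovery-token-ca-cert-hash` used by `kubeadm join` above is just a SHA-256 digest of the DER-encoded CA public key. The pipeline can be sketched end to end with a throwaway key standing in for `/etc/kubernetes/pki/ca.crt` (that file only exists on a real master, so the key path and name here are assumptions for illustration):

```bash
#!/bin/bash
# Generate a throwaway RSA key that plays the role of the cluster CA key.
openssl genrsa -out /tmp/demo-ca.key 2048 2>/dev/null

# kubeadm hashes the DER encoding of the CA public key with SHA-256;
# the sed strips the "(stdin)= " prefix that `openssl dgst` prints.
hash=$(openssl rsa -in /tmp/demo-ca.key -pubout -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:${hash}"
```

Against the real `ca.crt` the `openssl x509 -pubkey ... | openssl rsa -pubin -outform der` pipeline shown earlier extracts the same DER public key, so both produce the value the join command expects.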
###### A small bug with kubeadm-installed clusters: components may report an unhealthy status

```bash
cd /etc/kubernetes/manifests/
vim kube-scheduler.yaml

spec:
  containers:
  - command:
    - --port=0   # delete this line
```

###### kube-flannel.yml

```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName:
        system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.20.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```

Copyright notice: This article is the author's original work and may not be reproduced without the author's permission.

梦白沙 · Docker, CentOS, K8S · 2022-11-08