The company recently had a project to build a Kubernetes cluster on a customer's internal network. Because that environment has no Internet access, the cluster had to be installed completely offline. This article walks through deploying the cluster with kubeadm.
I. Deployment Environment Information
1. System information
CentOS 7.1 64-bit, 2 machines
[root@localhost ~]# uname -a
Linux k8s-master 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
Master: 7.7.0.23 Node1: 7.7.0.24
2. Set the hostnames of the two machines
On the master:
hostnamectl --static set-hostname k8s-master
On the node:
hostnamectl --static set-hostname k8s-node-1
3. Initialize the system environment
Set up /etc/hosts on both machines by running:
echo -e '7.7.0.23 k8s-master\n7.7.0.23 etcd\n7.7.0.23 registry\n7.7.0.24 k8s-node-1' >> /etc/hosts
Stop and disable the firewall on both machines:
systemctl stop firewalld.service
systemctl disable firewalld.service
Disable SELinux on both hosts.
Temporarily: setenforce 0
Permanently (requires a reboot):
sed -i "s@SELINUX=enforcing@SELINUX=disabled@" /etc/selinux/config
Synchronize the clocks of the hosts (requires root).
Example: date -s "2018-06-20 01:01:01"
If the clocks are out of sync, the node will fail to join the cluster with an error like:
[discovery] Failed to request cluster info, will try again: [Get https://7.7.0.23:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid]
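Setting the time by hand with date tends to drift again. If the internal network has an NTP server, a one-shot sync is more reliable; the sketch below is not part of the original procedure and assumes a hypothetical internal NTP server named ntp.internal (ntpdate comes from the ntpdate package, which would also have to be installed offline):
ntpdate -u ntp.internal
hwclock --systohc
The second command writes the synced time back to the hardware clock so it survives a reboot.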
II. Deploy the Master
1. Install Docker
Upload the docker-offline.tar.gz archive, extract it, and run the install script:
[root@k8s-master ~]# tar xvf docker-offline.tar.gz
[root@k8s-master ~]# cd docker-offline
[root@k8s-master docker-offline]# ./docker-install.sh
[root@k8s-master docker-offline]# cat docker-install.sh
#!/bin/bash
basedir=`pwd`
# back up the existing CentOS repos so only the local repo is used during this install
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/Cent* /etc/yum.repos.d/bak
cp $basedir/docker.repo /etc/yum.repos.d/
# point the repo's baseurl at the dockerRpm directory shipped inside this archive
sed -i "s@baseurl=file://@baseurl=file://$basedir/dockerRpm@" /etc/yum.repos.d/docker.repo
yum clean all && yum makecache fast
yum install docker-ce -y
# restore the original repo files
mv /etc/yum.repos.d/bak/* /etc/yum.repos.d/
rm -rf /etc/yum.repos.d/bak
A quick note on how the local Docker yum repository used by this script can be built in the first place (a sketch follows):
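This is a minimal sketch, run on a machine that does have Internet access; the directory name dockerRpm and the package set are assumptions and should match whatever docker-install.sh expects. It uses yumdownloader (from yum-utils) to fetch docker-ce and its dependencies, then createrepo to generate the repo metadata:
mkdir dockerRpm
# download docker-ce plus dependencies into ./dockerRpm
yumdownloader --resolve --destdir=./dockerRpm docker-ce
# generate repodata/ so the directory can serve as a yum repo
createrepo ./dockerRpm
A matching docker.repo would then contain something like:
[docker-local]
name=Local Docker CE repo
baseurl=file://
gpgcheck=0
enabled=1
The empty baseurl is deliberately left as a placeholder; the sed command in docker-install.sh fills it in with the local path at install time.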
[root@k8s-master docker-offline]# systemctl start docker && systemctl enable docker
[root@k8s-master docker-offline]# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 02:21:36 2017
OS/Arch: linux/amd64
Experimental: false
2. Set kernel parameters
(mainly to avoid routing problems on RHEL/CentOS 7):
[root@k8s-master docker-offline]# echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@k8s-master docker-offline]# sysctl -p
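A slightly cleaner alternative (a sketch, not what the original steps do) is to keep these settings in their own drop-in file so /etc/sysctl.conf stays untouched:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
If sysctl complains that the bridge-nf keys do not exist, the br_netfilter kernel module may need to be loaded first (modprobe br_netfilter).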
3. Upload the Kubernetes cluster images
Images and packages: link: https://pan.baidu.com/s/1MeRXs4Gk65xE-RSnHcRgVw password: hqco
[root@k8s-master ]# tar xvf k8s_images.tar.gz
[root@k8s-master ]# cd k8s_images/docker_images
[root@k8s-master docker_images]# for i in `ll | awk '{print $9}'`;do docker load < $i;done
Once this finishes, you can see that the images are ready:
[root@k8s-master]# docker images
Install the Kubernetes packages:
[root@k8s-master docker_images]# cd ../
[root@k8s-master k8s_images]# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
[root@k8s-master k8s_images]# rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm \
kubelet-1.9.0-0.x86_64.rpm \
kubectl-1.9.0-0.x86_64.rpm \
kubeadm-1.9.0-0.x86_64.rpm
[root@k8s-master k8s_images]# rpm -qa | grep kube
kubelet-1.9.0-0.x86_64
kubectl-1.9.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubeadm-1.9.0-0.x86_64
The kubelet's default cgroup driver is not the same as Docker's: Docker defaults to cgroupfs, while the kubelet shipped with kubeadm defaults to systemd, so the two must be made consistent (you can confirm the current drivers with the quick check below). In addition, deploying Kubernetes 1.9 requires that the operating system's swap be turned off.
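A quick optional check before editing anything (plain docker/grep commands, not part of the original scripts):
docker info 2>/dev/null | grep -i 'cgroup driver'
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
If Docker reports cgroupfs and the kubelet drop-in says systemd, apply the sed below.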
[root@k8s-master k8s_images]# swapoff -a
[root@k8s-master k8s_images]# sed -i 's@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"@' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@k8s-master k8s_images]# systemctl daemon-reload
Start kubelet and enable it on boot:
[root@k8s-master k8s_images]# systemctl start kubelet && systemctl enable kubelet
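Note that until kubeadm init has run, kubelet has no cluster configuration to work with, so it is normal for it to restart in a loop at this point. Its state and logs can be inspected with the standard systemd tools:
systemctl status kubelet
journalctl -u kubelet -f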
4. Initialize the cluster
Kubernetes supports several network plugins, such as flannel, weave, and calico. Here we use flannel, so the --pod-network-cidr parameter must be set. 10.244.0.0/16 is the default network configured in the kube-flannel.yml file; it can be customized, but if you change it, --pod-network-cidr and kube-flannel.yml must be kept consistent (the relevant part of the YAML is shown below).
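For reference, the piece of kube-flannel.yml that has to match --pod-network-cidr is the net-conf.json entry of the kube-flannel-cfg ConfigMap; in flannel manifests of this era it looks roughly like this (the exact surrounding fields may differ slightly by flannel version):
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }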
So that kubectl can reach the apiserver, add the environment variable:
[root@k8s-master k8s_images]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-master k8s_images]# source ~/.bash_profile
Initialize the cluster:
[root@k8s-master k8s_images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
Record this line from the output: kubeadm join --token 84a65f.0fdac91a5852510c 7.7.0.23:6443 --discovery-token-ca-cert-hash sha256:0d78812defc7fb554ad7a7c9bfad194cccb82817a69c9be554d776f976ed772d
It is used for joining nodes to the cluster.
The token expires after 24 hours; after that a new one must be obtained.
To list or regenerate a token, on the master run: kubeadm token list
or: kubeadm token create
To get the sha256 hash, on the master run:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
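Putting the two together, a small sketch (using only the commands already shown above) that prints a fresh join command on the master:
TOKEN=$(kubeadm token create)
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "kubeadm join --token $TOKEN 7.7.0.23:6443 --discovery-token-ca-cert-hash sha256:$HASH"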
Check whether Kubernetes was installed successfully:
[root@k8s-master k8s_images]# kubectl version
(If initialization fails and you need to run it again, run kubeadm reset first.)
5. Deploy the flannel network plugin on the master node
[root@k8s-master k8s_images]# kubectl create -f kube-flannel.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
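The flannel DaemonSet pods should come up in the kube-system namespace shortly; they can be watched with:
kubectl get pods -n kube-system -o wide
Once the network plugin is running, the master node should switch from NotReady to Ready in kubectl get nodes.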
III. Deploy the Node
1. Install Docker
Upload docker-offline.tar.gz and install Docker:
bash ./docker-install.sh
Start Docker and set the same kernel parameters as on the master:
[root@k8s-node-1 docker-offline]# systemctl start docker && systemctl enable docker
[root@k8s-node-1 docker-offline]# echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@k8s-node-1 docker-offline]# sysctl -p
2. Install Kubernetes
Upload k8s_images.tar.gz and install the Kubernetes packages:
[root@k8s-node-1 ]# tar xvf k8s_images.tar.gz
[root@k8s-node-1 ]# cd k8s_images/docker_images
[root@k8s-node-1 docker_images]# for i in `ll | awk '{print $9}'`;do docker load < $i;done
[root@k8s-node-1 k8s_images]# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm
[root@k8s-node-1 k8s_images]# rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm \
kubelet-1.9.0-0.x86_64.rpm \
kubectl-1.9.0-0.x86_64.rpm \
kubeadm-1.9.0-0.x86_64.rpm
3. Configure kubelet
[root@k8s-node-1 k8s_images]# swapoff -a
[root@k8s-node-1 k8s_images]# sed -i 's@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"@Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"@' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@k8s-node-1 k8s_images]# systemctl daemon-reload
[root@k8s-node-1 k8s_images]# systemctl start kubelet && systemctl enable kubelet
IV. Join the Node to the Cluster
Join the node to the cluster:
[root@k8s-node-1 k8s_images]# kubeadm join --token 84a65f.0fdac91a5852510c 7.7.0.23:6443 --discovery-token-ca-cert-hash sha256:0d78812defc7fb554ad7a7c9bfad194cccb82817a69c9be554d776f976ed772d (this is the join command recorded from the master's kubeadm init output)
V. Check the Cluster Status
On the master, check the node status:
[root@k8s-master k8s_images]# kubectl get nodes
On the master, check the Kubernetes system pods:
[root@k8s-master]# kubectl get pods --all-namespaces
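As a final smoke test, a small deployment can be scheduled and checked. This is only a sketch: it assumes an nginx image has already been loaded into the local Docker image store on both machines, since the cluster cannot pull images from the Internet.
kubectl run nginx-test --image=nginx --replicas=2 --port=80
kubectl get pods -o wide
kubectl delete deployment nginx-test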