Setting Up Kubernetes on Cloud Hosts
Category: Operating Systems

I. Basic Environment

kubeadm is the official tool for quickly installing and initializing a Kubernetes cluster. It is still in an incubating state and is updated in step with every Kubernetes release. kubeadm is not yet suitable for production use, but each release adjusts how it configures clusters, so experimenting with kubeadm is a good way to pick up the Kubernetes project's latest best practices for cluster configuration.

Cloud hosts


Kubernetes releases move quickly; to keep pace with upstream, this walkthrough uses kubeadm to stand up a Kubernetes 1.10.1 cluster.

Download the packages

Download all the packages to the /data directory:

# Link: https://pan.baidu.com/s/13DlR1akNBCjib5VFaIjGTQ  password: 1l69
# Link: https://pan.baidu.com/s/1V6Uuj6a08mEq-mRLaI1xgw  password: 6gap

1. Preparation

Set up passwordless SSH from the master to each node:

ssh-keygen
ssh-copy-id root@192.168.1.237
ssh-copy-id root@192.168.1.100
ssh-copy-id root@192.168.1.188

1.1 System configuration

Set the hostnames and the hosts file

# Set the hostname on the master and on each node accordingly
hostnamectl set-hostname master
exec bash

# Sync the hosts file across all hosts
vim /etc/hosts
192.168.1.78 master localhost
192.168.1.237 node1
192.168.1.100 node2
192.168.1.188 node3
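
With passwordless SSH in place, the hosts file can be pushed to every node in one loop (a minimal sketch; the node names match the entries above):

# Distribute /etc/hosts from the master to all nodes
for h in node1 node2 node3; do scp /etc/hosts root@$h:/etc/hosts; done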


Fix DNS resolution of localhost

On this cloud provider, DNS resolves localhost to a bogus address, which is a major pitfall: kubeadm relies on localhost during initialization. If your hosts already resolve localhost to their own IP you can skip this step; otherwise set up a local DNS server that resolves localhost back to the host itself.

# 1. Check
[root@node2 ~]# nslookup localhost
Server:     118.118.118.9
Address:    118.118.118.9#53

Non-authoritative answer:
Name:   localhost.openstacklocal
Address: 183.136.168.91

# 2. Set up a local DNS server
yum -y install dnsmasq
cp /etc/resolv.conf{,.bak}
rm -f /etc/resolv.conf
echo -e "nameserver 127.0.0.1\nnameserver $(hostname -i)" >> /etc/resolv.conf
chmod 444 /etc/resolv.conf
chattr +i /etc/resolv.conf
echo -e "server=8.8.8.8\nserver=8.8.4.4" > /etc/dnsmasq.conf
echo -e "$(hostname -i)\tlocalhost.$(hostname -d)" >> /etc/hosts
service dnsmasq restart

# 3. Check again
[root@master ~]# nslookup localhost
Server:     127.0.0.1
Address:    127.0.0.1#53

Name:   localhost
Address: 192.168.1.78

# 4. Add a custom DNS record (optional)
vim /etc/dnsmasq.conf
address=/www.baidu.com/123.123.123.123
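
To confirm dnsmasq picked up a custom record, reload it and query the local server directly (the test name matches the address= line above):

service dnsmasq restart
nslookup www.baidu.com 127.0.0.1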


Sync the system time

ntpdate 0.centos.pool.ntp.org

If a firewall is enabled on the hosts, the ports required by each Kubernetes component must be opened; see the "Check required ports" section of the official Installing kubeadm guide. For simplicity, we disable the firewall on every node:

Disable the firewall

iptables -F
systemctl stop firewalld
systemctl disable firewalld


Disable SELinux & disable swap

Starting with Kubernetes 1.8, kubelet refuses to start while swap is enabled (the check can be relaxed with the kubelet flag --fail-swap-on=false); here we simply turn swap off and disable SELinux:

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab                                    # comment out swap in fstab; confirm with free -m
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config  # keep SELinux disabled after reboot

Set the timezone

timedatectl set-timezone Asia/Shanghai 
systemctl restart chronyd.service 

Adjust kernel parameters

Create /etc/sysctl.d/k8s.conf with the following content (vm.swappiness=0 keeps the kernel from swapping, matching the swap-off step above):

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
sysctl --system
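
The two bridge-nf settings only exist once the br_netfilter kernel module is loaded; if sysctl --system reports unknown keys, load the module first and verify (assuming the stock CentOS 7 kernel):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables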

Install Docker

tar -xvf docker-packages.tar
cd docker-packages
yum -y localinstall *.rpm
systemctl start docker && systemctl enable docker


Configure a registry mirror

vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://lw9sjwma.mirror.aliyuncs.com"]
}

systemctl daemon-reload 
systemctl restart docker
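
One more Docker adjustment: from Docker 1.13 on, the default rules set the iptables FORWARD chain policy to DROP, which breaks cross-node pod communication in Kubernetes. Run this on every Docker node, and optionally persist it in the docker systemd unit:

iptables -P FORWARD ACCEPT

# To persist, add to the docker unit file and reload:
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
systemctl daemon-reload
systemctl restart docker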


Configure the Kubernetes yum repository

vim /etc/yum.repos.d/k8s.repo
[k8s]
name=k8s
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0


Install the Kubernetes packages

cd /data                                # the packages come from the netdisk links above
tar -xvf kube-packages-1.10.1.tar
cd kube-packages-1.10.1
yum -y localinstall *.rpm
systemctl start kubelet && systemctl enable kubelet


Align the kubelet and Docker cgroup drivers

# 1. Check which cgroup driver Docker uses
docker info | grep "Cgroup Driver"
Cgroup Driver: cgroupfs

# 2. Make the kubelet config match Docker, then reload it
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet


Load the base images

cd /data
docker load -i k8s-images-1.10.tar.gz 
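
A quick way to confirm the images landed in the local Docker store (a sketch; the exact image names depend on the tarball contents):

docker images | grep -E 'k8s|flannel'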

Kubernetes 1.8开始要求关闭系统的Swap,如果不关闭,默认配置下kubelet将无法启动。可以通过kubelet的启动参数–fail-swap-on=false更改这个限制。 我们这里关闭系统的Swap:

II. Initialize the master node

# Initialize the master; the version specified must match the installed kubeadm version
# Only the minimum options are given to kubeadm init; the cluster name and other settings stay at their defaults
[root@master ~]# kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16

# When initialization completes you get output like the following

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.78:6443 --token qabol0.c2gq0uyfxvpqr8bu --discovery-token-ca-cert-hash sha256:2237ec7b8efd5a8f68adcb04900a0b17b9df2a78675a7d62b4aef644a7f62c05
# kubeadm join is the command the nodes run to join the cluster; note that the token has a limited lifetime


If you plan to manage the cluster as a regular user later, switch to that user first and then run the following; as root you can run it directly:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
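
At this point kubectl should be able to reach the new control plane; a quick sanity check with standard kubectl commands:

kubectl cluster-info
kubectl get nodes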


Basic commands

# List pods
kubectl get pods


# List system pods
[root@master ~]# kubectl get pods -n kube-system
NAME                             READY     STATUS     RESTARTS   AGE
etcd-master                      0/1       Pending    0          1s
kube-apiserver-master            0/1       Pending    0          1s
kube-controller-manager-master   0/1       Pending    0          1s
kube-dns-86f4d74b45-d42zm        0/3       Pending    0          8h
kube-proxy-884h6                 1/1       NodeLost   0          8h
kube-scheduler-master            0/1       Pending    0          1s

# Check the health of the control-plane components
[root@master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   


III. Join nodes to the cluster

# Make sure each node's cgroup driver matches Docker, just as on the master
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet

# This command comes from the output printed at the end of cluster initialization
kubeadm join 192.168.1.78:6443 --token v0866r.u7kvg5js1ah2u1bi --discovery-token-ca-cert-hash sha256:7b36794f4fa5121f6a5e309d0e312ded72997a88236a93ec7da3520e5aaccf0e
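
If a node fails to join, check the kubelet on that node before retrying (standard systemd tooling):

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50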

# On the master, list the nodes
[root@master data]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    57m       v1.10.1
node1     NotReady   <none>    27m       v1.10.1
node2     NotReady   <none>    11s       v1.10.1
node3     NotReady   <none>    4s        v1.10.1


IV. Deploy the pod network


Deploy flannel

flannel project page
No special proxy is needed to download flannel; the flannel yml manifest pulls its image automatically from quay.io.

# 1.1 Use the flannel manifests from the package, and pin flannel to the host NIC the pods should use
vim kube-flannel.yml
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "-iface=eth0" ]
# Apply in order: create the RBAC objects first; without them the pods were created but could not ping each other
kubectl apply -f kube-flannel-rbac.yml
kubectl apply -f kube-flannel.yml
# Afterwards the nodes change state to Ready
[root@master1 kubernetes1.10]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    57m       v1.10.1
node1     Ready     <none>    27m       v1.10.1
node2     Ready     <none>    11s       v1.10.1
node3     Ready     <none>    4s        v1.10.1

# 2. Alternatively, fetch the latest flannel from the project site; on k8s 1.7+ this single command is enough
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Adjusting the flannel configuration

kube-flannel.yml specifies the cluster network to use:
"Network": "10.244.0.0/16"

The cluster network carries a /16 mask; flannel then carves a smaller per-node subnet out of it for each node (a /24 by default, e.g. 10.244.0.0/24, 10.244.1.0/24, ...).
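
On any node you can inspect the per-node subnet flannel allocated; flannel writes it to subnet.env (path assumed from the stock manifest):

cat /run/flannel/subnet.env
# Typically looks like:
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.1.1/24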


V. Deploy the dashboard

kubectl apply -f kubernetes-dashboard-http.yaml
kubectl apply -f admin-role.yaml
kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
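
To verify the dashboard came up (a sketch; the resource names are assumed to follow the usual kubernetes-dashboard naming used by these manifests):

kubectl get pods -n kube-system | grep dashboard
kubectl get svc -n kube-system | grep dashboard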


Common command-line operations

# List pods; by default only the default namespace is shown
[root@master ~]# kubectl get pods
No resources found.

# List pods in a specific namespace
[root@master ~]# kubectl get pods -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
etcd-master                             1/1       Running   0          3h
kube-apiserver-master                   1/1       Running   0          3h
kube-controller-manager-master          1/1       Running   0          3h
kube-dns-86f4d74b45-bzbvc               3/3       Running   0          3h
kube-flannel-ds-5ghhj                   1/1       Running   0          2h
kube-flannel-ds-ht4xd                   1/1       Running   0          3h
kube-flannel-ds-kbm5g                   1/1       Running   0          3h
kube-flannel-ds-mlj4r                   1/1       Running   0          2h
kube-proxy-9xxnd                        1/1       Running   0          3h
kube-proxy-n9w5x                        1/1       Running   0          3h
kube-proxy-nkn8c                        1/1       Running   0          2h
kube-proxy-shd6l                        1/1       Running   0          2h
kube-scheduler-master                   1/1       Running   0          3h
kubernetes-dashboard-5c469b58b8-rjfx6   1/1       Running   0          1h


# Show more detail: -o wide also reports each pod's IP and its node.
# Note that every node now runs one kube-proxy pod and one flannel pod.
[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE
etcd-master                             1/1       Running   0          3h        192.168.1.78    master
kube-apiserver-master                   1/1       Running   0          3h        192.168.1.78    master
kube-controller-manager-master          1/1       Running   0          3h        192.168.1.78    master
kube-dns-86f4d74b45-bzbvc               3/3       Running   0          3h        10.244.0.2      master
kube-flannel-ds-5ghhj                   1/1       Running   0          2h        192.168.1.188   node3
kube-flannel-ds-ht4xd                   1/1       Running   0          3h        192.168.1.78    master
kube-flannel-ds-kbm5g                   1/1       Running   0          3h        192.168.1.237   node1
kube-flannel-ds-mlj4r                   1/1       Running   0          2h        192.168.1.100   node2
kube-proxy-9xxnd                        1/1       Running   0          3h        192.168.1.237   node1
kube-proxy-n9w5x                        1/1       Running   0          3h        192.168.1.78    master
kube-proxy-nkn8c                        1/1       Running   0          2h        192.168.1.100   node2
kube-proxy-shd6l                        1/1       Running   0          2h        192.168.1.188   node3
kube-scheduler-master                   1/1       Running   0          3h        192.168.1.78    master
kubernetes-dashboard-5c469b58b8-rjfx6   1/1       Running   0          1h        10.244.0.3      master

VI. Reset the kubeadm configuration

# Undo everything kubeadm set up on this node
kubeadm reset

# Remove the leftover network interfaces
ip link del cni0
ip link del flannel.1
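
If the machine will be re-initialized afterwards, clearing the CNI state on disk also helps (an assumption; these are the standard CNI paths, adjust to your setup):

rm -rf /var/lib/cni/ /etc/cni/net.d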


VII. Pitfalls encountered

  • Make sure the master and every node resolve localhost to their own IP
  • Make sure the token is still valid when a node joins the master
  • Make sure kubelet starts and runs properly on every node
  • For the flannel network, create kube-flannel-rbac.yml before kube-flannel.yml


VIII. Dealing with expired tokens

# 1. List existing tokens
kubeadm token list

# 2. Create a new token
kubeadm token create

# 3. Get the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# 4. Join the node to the cluster with the new token
kubeadm join --token abc123 --discovery-token-ca-cert-hash sha256:efg456  172.16.6.79:6443 --skip-preflight-checks
    # abc123    the newly created token
    # efg456    the sha256 hash of the CA certificate
    # IP+Port   the master's IP and port
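
On newer kubeadm releases there is a shortcut that prints a complete, ready-to-run join command with a fresh token; this assumes your kubeadm already supports the flag:

kubeadm token create --print-join-command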


Acknowledgements

  • 无痴迷,不成功
  • discsthnew


