CentOS 7.3 Kubernetes Cluster Deployment
I spent some free time this afternoon setting up a Kubernetes environment. Due to limited resources, the environment is fairly simple: one master and two nodes. Let's get started!
Environment Overview and Preparation
Host Operating System
The physical machines run 64-bit CentOS 7.4; details below.
```shell
[root@opstrip.com ~]# uname -a
Linux opstrip.com 3.10.0-693.2.2.el7.x86_64
[root@opstrip.com ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
```
Host Information
Three machines are used for the k8s environment; details below:

| HostName   | IP           | Node/function            |
| ---------- | ------------ | ------------------------ |
| k8s-master | 172.31.12.12 | kubernetes, etcd         |
| k8s-node1  | 172.31.12.13 | kubernetes-node, flannel |
| k8s-node2  | 172.31.12.14 | kubernetes-node, flannel |
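So the three machines can reach each other by the hostnames in the table, it can help to add them to `/etc/hosts` on every machine. A sketch, using the IPs and names from the table above; appending to `/etc/hosts` requires root, so this just prints the entries to add:

```shell
# Print the /etc/hosts entries for the three cluster machines
# (append them to /etc/hosts on each host as root).
cat <<'EOF'
172.31.12.12 k8s-master
172.31.12.13 k8s-node1
172.31.12.14 k8s-node2
EOF
```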
Environment Preparation
Set the Hostnames
Master:

```shell
[root@ip-172-31-12-12 ~]# hostnamectl --static set-hostname k8s-master
```

Node1:

```shell
[root@ip-172-31-12-13 ~]# hostnamectl --static set-hostname k8s-node1
```

Node2:

```shell
[root@ip-172-31-12-14 ~]# hostnamectl --static set-hostname k8s-node2
```
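The three hostname changes follow one pattern, so they can be generated in a loop. A dry-run sketch that prints the commands instead of running them (`hostnamectl` needs root and must run on each respective machine):

```shell
# Print the hostnamectl command for each cluster machine (dry run).
for h in k8s-master k8s-node1 k8s-node2; do
    echo "hostnamectl --static set-hostname $h"
done
```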
Install Docker and iptables
```shell
yum install docker iptables-services.x86_64 -y
```
Disable the default firewalld, start iptables, and flush the default rules
```shell
systemctl stop firewalld
systemctl disable firewalld
systemctl start iptables
systemctl enable iptables
iptables -F
service iptables save
```
Start Docker and enable it at boot
```shell
systemctl start docker
systemctl enable docker
```
K8s Cluster Deployment
MASTER
Install kubernetes and etcd on the master
```shell
[root@k8s-master ~]# yum install kubernetes etcd -y
```
Configuration
etcd
```shell
[root@k8s-master ~]# cd /etc/etcd/
[root@k8s-master etcd]# vim etcd.conf
9:ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
20:ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

[root@k8s-master etcd]# systemctl start etcd
[root@k8s-master etcd]# systemctl enable etcd
```

kubernetes

```shell
[root@k8s-master ~]# cd /etc/kubernetes/
[root@k8s-master kubernetes]# ll
total 24
-rw-r--r-- 1 root root 767 Jul  3 23:33 apiserver
-rw-r--r-- 1 root root 655 Jul  3 23:33 config
-rw-r--r-- 1 root root 189 Jul  3 23:33 controller-manager
-rw-r--r-- 1 root root 615 Jul  3 23:33 kubelet
-rw-r--r-- 1 root root 103 Jul  3 23:33 proxy
-rw-r--r-- 1 root root 111 Jul  3 23:33 scheduler

[root@k8s-master kubernetes]# vim config
22:KUBE_MASTER="--master=http://172.31.12.12:8080"

[root@k8s-master kubernetes]# vim apiserver
8:KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
11:KUBE_API_PORT="--port=8080"
14:KUBELET_PORT="--kubelet-port=10250"
17:KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
23:KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

[root@k8s-master kubernetes]# vim controller-manager
8:KUBELET_ADDRESSES="--machines=172.31.12.13,172.31.12.14"

[root@k8s-master ~]# systemctl list-unit-files | grep kube
kube-apiserver.service                      disabled
kube-controller-manager.service             disabled
kube-proxy.service                          disabled
kube-scheduler.service                      disabled
kubelet.service                             disabled

[root@k8s-master ~]# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
[root@k8s-master ~]# systemctl is-active kube-apiserver.service kube-controller-manager.service kube-scheduler.service
active
active
active
[root@k8s-master ~]# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
```
Note: start order matters — etcd first, then the kubernetes services.
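The start order on the master can be sketched as a single loop. A dry-run version that prints the commands in dependency order instead of running them (`systemctl` needs root and the installed units):

```shell
# Print the master service start commands in dependency order:
# etcd must come up before any of the kube-* services.
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    echo "systemctl start $svc"
done
```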
SLAVE (the steps are the same on both nodes)
Install kubernetes-node on each slave
```shell
yum install kubernetes-node.x86_64 flannel -y
```
Slave configuration
```shell
[root@k8s-node1 ~]# cd /etc/kubernetes/
[root@k8s-node1 kubernetes]# ll
total 12
-rw-r--r-- 1 root root 655 Jul  3 23:33 config
-rw-r--r-- 1 root root 615 Jul  3 23:33 kubelet
-rw-r--r-- 1 root root 103 Jul  3 23:33 proxy

[root@k8s-node1 kubernetes]# vim config
22:KUBE_MASTER="--master=http://172.31.12.12:8080"

[root@k8s-node1 kubernetes]# vim kubelet
5:KUBELET_ADDRESS="--address=0.0.0.0"
8:KUBELET_PORT="--port=10250"
11:KUBELET_HOSTNAME="--hostname-override=172.31.12.13"
14:KUBELET_API_SERVER="--api-servers=http://172.31.12.12:8080"
```

(On k8s-node2, set `--hostname-override=172.31.12.14` instead.)

Start the services and enable them at boot:

```shell
[root@k8s-node1 ~]# systemctl list-unit-files | grep kube
kube-proxy.service                          disabled
kubelet.service                             disabled
[root@k8s-node1 ~]# systemctl start kube-proxy.service kubelet.service
[root@k8s-node1 ~]# systemctl is-active kube-proxy.service kubelet.service
active
active
[root@k8s-node1 ~]# systemctl enable kube-proxy.service kubelet.service
```

flannel:

```shell
[root@k8s-node1 kubernetes]# cd /etc/sysconfig/
[root@k8s-node1 sysconfig]# vim flanneld
4:FLANNEL_ETCD_ENDPOINTS="http://172.31.12.12:2379"

systemctl start flanneld.service
systemctl enable flanneld.service

[root@k8s-node1 sysconfig]# systemctl is-active flanneld.service
active
```
Note: at this point flanneld will fail to start, because etcd does not yet contain the network configuration flannel needs. We have to create that entry in etcd first.
On the master, create the network configuration flannel needs:
```shell
[root@k8s-master ~]# etcdctl set /atomic.io/network/config '{ "Network": "172.17.0.0/16" }'
{ "Network": "172.17.0.0/16" }
```
Cluster Check
```shell
[root@k8s-master ~]# kubectl get node
NAME           STATUS    AGE
172.31.12.13   Ready     56m
172.31.12.14   Ready     54m
```
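Eyeballing `kubectl get node` works for two nodes, but the check can also be scripted. A sketch that counts nodes not in the Ready state; it is fed here from the sample capture above, since `kubectl` needs the live cluster (in practice you would pipe `kubectl get node --no-headers` into the same awk filter):

```shell
# Count nodes whose STATUS column is anything other than "Ready".
nodes="172.31.12.13   Ready     56m
172.31.12.14   Ready     54m"
not_ready=$(printf '%s\n' "$nodes" | awk '$2 != "Ready" {n++} END {print n+0}')
echo "NotReady nodes: $not_ready"
```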
The Kubernetes cluster is now configured!