K8S (01): Binary Deployment in Practice - 1.15.2

2022/04/13 k8s,docker

K8S binary deployment in practice, v1.15.2

1 Deployment Architecture

1.1 Architecture Diagram

(architecture diagram)

Architecture notes:

  1. etcd needs at least 3 members to form a highly available cluster
  2. Two proxy machines form a high-availability pair and expose a VIP
  3. Two machines double as both master and node
  4. The ops host is not a K8S component itself, but it serves K8S

1.2 Choosing an Installation Method

  1. Minikube: for previews and learning only
  2. Binary installation: first choice for production, recommended for beginners
  3. kubeadm: simple, uses k8s to run k8s itself, recommended for experienced users. It is not recommended for beginners because it is easy to know that things work without knowing why, and when something breaks it is hard to find a fix

2 Deployment Preparation

2.1 Prerequisites

Prepare five 2C/2G/50G virtual machines on the 10.4.7.0/24 network, preinstalled with CentOS 7.4 and base-optimized. Install and configure Bind9 as a self-hosted DNS, prepare the self-signed certificate environment, and install Docker and a Harbor registry.
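
The "base optimization" is not spelled out here; a minimal sketch of what it covers on these hosts (the Vagrantfile in the appendix automates the same steps):

setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
timedatectl set-timezone Asia/Shanghai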

Machine list

Hostname IP address Role
hdss7-11 10.4.7.11 proxy1
hdss7-12 10.4.7.12 proxy2
hdss7-21 10.4.7.21 master1
hdss7-22 10.4.7.22 master2
hdss7-200 10.4.7.200 ops host

Base software installation

[root@hdss7-11 ~]# hostname
hdss7-11
[root@hdss7-11 ~]# getenforce 
Disabled
[root@hdss7-11 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.4.7.11
NETMASK=255.255.255.0
GATEWAY=10.4.7.254
DNS1=10.4.7.254

[root@hdss7-11 ~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix -y

2.2 Deploying DNS with Bind9

2.2.1 Install and configure the DNS service

Deploy the Bind DNS service on 7.11

yum install bind bind-utils -y

Edit and validate the configuration file

[root@hdss7-11 ~]# vim /etc/named.conf
listen-on port 53 { 10.4.7.11; }; 
allow-query     { any; };
forwarders      { 10.4.7.254; }; #upstream DNS (the gateway or a public DNS)
recursion yes;
dnssec-enable no;
dnssec-validation no;

[root@hdss7-11 ~]# named-checkconf


2.2.2 Add the custom zones and their configuration

Add the custom zones to the zone configuration

cat >>/etc/named.rfc1912.zones <<'EOF'
# add the custom host zone
zone "host.com" IN {
        type  master;
        file  "host.com.zone";
        allow-update { 10.4.7.11; };
};
# add the custom business zone
zone "zq.com" IN {
        type  master;
        file  "zq.com.zone";
        allow-update { 10.4.7.11; };
};
EOF

host.com and zq.com are both domains we define ourselves. By convention host.com serves as the host zone and zq.com as the business zone; more business zones can be configured for different lines of business.

Create the zone file for the custom zone host.com

cat >/var/named/host.com.zone <<'EOF'
$ORIGIN host.com.
$TTL 600    ; 10 minutes
@       IN SOA  dns.host.com. dnsadmin.host.com. (
                2020041601 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
            NS   dns.host.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
HDSS7-11           A    10.4.7.11
HDSS7-12           A    10.4.7.12
HDSS7-21           A    10.4.7.21
HDSS7-22           A    10.4.7.22
HDSS7-200          A    10.4.7.200

EOF

Create the zone file for the custom zone zq.com

cat >/var/named/zq.com.zone <<'EOF'
$ORIGIN zq.com.
$TTL 600    ; 10 minutes
@       IN SOA  dns.zq.com. dnsadmin.zq.com. (
                2020041601 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
            NS   dns.zq.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11

EOF

The host.com zone is used for host-to-host communication, so every host is added to it up front. The zq.com zone is used later for business name resolution, so no hosts need to be added to it yet.
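
Before starting named, each zone file can be sanity-checked with named-checkzone (part of the bind package installed above):

named-checkzone host.com /var/named/host.com.zone
named-checkzone zq.com /var/named/zq.com.zone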

2.2.3 Start and verify the DNS service

Re-check the configuration and start the DNS service

[root@hdss7-11 ~]# named-checkconf 
[root@hdss7-11 ~]# systemctl start named
[root@hdss7-11 ~]# systemctl enable named
[root@hdss7-11 ~]# ss -lntup|grep 53
udp    UNCONN     0      0      10.4.7.11:53
udp    UNCONN     0      0      :::53
tcp    LISTEN     0      10     10.4.7.11:53
tcp    LISTEN     0      128    127.0.0.1:953
tcp    LISTEN     0      10     :::53
tcp    LISTEN     0      128    ::1:953

# verify the results
[root@hdss7-11 ~]# dig -t A hdss7-11.host.com @10.4.7.11 +short
10.4.7.11
[root@hdss7-11 ~]# dig -t A hdss7-21.host.com @10.4.7.11 +short
10.4.7.21

2.2.4 Update the network configuration on all hosts

All 5 K8S hosts need their network configuration changed as follows

# point DNS at 10.4.7.11 and add the search domain (DOMAIN= is the ifcfg key that produces a search line)
sed -i 's#^DNS.*#DNS1=10.4.7.11#g' /etc/sysconfig/network-scripts/ifcfg-eth0
echo "DOMAIN=host.com" >>/etc/sysconfig/network-scripts/ifcfg-eth0 
systemctl restart network

# check the DNS configuration
~]# cat /etc/resolv.conf
# Generated by NetworkManager
search host.com
nameserver 10.4.7.11

~]# dig -t A hdss7-21.host.com +short
10.4.7.21

# be sure to check that resolv.conf contains the search line

The Windows host machine must be updated too (or change the automatic metric priority under IPv4 Advanced -> DNS!)

Set the VMnet8 adapter's DNS to 10.4.7.11
# this must ping through; otherwise troubleshoot
ping hdss7-200.host.com

2.3 Preparing the Self-Signed Certificate Environment

These steps are done on the ops host, 7.200

2.3.1 Download and install cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl*
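
A quick sanity check that the binaries landed on the PATH and run:

cfssl version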

2.3.2 Create the CA certificate signing request (CSR) configuration

mkdir /opt/certs
cat >/opt/certs/ca-csr.json <<EOF
{
    "CN": "zqcd",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "chengdu",
            "L": "chengdu",
            "O": "zq",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}

EOF

CN: Common Name, the field browsers use to decide whether a site's certificate is valid; it usually holds the domain name, and it matters a great deal
C: Country
ST: State or province
L: Locality (city)
O: Organization Name (company)
OU: Organizational Unit (department)

2.3.3 Generate the CA certificate

cd /opt/certs
cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
[root@hdss7-200 certs]# ll
total 16
-rw-r--r-- 1 root root  989 Apr 16 20:53 ca.csr
-rw-r--r-- 1 root root  324 Apr 16 20:52 ca-csr.json
-rw------- 1 root root 1679 Apr 16 20:53 ca-key.pem
-rw-r--r-- 1 root root 1330 Apr 16 20:53 ca.pem
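
To confirm the names and the 20-year expiry took effect, cfssl-certinfo (installed earlier) can decode the new certificate:

cfssl-certinfo -cert ca.pem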

2.4 Preparing the Docker Environment

2.4.1 Install and configure Docker

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
mkdir /etc/docker/
cat >/etc/docker/daemon.json <<EOF
{
  "graph": "/data/docker", 
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.zq.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.7.21.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
EOF

Note: bip must be adjusted to match each host's IP

  • hdss7-21.host.com bip 172.7.21.1/24
  • hdss7-22.host.com bip 172.7.22.1/24
  • hdss7-200.host.com bip 172.7.200.1/24
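
If you would rather not hand-edit bip on every host, it can be derived from the host IP. A sketch, assuming the 10.4.7.x address sits on eth0 (the appendix Vagrantfile uses the same trick with eth1):

IP=$(ip -4 addr show eth0 | awk -F'[ /]+' '/inet /{print $3}')
BIP="172.7.$(echo $IP | cut -d. -f4).1/24"
sed -i "s#\"bip\": \".*\"#\"bip\": \"$BIP\"#" /etc/docker/daemon.json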

2.4.2 Start Docker

mkdir -p /data/docker
systemctl start docker
systemctl enable docker
docker --version

2.5 Deploying the Harbor Private Registry

2.5.1 Download and unpack

tar xf harbor-offline-installer-v1.8.5.tgz -C /opt/
cd /opt/
mv harbor/ harbor-v1.8.5
ln -s /opt/harbor-v1.8.5/ /opt/harbor

2.5.2 Edit the configuration file

[root@hdss7-200 opt]# vi /opt/harbor/harbor.yml
# the items below are what to change; edit them in the file by hand
hostname: harbor.zq.com
http:
  port: 180
harbor_admin_password: Harbor12345
data_volume: /data/harbor
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs

[root@hdss7-200 opt]# mkdir -p /data/harbor/logs

2.5.3 Start Harbor with docker-compose

[root@hdss7-200 opt]# cd /opt/harbor/
yum install docker-compose -y
sh /opt/harbor/install.sh 
docker-compose ps
docker ps -a

2.5.4 Resolve Harbor in DNS

Performed on the DNS server, 7.11

[root@hdss7-11 ~]# vi /var/named/zq.com.zone
2020032002 ; serial   # bump this serial every time the DNS records change
harbor             A    10.4.7.200
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A harbor.zq.com +short
10.4.7.200

2.5.5 Reverse-proxy Harbor with Nginx

Back on the ops host, 7.200

[root@hdss7-200 harbor]# yum install nginx -y
[root@hdss7-200 harbor]# vi /etc/nginx/conf.d/harbor.zq.com.conf
server {
    listen       80;
    server_name  harbor.zq.com;

    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
[root@hdss7-200 harbor]# nginx -t
[root@hdss7-200 harbor]# systemctl start nginx
[root@hdss7-200 harbor]# systemctl enable nginx

Open http://harbor.zq.com in a browser

  • Username: admin
  • Password: Harbor12345

Create a new project: public

Access level: public

2.5.6 Pre-stage the pause/nginx base images

The pause image is what K8S uses when starting a Pod to pre-create shared resources (such as namespaces); the nginx image is what we will use to test Pod creation once K8S is up

docker login harbor.zq.com -uadmin -pHarbor12345
docker pull kubernetes/pause
docker pull nginx:1.17.9

docker tag kubernetes/pause:latest harbor.zq.com/public/pause:latest
docker tag nginx:1.17.9 harbor.zq.com/public/nginx:v1.17.9

docker push harbor.zq.com/public/pause:latest
docker push harbor.zq.com/public/nginx:v1.17.9
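
A round-trip pull from any other docker host in the cluster (its DNS must already point at 10.4.7.11, per 2.2.4) exercises DNS, the nginx proxy, and Harbor in one go:

docker pull harbor.zq.com/public/pause:latest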

2.6 Preparing an Nginx File Server

Create an Nginx virtual host that serves files over HTTP, relying mainly on Nginx's autoindex feature

2.6.1 Create the file server

On 7.200

# create the configuration
cat >/etc/nginx/conf.d/k8s-yaml.zq.com.conf <<EOF
server {
    listen       80;
    server_name  k8s-yaml.zq.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
EOF

# check and reload Nginx
mkdir -p /data/k8s-yaml/coredns
nginx -t
nginx -s reload

2.6.2 Add a DNS record

On the bind9 DNS server, 7.11, add a DNS record

vi /var/named/zq.com.zone
# append one A record at the end
k8s-yaml           A    10.4.7.200
# and bump the serial at the same time
@               IN SOA  dns.zq.com. dnsadmin.zq.com. (
                                2019061803 ; serial

Restart the service and verify:

systemctl restart named
[root@hdss7-11 ~]# dig -t A k8s-yaml.zq.com +short
10.4.7.200

3 Deploying the etcd Service on the Master Nodes

3.1 Deploying the etcd Cluster

Install the etcd service on 12/21/22; node 11 is kept as a standby

3.1.1 Create the JSON configuration used to generate CA-signed certificates

Performed on 7.200. A single configuration holds the settings for server, client, and two-way (peer) communication; when certificates are created later, different profiles from it are selected by passing different parameters

cat >/opt/certs/ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
} 
EOF

Certificate lifetimes are set uniformly to 20 years, so expiry is not a worry. Certificate types: a client certificate is used by a client so the server can authenticate it (e.g. etcdctl, etcd proxy, fleetctl, the docker client); a server certificate is used by a server so clients can verify its identity (e.g. the docker daemon, kube-apiserver); a peer certificate is a two-way certificate used between etcd cluster members.

3.1.2 Create the JSON configuration for the self-signed certificate request (CSR)

Note: every machine that might ever run etcd must be included in the hosts list; otherwise, bringing in a machine that is not on the list later means reissuing the certificates of every etcd member

cat >/opt/certs/etcd-peer-csr.json <<EOF
{
    "CN": "k8s-etcd",
    "hosts": [
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
EOF

3.1.3 Generate the etcd certificate files

cd /opt/certs/
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=peer \
  etcd-peer-csr.json |cfssl-json -bare etcd-peer

[root@hdss7-200 certs]# ll
total 36
-rw-r--r-- 1 root root  837 Apr 19 15:35 ca-config.json
-rw-r--r-- 1 root root  989 Apr 16 20:53 ca.csr
-rw-r--r-- 1 root root  324 Apr 16 20:52 ca-csr.json
-rw------- 1 root root 1679 Apr 16 20:53 ca-key.pem
-rw-r--r-- 1 root root 1330 Apr 16 20:53 ca.pem
-rw-r--r-- 1 root root 1062 Apr 19 15:35 etcd-peer.csr
-rw-r--r-- 1 root root  363 Apr 19 15:35 etcd-peer-csr.json
-rw------- 1 root root 1679 Apr 19 15:35 etcd-peer-key.pem
-rw-r--r-- 1 root root 1419 Apr 19 15:35 etcd-peer.pem

3.2 Installing and Starting the etcd Cluster

7.12 is used as the demonstration; the other 2 machines are nearly identical, and any setting that differs is called out explicitly

3.2.1 Create the etcd user and install the software

etcd releases: https://github.com/etcd-io/etcd/tags

The 3.1 series is recommended; newer versions caused problems here

useradd -s /sbin/nologin -M etcd
wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
tar xf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
cd /opt/
mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
ln -s /opt/etcd-v3.1.20/ /opt/etcd

3.2.2 Create directories and copy the certificates

Create the certificate, data, and log directories

mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
chown -R etcd.etcd /opt/etcd-v3.1.20/
chown -R etcd.etcd /data/etcd/
chown -R etcd.etcd /data/logs/etcd-server/

Copy over the generated certificate files

cd /opt/etcd/certs
scp hdss7-200:/opt/certs/ca.pem .
scp hdss7-200:/opt/certs/etcd-peer.pem .
scp hdss7-200:/opt/certs/etcd-peer-key.pem .
chown -R etcd.etcd /opt/etcd/certs

Alternatively, set up an NFS share first and copy them straight from it

3.2.3 Create the etcd service startup script

Flag reference: https://blog.csdn.net/kmhysoft/article/details/71106995

cat >/opt/etcd/etcd-server-startup.sh <<'EOF'
#!/bin/sh
./etcd \
    --name etcd-server-7-12 \
    --data-dir /data/etcd/etcd-server \
    --listen-peer-urls https://10.4.7.12:2380 \
    --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
    --quota-backend-bytes 8000000000 \
    --initial-advertise-peer-urls https://10.4.7.12:2380 \
    --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
    --initial-cluster  etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
    --ca-file ./certs/ca.pem \
    --cert-file ./certs/etcd-peer.pem \
    --key-file ./certs/etcd-peer-key.pem \
    --client-cert-auth  \
    --trusted-ca-file ./certs/ca.pem \
    --peer-ca-file ./certs/ca.pem \
    --peer-cert-file ./certs/etcd-peer.pem \
    --peer-key-file ./certs/etcd-peer-key.pem \
    --peer-client-cert-auth \
    --peer-trusted-ca-file ./certs/ca.pem \
    --log-output stdout
EOF
[root@hdss7-12 ~]# chmod +x /opt/etcd/etcd-server-startup.sh

Note: in the startup script above, a few options differ on every server

--name    #node name
--listen-peer-urls		#address this node listens on for other members
--listen-client-urls	#address this node listens on for etcd clients
--initial-advertise-peer-urls	#peer address advertised to other members
--advertise-client-urls	#client address advertised to etcd clients

3.2.4 Start etcd under supervisor

Install the supervisor package

yum install supervisor -y
systemctl start supervisord
systemctl enable supervisord

Create the supervisor configuration that manages etcd

Configuration reference: https://www.jianshu.com/p/53b5737534e8

cat >/etc/supervisord.d/etcd-server.ini <<EOF
[program:etcd-server]  ; displayed program name; like my.cnf, multiple sections allowed
command=sh /opt/etcd/etcd-server-startup.sh
numprocs=1             ; number of processes to start (def 1)
directory=/opt/etcd    ; directory to cd into before starting (def no cwd)
autostart=true         ; start automatically (default: true)
autorestart=true       ; restart automatically (default: true)
startsecs=30           ; seconds it must stay up to count as started (def. 1)
startretries=3         ; startup retries (default 3)
exitcodes=0,2          ; expected exit codes (default 0,2)
stopsignal=QUIT        ; stop signal (default TERM)
stopwaitsecs=10        ; wait before force-kill on stop (default 10)
user=etcd              ; user to run as
redirect_stderr=true   ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size (default 50MB)
stdout_logfile_backups=4      ; rotated log file count (default 10)
stdout_capture_maxbytes=1MB   ; capture pipe size (default 0)
;the child spawns children of its own; these options avoid orphan processes
killasgroup=true
stopasgroup=true
EOF

Start the etcd service and check it

supervisorctl update
supervisorctl status
netstat -lntup|grep etcd

3.2.5 Deploy and start the remaining cluster members
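
The steps are the same as above; only the per-node options listed in 3.2.3 change. For hdss7-21, for example, the 7.12 script can be adapted like this (a sketch; the --initial-cluster line lists all three members and must be excluded from the address substitution):

scp hdss7-12:/opt/etcd/etcd-server-startup.sh /opt/etcd/
sed -i -e 's/--name etcd-server-7-12/--name etcd-server-7-21/' \
       -e '/--initial-cluster/!s/10\.4\.7\.12/10.4.7.21/g' \
       /opt/etcd/etcd-server-startup.sh

Then repeat 3.2.1, 3.2.2, and 3.2.4 (user, directories, certificates, supervisor) on each member.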

3.2.6 Check the cluster status

[root@hdss7-12 certs]# /opt/etcd/etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
[root@hdss7-12 certs]# /opt/etcd/etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true

4 Deploying the kube-apiserver Service on the Master Nodes

Download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md

Download URL:

4.1 Issuing the Client Certificate

Certificate signing is always done on 7.200

This certificate is used for communication between apiserver and etcd

4.1.1 Create the JSON configuration for the certificate CSR

cat >/opt/certs/client-csr.json <<EOF
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
EOF

4.1.2 Generate the client certificate files

cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=client \
      client-csr.json |cfssl-json -bare client

[root@hdss7-200 certs]# ll|grep client
-rw-r--r-- 1 root root  993 Apr 20 21:30 client.csr
-rw-r--r-- 1 root root  280 Apr 20 21:30 client-csr.json
-rw------- 1 root root 1675 Apr 20 21:30 client-key.pem
-rw-r--r-- 1 root root 1359 Apr 20 21:30 client.pem

4.2 Issuing the kube-apiserver Certificate

This certificate secures the service that apiserver exposes externally

4.2.1 Create the JSON configuration for the certificate CSR

The hosts list in this config contains every address apiserver might ever be deployed on; 10.4.7.10 is the VIP of the reverse proxy

cat >/opt/certs/apiserver-csr.json <<EOF
{
    "CN": "k8s-apiserver",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.4.7.10",
        "10.4.7.21",
        "10.4.7.22",
        "10.4.7.23"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
EOF

4.2.2 Generate the kube-apiserver certificate files

cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=server \
      apiserver-csr.json |cfssl-json -bare apiserver

[root@hdss7-200 certs]# ll|grep apiserver
-rw-r--r-- 1 root root 1249 Apr 20 21:31 apiserver.csr
-rw-r--r-- 1 root root  566 Apr 20 21:31 apiserver-csr.json
-rw------- 1 root root 1675 Apr 20 21:31 apiserver-key.pem
-rw-r--r-- 1 root root 1590 Apr 20 21:31 apiserver.pem

4.3 Downloading and Installing kube-apiserver

Using 7.21 as the example

# upload and unpack
tar xf kubernetes-server-linux-amd64-v1.15.2.tar.gz  -C /opt
cd /opt
mv kubernetes/ kubernetes-v1.15.2
ln -s /opt/kubernetes-v1.15.2/ /opt/kubernetes

# remove the source tarball and the Docker image tarballs
cd /opt/kubernetes
rm -rf kubernetes-src.tar.gz
cd server/bin
rm -f *.tar
rm -f *_tag

# symlink the binary into the system PATH
ln -s /opt/kubernetes/server/bin/kubectl /usr/bin/kubectl

4.4 Deploying the apiserver Service

4.4.1 Copy the certificate files

Copy the certificates into the /opt/kubernetes/server/bin/cert directory

# create the directory
mkdir -p /opt/kubernetes/server/bin/cert
cd /opt/kubernetes/server/bin/cert

# copy the three certificate pairs
scp hdss7-200:/opt/certs/ca.pem .
scp hdss7-200:/opt/certs/ca-key.pem .
scp hdss7-200:/opt/certs/client.pem .
scp hdss7-200:/opt/certs/client-key.pem .
scp hdss7-200:/opt/certs/apiserver.pem .
scp hdss7-200:/opt/certs/apiserver-key.pem .

4.4.2 Create the audit configuration

The audit policy is configuration K8S requires you to have; you can use it as-is without understanding every rule

mkdir /opt/kubernetes/server/conf

cat >/opt/kubernetes/server/conf/audit.yaml <<'EOF'
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
EOF

4.4.3 Create the apiserver startup script

cat >/opt/kubernetes/server/bin/kube-apiserver.sh <<'EOF'
#!/bin/bash
./kube-apiserver \
  --apiserver-count 2 \
  --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \
  --audit-policy-file ../conf/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file ./cert/ca.pem \
  --requestheader-client-ca-file ./cert/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile ./cert/ca.pem \
  --etcd-certfile ./cert/client.pem \
  --etcd-keyfile ./cert/client-key.pem \
  --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --service-account-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --service-node-port-range 3000-29999 \
  --target-ram-mb=1024 \
  --kubelet-client-certificate ./cert/client.pem \
  --kubelet-client-key ./cert/client-key.pem \
  --log-dir  /data/logs/kubernetes/kube-apiserver \
  --tls-cert-file ./cert/apiserver.pem \
  --tls-private-key-file ./cert/apiserver-key.pem \
  --v 2
EOF

# make it executable
chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh

4.4.4 Create the supervisor configuration for apiserver

Install the supervisor package

yum install supervisor -y
systemctl start supervisord
systemctl enable supervisord
cat >/etc/supervisord.d/kube-apiserver.ini <<EOF
[program:kube-apiserver]      ; displayed program name; like my.cnf, multiple sections allowed
command=sh /opt/kubernetes/server/bin/kube-apiserver.sh
numprocs=1                    ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true                ; start automatically (default: true)
autorestart=true              ; restart automatically (default: true)
startsecs=30                  ; seconds it must stay up to count as started (def. 1)
startretries=3                ; startup retries (default 3)
exitcodes=0,2                 ; expected exit codes (default 0,2)
stopsignal=QUIT               ; stop signal (default TERM)
stopwaitsecs=10               ; wait before force-kill on stop (default 10)
user=root                     ; user to run as
redirect_stderr=true          ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size (default 50MB)
stdout_logfile_backups=4      ; rotated log file count (default 10)
stdout_capture_maxbytes=1MB   ; capture pipe size (default 0)
;the child spawns children of its own; these options avoid orphan processes
killasgroup=true
stopasgroup=true
EOF

4.4.5 Start the apiserver service and check it

mkdir -p /data/logs/kubernetes/kube-apiserver
supervisorctl update
supervisorctl status
netstat -nltup|grep kube-api

4.4.6 Deploy and start the remaining apiserver machines

Deployment on the other cluster machines is identical, so it is omitted

4.5 Deploying the controller-manager Service

  • apiserver
  • controller-manager
  • kube-scheduler

All three services ship in the same tarball, so the latter two need no separate unpacking. They also run on the same host and talk to each other over http://127.0.0.1, so no certificates are needed between them

4.5.1 Create the controller-manager startup script

cat >/opt/kubernetes/server/bin/kube-controller-manager.sh <<'EOF'
#!/bin/sh
./kube-controller-manager \
  --cluster-cidr 172.7.0.0/16 \
  --leader-elect true \
  --log-dir /data/logs/kubernetes/kube-controller-manager \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file ./cert/ca-key.pem \
  --service-cluster-ip-range 192.168.0.0/16 \
  --root-ca-file ./cert/ca.pem \
  --v 2 
EOF

# make it executable
chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh

4.5.2 Create the supervisor configuration

cat >/etc/supervisord.d/kube-conntroller-manager.ini <<EOF
[program:kube-controller-manager] ; displayed program name
command=sh /opt/kubernetes/server/bin/kube-controller-manager.sh
numprocs=1                    ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true                ; start automatically (default: true)
autorestart=true              ; restart automatically (default: true)
startsecs=30                  ; seconds it must stay up to count as started (def. 1)
startretries=3                ; startup retries (default 3)
exitcodes=0,2                 ; expected exit codes (default 0,2)
stopsignal=QUIT               ; stop signal (default TERM)
stopwaitsecs=10               ; wait before force-kill on stop (default 10)
user=root                     ; user to run as
redirect_stderr=true          ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controller.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size (default 50MB)
stdout_logfile_backups=4      ; rotated log file count (default 10)
stdout_capture_maxbytes=1MB   ; capture pipe size (default 0)
;the child spawns children of its own; these options avoid orphan processes
killasgroup=true
stopasgroup=true
EOF

4.5.3 Start the service and check it

mkdir -p /data/logs/kubernetes/kube-controller-manager
supervisorctl update
supervisorctl status

4.5.4 Deploy and start on the rest of the cluster

Identical everywhere else, so omitted

4.6 Deploying the kube-scheduler Service

4.6.1 Create the startup script

cat >/opt/kubernetes/server/bin/kube-scheduler.sh <<'EOF'
#!/bin/sh
./kube-scheduler \
  --leader-elect  \
  --log-dir /data/logs/kubernetes/kube-scheduler \
  --master http://127.0.0.1:8080 \
  --v 2
EOF

# make it executable
chmod +x  /opt/kubernetes/server/bin/kube-scheduler.sh

4.6.2 Create the supervisor configuration

cat >/etc/supervisord.d/kube-scheduler.ini <<EOF
[program:kube-scheduler]
command=sh /opt/kubernetes/server/bin/kube-scheduler.sh
numprocs=1                    ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true                ; start automatically (default: true)
autorestart=true              ; restart automatically (default: true)
startsecs=30                  ; seconds it must stay up to count as started (def. 1)
startretries=3                ; startup retries (default 3)
exitcodes=0,2                 ; expected exit codes (default 0,2)
stopsignal=QUIT               ; stop signal (default TERM)
stopwaitsecs=10               ; wait before force-kill on stop (default 10)
user=root                     ; user to run as
redirect_stderr=true          ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size (default 50MB)
stdout_logfile_backups=4      ; rotated log file count (default 10)
stdout_capture_maxbytes=1MB   ; capture pipe size (default 0)
;the child spawns children of its own; these options avoid orphan processes
killasgroup=true
stopasgroup=true
EOF

4.6.3 Start the service and check it

mkdir -p /data/logs/kubernetes/kube-scheduler
supervisorctl update
supervisorctl status

4.6.4 Deploy and start on the rest of the cluster

Identical everywhere else, so omitted

4.7 Check the Master Node Deployment

[root@hdss7-21 bin]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}

5 Deploying a Layer-4 Reverse Proxy for apiserver

With the three services on the master nodes deployed, a reverse proxy is needed to unify the two apiservers behind one external port. Here an nginx + keepalived high-availability pair is deployed on the 7.11 and 7.12 machines

5.1 Deploy the Nginx Layer-4 Proxy

Port 7443 proxies apiserver's port 6443, and keepalived manages the VIP 10.4.7.10

5.1.1 Install the packages with yum

yum install -y nginx nginx-mod-stream keepalived

5.1.2 Configure Nginx

The layer-4 proxy cannot be written under the default conf.d directory, because that directory is included from inside the http module. Either append the stream block at the bottom of the main configuration file, or mimic the layer-7 setup and create a dedicated directory for layer-4 configs

# 1. add a layer-4 include directory to the nginx configuration
mkdir /etc/nginx/tcp.d/
echo 'include /etc/nginx/tcp.d/*.conf;' >>/etc/nginx/nginx.conf

# write the proxy configuration
cat >/etc/nginx/tcp.d/apiserver.conf <<EOF
stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443     max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
EOF

5.1.3 Start Nginx

nginx -t
systemctl start nginx
systemctl enable nginx
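
Confirm the stream listener is up on both proxies:

ss -lnt | grep 7443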

5.2 Configure keepalived

5.2.1 Create the port-check script

Create the script

cat >/etc/keepalived/check_port.sh <<'EOF'
#!/bin/bash
#keepalived port monitoring script
#usage: keepalived passes a port number in; check whether that port is listening and return the result
CHK_PORT=$1
if [ -n "$CHK_PORT" ];then
        PORT_PROCESS=`ss -lnt|grep $CHK_PORT|wc -l`
        if [ $PORT_PROCESS -eq 0 ];then
                echo "Port $CHK_PORT Is Not Used,End."
                exit 1
        fi
else
        echo "Check Port Cant Be Empty!"
fi
EOF

Make the script executable

chmod +x /etc/keepalived/check_port.sh
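
The script can be exercised by hand before keepalived uses it; with nginx listening on 7443 it should print nothing and exit 0:

sh /etc/keepalived/check_port.sh 7443; echo $?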

5.2.2 Create the keepalived master configuration

The master is defined as 10.4.7.11 and the backup as 10.4.7.12. Note: the master config adds the nopreempt option (non-preemptive), meaning that once the VIP has failed over, the recovered master does not take it back; this favors stability

cat >/etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
   router_id 10.4.7.11
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}
EOF

5.2.3 Create the keepalived backup configuration

cat >/etc/keepalived/keepalived.conf <<'EOF'
! Configuration File for keepalived
global_defs {
    router_id 10.4.7.12
}
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 251
    mcast_src_ip 10.4.7.12
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}
EOF

5.2.4 Start keepalived and verify

systemctl start  keepalived
systemctl enable keepalived
ip addr|grep '10.4.7.10'
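
A failover drill (a sketch) confirms both the VIP movement and the nopreempt behavior:

# on the current VIP holder, 7.11: stop nginx so the check script starts failing
systemctl stop nginx
# on 7.12 a few seconds later, the VIP should have arrived
ip addr | grep '10.4.7.10'
# bring 7.11 back; thanks to nopreempt the VIP stays on 7.12
systemctl start nginx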

6 Deploying the Node Services

6.1 Issuing the kubelet Certificate

Certificates are signed on 7.200, as always

6.1.1 Create the JSON configuration for the certificate CSR

cd /opt/certs/
cat >/opt/certs/kubelet-csr.json <<EOF
{
    "CN": "k8s-kubelet",
    "hosts": [
    "127.0.0.1",
    "10.4.7.10",
    "10.4.7.21",
    "10.4.7.22",
    "10.4.7.23",
    "10.4.7.24",
    "10.4.7.25",
    "10.4.7.26",
    "10.4.7.27",
    "10.4.7.28"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
EOF

6.1.2 Generate the kubelet certificate files

cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=server \
      kubelet-csr.json | cfssl-json -bare kubelet

[root@hdss7-200 certs]# ll |grep kubelet
-rw-r--r-- 1 root root 1115 Apr 22 22:17 kubelet.csr
-rw-r--r-- 1 root root  452 Apr 22 22:17 kubelet-csr.json
-rw------- 1 root root 1679 Apr 22 22:17 kubelet-key.pem
-rw-r--r-- 1 root root 1460 Apr 22 22:17 kubelet.pem

6.2 Creating the kubelet Service

6.2.1 Copy the certificates to the node

cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kubelet.pem .
scp hdss7-200:/opt/certs/kubelet-key.pem .

6.2.2 Create the kubelet configuration

Creating kubelet's config file, kubelet.kubeconfig, is a bit involved; it takes four steps

(1) set-cluster (set the cluster parameters)

Create the cluster myk8s using the CA certificate; the apiserver endpoint used is the VIP 10.4.7.10

cd /opt/kubernetes/server/conf/

kubectl config set-cluster myk8s \
    --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
    --embed-certs=true \
    --server=https://10.4.7.10:7443 \
    --kubeconfig=kubelet.kubeconfig

(2) set-credentials (set the client credentials)

Create the user k8s-node using the client certificate

kubectl config set-credentials k8s-node \
    --client-certificate=/opt/kubernetes/server/bin/cert/client.pem \
    --client-key=/opt/kubernetes/server/bin/cert/client-key.pem \
    --embed-certs=true \
    --kubeconfig=kubelet.kubeconfig

(3) set-context (tie the cluster and user together)

Create myk8s-context, associating the cluster myk8s with the user k8s-node

kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=k8s-node \
    --kubeconfig=kubelet.kubeconfig

(4) use-context

Use the generated config to register with apiserver; the registration is written into etcd, so it only needs to be done once

kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig

(5) Inspect the generated kubelet.kubeconfig

[root@hdss7-21 conf]# cat kubelet.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxxxxxx
    server: https://10.4.7.10:7443
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: k8s-node
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users:
- name: k8s-node
  user:
    client-certificate-data: xxxxxxxx
    client-key-data: xxxxxxxx

As you can see, this one file bundles the cluster name, the user name, the cluster CA certificate, and the user's certificate and key
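
kubectl can render the same file with the embedded certificate data elided, which is easier to review:

kubectl config view --kubeconfig=kubelet.kubeconfig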

6.2.3 Create the k8s-node.yaml configuration

cat >/opt/kubernetes/server/conf/k8s-node.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
EOF

Using RBAC, this creates a ClusterRoleBinding resource that binds the user k8s-node to the ClusterRole named system:node, giving that user the permissions of a cluster compute node. Because this user name is also the one specified in the kubeconfig, any kubelet started with that kubeconfig is able to become a node

6.2.4 Apply the resource configuration

Apply the resource configuration and inspect the result

# apply the resource configuration
kubectl create -f /opt/kubernetes/server/conf/k8s-node.yaml

# inspect the cluster role binding and its attributes
[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node
NAME       AGE
k8s-node   13s

[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-node -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2020-04-22T14:38:09Z"
  name: k8s-node
  resourceVersion: "21217"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: 597ffb0f-f92d-4eb5-aca2-2fe73397e2e4
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
  
# at this point only the resource exists; there is no actual node yet, as verified below
[root@hdss7-21 conf]# kubectl get nodes
No resources found.

6.2.5 Create the kubelet startup script

The --hostname-override value is the node's own hostname and therefore differs on every node; remember to change it

cat >/opt/kubernetes/server/bin/kubelet.sh <<'EOF'
#!/bin/sh
./kubelet \
  --hostname-override hdss7-21.host.com \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./cert/ca.pem \
  --tls-cert-file ./cert/kubelet.pem \
  --tls-private-key-file ./cert/kubelet-key.pem \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ../conf/kubelet.kubeconfig \
  --log-dir /data/logs/kubernetes/kube-kubelet \
  --pod-infra-container-image harbor.zq.com/public/pause:latest \
  --root-dir /data/kubelet
EOF

# make it executable & create the directories
chmod +x /opt/kubernetes/server/bin/kubelet.sh
mkdir -p /data/logs/kubernetes/kube-kubelet
mkdir -p /data/kubelet

6.2.6 Create the supervisor configuration

cat >/etc/supervisord.d/kube-kubelet.ini  <<EOF
[program:kube-kubelet]
command=sh /opt/kubernetes/server/bin/kubelet.sh
numprocs=1                    ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true                ; start automatically (default: true)
autorestart=true              ; restart automatically (default: true)
startsecs=30                  ; seconds it must stay up to count as started (def. 1)
startretries=3                ; startup retries (default 3)
exitcodes=0,2                 ; expected exit codes (default 0,2)
stopsignal=QUIT               ; stop signal (default TERM)
stopwaitsecs=10               ; wait before force-kill on stop (default 10)
user=root                     ; user to run as
redirect_stderr=true          ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size (default 50MB)
stdout_logfile_backups=4      ; rotated log file count (default 10)
stdout_capture_maxbytes=1MB   ; capture pipe size (default 0)
;the child spawns children of its own; these options avoid orphan processes
killasgroup=true
stopasgroup=true
EOF

6.2.7 Start the service and check it

supervisorctl update
supervisorctl status
[root@hdss7-21 server]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
hdss7-21.host.com   Ready    <none>   65s   v1.15.5

6.2.8 Deploy the remaining nodes

With the first node done, the rest are much simpler: copy kubelet.kubeconfig over, then create the startup script and launch it with supervisord. (If you prefer not to copy the file, you can redo the four kubeconfig steps by hand)

# copy the certificates
cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kubelet.pem .
scp hdss7-200:/opt/certs/kubelet-key.pem .

# copy the config file
cd /opt/kubernetes/server/conf/
scp hdss7-21:/opt/kubernetes/server/conf/kubelet.kubeconfig .

After copying the config, the remaining steps follow 6.2.5 "Create the kubelet startup script"; apart from --hostname-override in the script everything is identical. For example, a sketch for hdss7-22:
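
# a sketch, run on hdss7-22
scp hdss7-21:/opt/kubernetes/server/bin/kubelet.sh /opt/kubernetes/server/bin/
sed -i 's/hdss7-21.host.com/hdss7-22.host.com/' /opt/kubernetes/server/bin/kubelet.sh
mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet

The supervisor configuration from 6.2.6 then applies unchanged.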

6.2.9 Check all nodes and label them

This step is optional; the labels are purely for easier identification

kubectl get nodes
kubectl label node hdss7-21.host.com node-role.kubernetes.io/master=
kubectl label node hdss7-21.host.com node-role.kubernetes.io/node=

[root@hdss7-22 cert]# kubectl get nodes
NAME                STATUS   ROLES         AGE   VERSION
hdss7-21.host.com   Ready    master,node   9m    v1.15.5
hdss7-22.host.com   Ready    <none>        64s   v1.15.5

6.3 Creating the kube-proxy Service

Certificates are signed on 7.200

6.3.1 Issue the kube-proxy certificate

(1) Create the JSON configuration for the certificate CSR

cd /opt/certs/
cat >/opt/certs/kube-proxy-csr.json <<EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "zq",
            "OU": "ops"
        }
    ]
}
EOF

(2) Generate the kube-proxy certificate files

cfssl gencert \
      -ca=ca.pem \
      -ca-key=ca-key.pem \
      -config=ca-config.json \
      -profile=client \
      kube-proxy-csr.json |cfssl-json -bare kube-proxy-client

(3) Check the generated files

[root@hdss7-200 certs]# ll |grep proxy
-rw-r--r-- 1 root root 1005 Apr 22 22:54 kube-proxy-client.csr
-rw------- 1 root root 1675 Apr 22 22:54 kube-proxy-client-key.pem
-rw-r--r-- 1 root root 1371 Apr 22 22:54 kube-proxy-client.pem
-rw-r--r-- 1 root root  267 Apr 22 22:54 kube-proxy-csr.json

6.3.2 Copy the certificates to each node

cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kube-proxy-client.pem .
scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .

6.3.3 Create the kube-proxy configuration

Again four steps, just like kubelet

(1) set-cluster

cd /opt/kubernetes/server/conf/

kubectl config set-cluster myk8s \
    --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \
    --embed-certs=true \
    --server=https://10.4.7.10:7443 \
    --kubeconfig=kube-proxy.kubeconfig

(2) set-credentials

kubectl config set-credentials kube-proxy \
    --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \
    --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

(3) set-context

kubectl config set-context myk8s-context \
    --cluster=myk8s \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

(4) use-context

kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig

6.3.4 Load the ipvs modules for kube-proxy to use at startup

# create the ipvs loading script
cat >/etc/ipvs.sh <<'EOF'
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
  /sbin/modinfo -F filename $i &>/dev/null
  if [ $? -eq 0 ];then
    /sbin/modprobe $i
  fi
done
EOF

# run the script to enable ipvs
sh /etc/ipvs.sh 

# verify the result
[root@hdss7-21 conf]# lsmod |grep ip_vs
ip_vs_wrr              12697  0 
ip_vs_wlc              12519  0 
...... (truncated)
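
The modules loaded this way do not survive a reboot. One simple way to persist them (an addition, not in the original) is to hook the script into rc.local:

chmod +x /etc/rc.d/rc.local
echo 'sh /etc/ipvs.sh' >>/etc/rc.d/rc.local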

6.3.5 Create the kube-proxy startup script

As before, --hostname-override differs on each node and must be changed

cat >/opt/kubernetes/server/bin/kube-proxy.sh <<'EOF'
#!/bin/sh
./kube-proxy \
  --hostname-override hdss7-21.host.com \
  --cluster-cidr 172.7.0.0/16 \
  --proxy-mode=ipvs \
  --ipvs-scheduler=nq \
  --kubeconfig ../conf/kube-proxy.kubeconfig
EOF

# make it executable
chmod +x /opt/kubernetes/server/bin/kube-proxy.sh 

6.3.6 Create the supervisor configuration for kube-proxy

cat >/etc/supervisord.d/kube-proxy.ini <<'EOF'
[program:kube-proxy]
command=sh /opt/kubernetes/server/bin/kube-proxy.sh
numprocs=1                    ; number of processes to start (def 1)
directory=/opt/kubernetes/server/bin
autostart=true                ; start automatically (default: true)
autorestart=true              ; restart automatically (default: true)
startsecs=30                  ; seconds it must stay up to count as started (def. 1)
startretries=3                ; startup retries (default 3)
exitcodes=0,2                 ; expected exit codes (default 0,2)
stopsignal=QUIT               ; stop signal (default TERM)
stopwaitsecs=10               ; wait before force-kill on stop (default 10)
user=root                     ; user to run as
redirect_stderr=true          ; redirect stderr to stdout (def false)
stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log
stdout_logfile_maxbytes=64MB  ; max log file size (default 50MB)
stdout_logfile_backups=4      ; rotated log file count (default 10)
stdout_capture_maxbytes=1MB   ; capture pipe size (default 0)
;the child spawns children of its own; these options avoid orphan processes
killasgroup=true
stopasgroup=true
EOF

6.3.7 Start the service and check it

mkdir -p /data/logs/kubernetes/kube-proxy
supervisorctl update
supervisorctl status
[root@hdss7-21 conf]# kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   47h

# check whether ipvs picked up the new virtual server
yum install ipvsadm -y
[root@hdss7-21 conf]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.1:443 nq
  -> 10.4.7.21:6443               Masq    1      0          0         
  -> 10.4.7.22:6443               Masq    1      0          0 

6.3.8 Deploy the remaining nodes

First copy kube-proxy.kubeconfig into the conf directory on hdss7-22.host.com

# copy the certificate files
cd /opt/kubernetes/server/bin/cert
scp hdss7-200:/opt/certs/kube-proxy-client.pem .
scp hdss7-200:/opt/certs/kube-proxy-client-key.pem .

# copy the config file
cd /opt/kubernetes/server/conf/
scp hdss7-21:/opt/kubernetes/server/conf/kube-proxy.kubeconfig .

The only other difference is the hostname, which has already been covered, so the rest is omitted

7 Verifying the Kubernetes Cluster

7.1 Create a Resource Manifest on Any Node

cat >/root/nginx-ds.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: harbor.zq.com/public/nginx:v1.17.9
        ports:
        - containerPort: 80
EOF

7.2 Apply the Manifest and Check

7.2.1 Apply the resource configuration

kubectl create -f /root/nginx-ds.yaml
[root@hdss7-22 conf]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-j777c   1/1     Running   0          8s
nginx-ds-nwsd6   1/1     Running   0          8s


7.2.2 Check from the other node

kubectl get pods
kubectl get pods -o wide
curl 172.7.22.2

7.2.3 Confirm Kubernetes is fully up

[root@hdss7-22 conf]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-1               Healthy   {"health": "true"}   
controller-manager   Healthy   ok                   
scheduler            Healthy   ok        

[root@hdss7-21 ~]# kubectl get nodes 
NAME                STATUS   ROLES         AGE    VERSION
hdss7-21.host.com   Ready    master,node   6d1h   v1.15.5
hdss7-22.host.com   Ready    <none>        6d1h   v1.15.5


[root@hdss7-22 ~]# kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-j777c   1/1     Running   0          6m45s
nginx-ds-nwsd6   1/1     Running   0          6m45s

Appendix: Going Further

1. Use Vagrant + VirtualBox to spin up the test cluster environment quickly. Vagrantfile contents:

# -*- mode: ruby -*-
# vi: set ft=ruby :

node_servers = {
  :'hdss7-11.host.com' => '10.4.7.11',
  :'hdss7-12.host.com' => '10.4.7.12',
  :'hdss7-21.host.com' => '10.4.7.21',
  :'hdss7-22.host.com' => '10.4.7.22',
  :'hdss7-200.host.com' => '10.4.7.200'
}

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7.5_18.04_x86-64"

  node_servers.each do |node_servers_name, node_server_ip|
    config.vm.define node_servers_name do |node_config|
      node_config.vm.hostname = "#{node_servers_name.to_s}"
      node_config.vm.network :private_network, ip: node_server_ip
      node_config.vm.provider "virtualbox" do |vb|
        vb.name = node_servers_name.to_s
      end
  
      node_config.vm.provision "shell", inline: <<-SHELL
        sudo -i
        # disable selinux (plain sed, since backslash escapes would be mangled by Ruby's heredoc interpolation)
        sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
        setenforce 0
      
        # stop the firewall
        systemctl stop firewalld
        systemctl disable firewalld

        ##enable ssh root password login
        sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config 
        sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
        systemctl restart sshd
        # set the vagrant user's password to 'root' (note: this is the vagrant user, not root)
        echo 'root' |  passwd --stdin vagrant

        ##create working directories
        mkdir -p /server/{tools,scripts,backup,docker-compose}
      
        ##switch to the Aliyun yum mirrors
        mv /etc/yum.repos.d/CentOS-Base.repo{,.backup}
        curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
        sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
        #epel
        wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
        #yum clean all && yum makecache

        # install common tools
        yum install -y wget net-tools telnet tree nmap sysstat lrzsz dos2unix ntp ntpdate

        ##set the timezone
        timedatectl set-timezone Asia/Shanghai

        #set up time synchronization
        ntpdate ntp.aliyun.com
        #add a cron job to sync the clock every 10 minutes
        echo "0-59/10 * * * * /usr/sbin/ntpdate us.pool.ntp.org | logger -t NTP" >> /var/spool/cron/root
        systemctl restart crond

        #install docker
        yum remove -y docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-selinux \
        docker-engine-selinux \
        docker-engine
    
        yum install -y yum-utils device-mapper-persistent-data lvm2
        yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
        yum makecache fast
      
        #to pin a docker version: yum install -y docker-ce-19.03.5 docker-ce-cli-19.03.5
        yum -y install docker-ce
      
        ##registry mirror and docker daemon config
        mkdir -p /etc/docker  /data/docker

        BIP=$(ip ad |grep 'eth1$' |awk -F '[ :]+|/' '{print $3}' |awk -F. '{print 172"."$3"."$4".1/24"}')

        cat > /etc/docker/daemon.json <<EOF
{
  "graph": "/data/docker",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.zq.com"],
  "registry-mirrors": ["https://2apmvngw.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "bip": "$BIP",
  "live-restore": true
}
EOF
        systemctl daemon-reload
        systemctl start docker
        systemctl enable docker

        ##install the latest docker-compose
        curl -L https://mirrors.aliyun.com/docker-toolbox/linux/compose/1.21.2/docker-compose-Linux-x86_64 -o  /usr/bin/docker-compose
        chmod +x /usr/bin/docker-compose
        docker-compose -v
      SHELL
    end
  end
end
2. Non-interactive, password-authenticated certificate transfer with scp:

sshpass -p123456 scp -r -o StrictHostKeyChecking=no hdss7-200:/opt/certs/kube-proxy*.pem /opt/kubernetes/server/bin/cert/

Original article (Chinese): https://www.cnblogs.com/noah-luo/p/13345164.html
