KubeSphere 3.0 Deployment

Cluster Machine Configuration

IP          hostname    OS          Spec
30.0.1.248  k8s-master  CentOS 7.6  8C / 8G / 100G
30.0.1.185  k8s-node1   CentOS 7.6  8C / 8G / 100G
30.0.1.200  k8s-node2   CentOS 7.6  8C / 8G / 100G

k8s-node2 is held back at first and joined to the cluster later via "add node".

Environment Setup

Environment Configuration

  • Set the hostname (on every node, with that node's name)
    hostnamectl set-hostname k8s-master

  • Add entries to the hosts file (on every node)

# vi /etc/hosts
...
30.0.1.248 k8s-master
30.0.1.185 k8s-node1
  • Disable the firewall (on every node)
systemctl stop firewalld
systemctl disable firewalld
  • Disable SELinux (on every node)
# vi /etc/sysconfig/selinux
...
SELINUX=disabled
# Takes effect after a reboot; run `setenforce 0` to switch SELinux off immediately
  • Switch to the Aliyun yum mirror (on every node)
# Back up the original repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the new CentOS-Base.repo into /etc/yum.repos.d/
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
  • Install common packages (on every node)

yum -y install epel-release.noarch conntrack ipvsadm ipset jq sysstat curl iptables libseccomp vim lrzsz bash-completion

  • Disable the swap partition (on every node)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
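The sed command above comments out the swap entry in /etc/fstab so swap stays off across reboots. A quick way to preview its effect is to run it against a scratch copy first (the fstab content below is a hypothetical sample):

```shell
# Demonstrate the swap-commenting sed on a scratch copy of fstab
# (sample content is hypothetical; the real command edits /etc/fstab in place)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /       xfs   defaults 0 0
/dev/mapper/centos-swap swap    swap  defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # the swap line is now prefixed with '#'
```

Only lines containing " swap " are touched, so the root filesystem entry is left as-is.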

Online Installation of KubeSphere

  • Download the KubeKey installer

curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -

  • Make kk executable

chmod +x kk

  • Create the cluster configuration file

./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9

  • The default file config-sample.yaml is created; edit it to match your environment
# vi ~/config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master, address: 30.0.1.248, internalAddress: 30.0.1.248, user: root, password: root@openlab}
  - {name: k8s-node1, address: 30.0.1.185, internalAddress: 30.0.1.185, user: root, password: root@openlab}
  roleGroups:
    etcd:
    - k8s-master
    master:
    - k8s-master
    worker:
    - k8s-master
    - k8s-node1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []


---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  notification:
    enabled: false
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
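The pluggable components left disabled above (devops, logging, alerting, servicemesh, and so on) can be switched on after installation without reinstalling: edit the same ClusterConfiguration object with `kubectl edit clusterconfiguration ks-installer -n kubesphere-system` and flip the relevant flag. For example, to enable DevOps, the fragment under spec becomes:

```yaml
# Fragment of spec in the ClusterConfiguration above; set enabled: true and save,
# then ks-installer reconciles and deploys the component
devops:
  enabled: true
```

After saving, the installer pod picks up the change and deploys the component; progress can be followed with the same `kubectl logs` command used to verify the install.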
  • Create the cluster with KubeKey

./kk create cluster -f config-sample.yaml

  • Verify the installation

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

If the log ends with "Welcome to KubeSphere", the installation succeeded.

Verify the Installation

Open a browser at <node IP>:30880 and log in with the default account admin and password p@88w0rd.
Check that the cluster is running normally and that all component states are healthy.

Offline Installation of KubeSphere

  • Download the all-in-one offline package
# md5: 65e9a1158a682412faa1166c0cf06772
curl -Ok https://kubesphere-installer.pek3b.qingstor.com/offline/v3.0.0/kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz
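Before unpacking, it is worth checking the downloaded tarball against the md5 given above. A sketch of the `md5sum -c` usage on a small stand-in file (for the real check, substitute the tarball name and the hash 65e9a1158a682412faa1166c0cf06772):

```shell
# Verify a file against a known md5 before unpacking.
# Demonstrated on a stand-in file whose hash we compute first;
# for the real package, paste the published hash instead.
printf 'demo payload\n' > offline-package.demo
sum=$(md5sum offline-package.demo | awk '{print $1}')
# md5sum -c expects lines of the form "<hash>  <filename>" (two spaces)
echo "$sum  offline-package.demo" | md5sum -c -
# prints: offline-package.demo: OK
```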
  • Create the cluster configuration file
    Unpack the package and enter the kubesphere-all-v3.0.0-offline-linux-amd64 directory
    ./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
    Modify the configuration file as described above

  • Initialize the environment

# The following command installs the dependencies on every node listed in the configuration file:
./kk init os -f config-sample.yaml -s ./dependencies/

# To have kk create a self-signed image registry as well, run:
./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
  • Import the images
# Enter the kubesphere-all-v3.0.0-offline-linux-amd64/kubesphere-images-v3.0.0 directory
# Pass your real registry address as the script argument
./push-images.sh dockerhub.kubekey.local
  • Install the cluster

./kk create cluster -f config-sample.yaml

Configuring a Local Private Image Registry

Using a Self-Signed Certificate

# Generate a self-signed certificate
mkdir -p certs

# When generating your own certificate, make sure the Common Name field is set to a domain name; in this example it is dockerhub.kubekey.local
openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 36500 -out certs/domain.crt

# Generate the htpasswd auth file (htpasswd is provided by httpd-tools)
yum install -y httpd-tools
mkdir -p auth
htpasswd -Bbn admin pwd123456 > auth/htpasswd
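The openssl req invocation above prompts interactively for the certificate fields. A non-interactive variant that sets the Common Name directly, and also adds a subjectAltName (which recent Docker/TLS stacks require in addition to the CN), might look like this; it assumes OpenSSL 1.1.1+ for the -addext flag, and the key size is reduced here for speed:

```shell
# Non-interactive self-signed cert with CN and SAN set to the registry domain
# (assumes OpenSSL 1.1.1+ for -addext; use rsa:4096 for production keys)
mkdir -p certs
openssl req \
  -newkey rsa:2048 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 36500 -out certs/domain.crt \
  -subj "/CN=dockerhub.kubekey.local" \
  -addext "subjectAltName=DNS:dockerhub.kubekey.local"
# Inspect the resulting subject to confirm the CN
openssl x509 -noout -subject -in certs/domain.crt
```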

Deploy the Docker Registry

docker run -d \
--restart=always \
--name registry \
-v "$(pwd)"/certs:/certs \
-v "$(pwd)"/auth:/auth \
-v /mnt/registry:/var/lib/registry \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 443:443 \
registry:2

Configure and Use the Registry on Other Nodes

# Add the registry host to /etc/hosts (use the registry node's real IP)
192.168.0.2 dockerhub.kubekey.local

# Copy the certificate into the Docker certs directory so Docker trusts it
# (if you changed the domain, change the path accordingly)
mkdir -p /etc/docker/certs.d/dockerhub.kubekey.local
cp certs/domain.crt /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt

# Test the registry with docker login, push, and pull, e.g.:
docker login dockerhub.kubekey.local -u admin -p pwd123456
docker tag busybox:latest dockerhub.kubekey.local/busybox:latest
docker push dockerhub.kubekey.local/busybox:latest
docker pull dockerhub.kubekey.local/busybox:latest

Adding a Node to the Cluster

Environment Configuration

Prepare the new node following the environment configuration steps above.

Add the Node

  • Modify the config-sample.yaml configuration file
    Add k8s-node2

    ...
    hosts:
    - {name: k8s-master, address: 30.0.1.248, internalAddress: 30.0.1.248, user: root, password: root@openlab}
    - {name: k8s-node1, address: 30.0.1.185, internalAddress: 30.0.1.185, user: root, password: root@openlab}
    - {name: k8s-node2, address: 30.0.1.200, internalAddress: 30.0.1.200, user: root, password: root@openlab}
    roleGroups:
      etcd:
      - k8s-master
      master:
      - k8s-master
      worker:
      - k8s-master
      - k8s-node1
      - k8s-node2
    ...
  • Run the command to add the node

./kk add nodes -f config-sample.yaml

  • Verify the result
$ kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    master,worker   2d    v1.17.9
k8s-node1    Ready    worker          2d    v1.17.9
k8s-node2    Ready    worker          31h   v1.17.9
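The Ready check above can also be scripted, e.g. for a post-join sanity script. A sketch that parses the STATUS column, fed here with the sample output shown above (in a real cluster, replace the here-doc with the output of `kubectl get node`):

```shell
# List any node whose STATUS column is not "Ready"; the here-doc stands in
# for live `kubectl get node` output so the logic can be shown end to end
not_ready=$(awk 'NR > 1 && $2 != "Ready" {print $1}' <<'EOF'
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    master,worker   2d    v1.17.9
k8s-node1    Ready    worker          2d    v1.17.9
k8s-node2    Ready    worker          31h   v1.17.9
EOF
)
if [ -z "$not_ready" ]; then
  echo "all nodes Ready"          # prints this for the sample above
else
  echo "NotReady: $not_ready"
fi
```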

References

https://kubesphere.com.cn/forum/d/4019-51openlab-kubesphere-k8s
https://kubesphere.com.cn/forum/d/2034-kubekey-kubesphere-v300/23
https://kubesphere.com.cn/docs/installing-on-linux/cluster-operation/add-new-nodes/