KubeSphere 3.0 Deployment

Cluster machine configuration

| IP | hostname | OS | Spec |
| --- | --- | --- | --- |
| 30.0.1.248 | k8s-master | CentOS 7.6 | 8C / 8G / 100G |
| 30.0.1.185 | k8s-node1 | CentOS 7.6 | 8C / 8G / 100G |
| 30.0.1.200 | k8s-node2 | CentOS 7.6 | 8C / 8G / 100G |

k8s-node2 is held back at first and joined to the cluster later via `add node`.
Environment setup

Environment configuration

Append the cluster hosts to /etc/hosts on every node:

```shell
...
30.0.1.248 k8s-master
30.0.1.185 k8s-node1
```

Disable the firewall:

```shell
systemctl stop firewalld
systemctl disable firewalld
```

Switch to the Aliyun yum mirror:

```shell
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
```

Install the dependencies:

```shell
yum -y install epel-release.noarch conntrack ipvsadm ipset jq sysstat curl iptables libseccomp vim lrzsz bash-completion
```

Disable swap:

```shell
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
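The `sed` line above comments out the swap entries in /etc/fstab. A quick sanity check that no active swap line remains — a sketch using the same `" swap "` pattern the `sed` command matches:

```shell
# Count fstab lines that still mount swap (i.e. are not commented out).
# Expect 0 after the sed edit above.
active_swap_lines() {
  grep -v '^[[:space:]]*#' | grep -c ' swap ' || true
}

# Usage: active_swap_lines < /etc/fstab
```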
Online installation of KubeSphere

Download KubeKey and generate a cluster configuration:

```shell
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
chmod +x kk
./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9
```

This creates the default file config-sample.yaml; edit it to match your environment.
```yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master, address: 30.0.1.248, internalAddress: 30.0.1.248, user: root, password: root@openlab}
  - {name: k8s-node1, address: 30.0.1.185, internalAddress: 30.0.1.185, user: root, password: root@openlab}
  roleGroups:
    etcd:
    - k8s-master
    master:
    - k8s-master
    worker:
    - k8s-master
    - k8s-node1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.9
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  notification:
    enabled: false
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
```
Create the cluster:

```shell
./kk create cluster -f config-sample.yaml
```

Watch the installer logs:

```shell
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

If the log ends with "Welcome to KubeSphere", the installation succeeded.

Verify the installation

Open a browser at http://<node-IP>:30880 and log in with account admin and the default password P@88w0rd. Check that the cluster is running normally and that all component statuses are healthy.
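Component health can also be checked from the CLI. A small helper sketch — the awk column positions assume the default `kubectl get pods --all-namespaces` output (NAMESPACE, NAME, READY, STATUS, ...):

```shell
# List pods whose STATUS column is neither Running nor Completed,
# to spot components stuck after the install.
pending_pods() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { print $1 "/" $2 }'
}

# Usage: kubectl get pods --all-namespaces | pending_pods
```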
Offline installation of KubeSphere

Download the offline package:

```shell
curl -Ok https://kubesphere-installer.pek3b.qingstor.com/offline/v3.0.0/kubesphere-all-v3.0.0-offline-linux-amd64.tar.gz
```

Initialize the OS dependencies on all nodes:

```shell
./kk init os -f config-sample.yaml -s ./dependencies/
# or, to also set up a self-hosted image registry:
./kk init os -f config-sample.yaml -s ./dependencies/ --add-images-repo
```

Push the bundled images to the private registry:

```shell
./push-images.sh dockerhub.kubekey.local
```
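The push script's job is essentially to re-tag every bundled image under the private registry host and push it. The naming rule can be sketched as follows — `retarget` is a hypothetical helper, not the script's actual code:

```shell
# Prefix an image reference with the private registry host, e.g.
# kubesphere/ks-installer:v3.0.0 -> dockerhub.kubekey.local/kubesphere/ks-installer:v3.0.0
retarget() {
  printf '%s/%s\n' "$1" "$2"
}

# Per image, a push script would then run something like:
#   docker tag  "$img" "$(retarget dockerhub.kubekey.local "$img")"
#   docker push "$(retarget dockerhub.kubekey.local "$img")"
```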
Then create the cluster:

```shell
./kk create cluster -f config-sample.yaml
```
Configure a local private image registry

Use a self-signed certificate

Generate the certificate and the htpasswd auth file:

```shell
mkdir -p certs
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 36500 -out certs/domain.crt

yum install -y httpd
mkdir -p auth
htpasswd -Bbn admin pwd123456 > auth/htpasswd
```
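A quick sanity check on the generated auth file — a sketch, relying on the fact that `htpasswd -B` produces bcrypt hashes, which start with `$2y$`:

```shell
# Hypothetical check: every htpasswd line should look like "user:$2y$...".
# Exits 0 if all lines are well-formed, 1 otherwise.
check_htpasswd() {
  awk -F: 'NF != 2 || $2 !~ /^\$2y\$/ { bad = 1 } END { exit bad }'
}

# Usage: check_htpasswd < auth/htpasswd && echo "htpasswd looks valid"
```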
Deploy the docker registry

```shell
docker run -d \
  --restart=always \
  --name registry \
  -v "$(pwd)"/certs:/certs \
  -v "$(pwd)"/auth:/auth \
  -v /mnt/registry:/var/lib/registry \
  -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -p 443:443 \
  registry:2
```
Configure and use the registry on the other nodes

Map the registry domain in /etc/hosts (192.168.0.2 stands for the registry host's actual IP):

```shell
192.168.0.2 dockerhub.kubekey.local
```

Make docker trust the self-signed CA:

```shell
mkdir -p /etc/docker/certs.d/dockerhub.kubekey.local
cp certs/domain.crt /etc/docker/certs.d/dockerhub.kubekey.local/ca.crt
```
Add a new node to the cluster

Environment configuration

Prepare k8s-node2 with the same environment configuration described above.

Add the node

Add k8s-node2 to the hosts list and the worker role group in config-sample.yaml, then run:

```shell
./kk add nodes -f config-sample.yaml
```
```shell
$ kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    master,worker   2d    v1.17.9
k8s-node1    Ready    worker          2d    v1.17.9
k8s-node2    Ready    worker          31h   v1.17.9
```
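For scripting the same check (e.g. after adding a node), the Ready count can be extracted from that output — a sketch, assuming the default `kubectl get node` column layout:

```shell
# Count nodes whose STATUS column is exactly "Ready".
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Usage: kubectl get node | count_ready   # expect 3 for this cluster
```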
References

- https://kubesphere.com.cn/forum/d/4019-51openlab-kubesphere-k8s
- https://kubesphere.com.cn/forum/d/2034-kubekey-kubesphere-v300/23
- https://kubesphere.com.cn/docs/installing-on-linux/cluster-operation/add-new-nodes/