# KubeSphere

# 1. Installing KubeSphere on Kubernetes

| Host IP | Hostname | Cluster role |
| --- | --- | --- |
| 192.168.50.75 | k8s-master | master, etcd |
| 192.168.50.211 | k8s-node1 | node |
| 192.168.50.171 | k8s-node2 | node |

# 1.1 Prerequisites (all machines)

See the linked article.

# 1.2 Install Kubernetes

# 1.2.1 Install Docker (all machines)

See the linked article.

# 1.2.2 Network configuration (all machines)

# Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
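
To confirm the settings took effect, the module and sysctl values can be checked (a quick sanity check, not strictly required):

# Verify br_netfilter is loaded and both sysctls report 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables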

# 1.2.3 Install kubelet, kubeadm, and kubectl (all machines)

# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
   http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


# Install kubelet, kubeadm, and kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9

# Enable and start kubelet
sudo systemctl enable --now kubelet
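
A quick check that the expected versions landed (kubelet will keep restarting until kubeadm init hands it a configuration, which is normal at this point):

# Confirm the installed versions are 1.20.9
kubeadm version -o short
kubelet --version
kubectl version --client --short
systemctl status kubelet --no-pager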

# 1.2.4 Initialize the master node

  1. Initialize the control plane
# All of the network ranges below must not overlap
# --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images  can be swapped for another registry
# --control-plane-endpoint: set to the master node's hostname
# --apiserver-advertise-address: set to the master node's IP
# --pod-network-cidr: the IP range pods will be assigned addresses from
kubeadm init \
--apiserver-advertise-address=192.168.50.75 \
--control-plane-endpoint=k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
  2. Record the key output

Save the log printed on the master after initialization completes:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-master:6443 --token uatl0w.7aff19p03rjwqeln \
    --discovery-token-ca-cert-hash sha256:77d36430542ca11d6204548a492c798e847177e1ef631429e660a3227789e0ae \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token uatl0w.7aff19p03rjwqeln \
    --discovery-token-ca-cert-hash sha256:77d36430542ca11d6204548a492c798e847177e1ef631429e660a3227789e0ae

Run the commands from the output above.

  3. Copy the kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The commands above let you use kubectl to inspect the cluster, for example:
k8s-master-➜  ~ kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   37h   v1.20.9
  4. Install the Calico network plugin
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pod -A

# The following is optional
# If the --pod-network-cidr used during init differs from this manifest, change it here (a sed sketch follows this block)
#   value: "192.168.0.0/16"
# change to
#   value: "10.244.0.0/16"
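
For example, the optional change above could be scripted before applying the manifest. This is only a sketch and assumes the commented-out block in calico.yaml matches the default shown above:

# Uncomment CALICO_IPV4POOL_CIDR and point it at the same range as --pod-network-cidr
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
kubectl apply -f calico.yaml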
  5. Join the worker nodes by running the following on every node
kubeadm join k8s-master:6443 --token uatl0w.7aff19p03rjwqeln \
    --discovery-token-ca-cert-hash sha256:77d36430542ca11d6204548a492c798e847177e1ef631429e660a3227789e0ae
    
# After the nodes have joined, check the cluster state
kubectl get nodes

# To add new machines later, generate a fresh join command
kubeadm token create --print-join-command

Once all of the steps above are done, check the cluster state; if everything looks good, Kubernetes is installed.

k8s-master-➜  ~ kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   37h    v1.20.9
k8s-node1    Ready    <none>                 107s   v1.20.9
k8s-node2    Ready    <none>                 101s   v1.20.9
k8s-master-➜  ~ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-558995777d-shxjj   1/1     Running   0          12m
kube-system   calico-node-5cmxd                          1/1     Running   0          89s
kube-system   calico-node-7tqth                          1/1     Running   0          95s
kube-system   calico-node-qrpmt                          1/1     Running   0          12m
kube-system   coredns-5897cd56c4-nslf6                   1/1     Running   0          37h
kube-system   coredns-5897cd56c4-p2s2r                   1/1     Running   0          37h
kube-system   etcd-k8s-master                            1/1     Running   0          37h
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          37h
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          37h
kube-system   kube-proxy-2ff5l                           1/1     Running   0          37h
kube-system   kube-proxy-4z9zh                           1/1     Running   0          95s
kube-system   kube-proxy-dmd66                           1/1     Running   0          89s
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          37h

# 1.3 Install the KubeSphere prerequisites

# 1.3.1 NFS file system

  1. NFS server

See the linked article.

2. Configure the default storage in Kubernetes

Configure a default StorageClass with dynamic provisioning:

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.50.3 ## your NFS server address
            - name: NFS_PATH
              value: /mnt/Apps/k8s-data  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.50.3
            path: /mnt/Apps/k8s-data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. Confirm the configuration took effect

kubectl apply -f sc.yaml
kubectl get sc
kubectl get pod -A 
# There should be an nfs-client-provisioner pod
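
Two extra checks: the app label comes from the Deployment above, and the default-class annotation should read "true":

kubectl get pods -l app=nfs-client-provisioner
kubectl get sc nfs-storage -o yaml | grep is-default-class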

Create a claim (PVC) to test it:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
#  storageClassName: xxx  ## if omitted, the default StorageClass is used
[root@master01 ~]# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc created
[root@master01 ~]# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-fe394f05-d696-4aa3-b847-b8176f424a22   200Mi      RWX            nfs-storage    16s
[root@master01 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-fe394f05-d696-4aa3-b847-b8176f424a22   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             35s
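
Optionally, a throwaway pod can mount the claim to prove data actually lands on the NFS export. This is a sketch; the pod name and file below are arbitrary:

# Mount nginx-pvc into a short-lived busybox pod and write a file
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nginx-pvc
EOF
# The file should show up under the NFS export, e.g. /mnt/Apps/k8s-data/<pv-directory>/hello.txt
kubectl delete pod pvc-test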

# 1.3.2 metrics-server (optional)

The cluster metrics monitoring component:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --kubelet-insecure-tls
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

After applying the manifest, make sure the metrics-server pod is Running, then run the following commands to see the result:

[root@master01 ~]# kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   300m         3%     2270Mi          29%
slave01    172m         2%     1232Mi          15%
slave02    127m         1%     1070Mi          13%
[root@master01 ~]# kubectl top pods -A
NAMESPACE              NAME                                         CPU(cores)   MEMORY(bytes)
default                hello-server-6cbb679d85-cgcbq                1m           6Mi
default                hello-server-6cbb679d85-xrwnc                1m           6Mi
default                nfs-client-provisioner-78f6d59786-qv6pp      3m           10Mi
default                nginx-demo-7d56b74b84-f52dx                  0m           7Mi
default                nginx-demo-7d56b74b84-mh969                  0m           5Mi
ingress-nginx          ingress-nginx-controller-54676dffd6-7pg7r    3m           159Mi
kube-system            calico-kube-controllers-659bd7879c-d7b68     3m           23Mi
kube-system            calico-node-b8bdz                            47m          148Mi
kube-system            calico-node-dt4sh                            47m          153Mi
kube-system            calico-node-gl78r                            45m          132Mi
kube-system            coredns-5897cd56c4-5fz58                     4m           16Mi
kube-system            coredns-5897cd56c4-tbbqh                     3m           15Mi
kube-system            etcd-master01                                17m          89Mi
kube-system            kube-apiserver-master01                      67m          412Mi
kube-system            kube-controller-manager-master01             21m          58Mi
kube-system            kube-proxy-5m7zk                             1m           20Mi
kube-system            kube-proxy-bgwl8                             1m           22Mi
kube-system            kube-proxy-j6nrr                             1m           20Mi
kube-system            kube-scheduler-master01                      3m           25Mi
kube-system            metrics-server-6497cc6c5f-bsfpz              4m           21Mi
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc-49xfj   1m           14Mi
kubernetes-dashboard   kubernetes-dashboard-658485d5c7-hltgp        1m           14Mi

# 1.4 Install KubeSphere

Official documentation

# 1.4.1 Download the core files

wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/cluster-configuration.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/kubesphere-installer.yaml

Pre-pull the required images:

# The list of required images; pull all of them or filter based on your needs
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/images-list.txt

# Image pull script
sudo tee ./images.sh <<-'EOF'
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
)
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
EOF


# Run it
chmod +x ./images.sh && ./images.sh

# 1.4.2 Edit cluster-configuration

Specify the features you want to enable in cluster-configuration.yaml, following the official guide on enabling pluggable components. The settings I changed:

    storageClass: ""  # must be filled in if there is no default StorageClass; we created the NFS one earlier, so it can stay empty
    etcd:
      monitoring: true
      endpointIps: 192.168.50.75
    redis:
      enabled: true
      volumeSize: 2Gi
    openldap:
      enabled: true
      volumeSize: 2Gi
    alerting:  
      enabled: true  
    auditing: 
      enabled: true
    devops:
      enabled: true 
    events: 
      enabled: true
    logging:
      enabled: true
    metrics_server:
      enabled: true        # enable this if metrics-server was not installed earlier
    network:
      networkpolicy:
        enabled: true
    ippool: 
      type: calico
    openpitrix: 
      store:
        enabled: true
    servicemesh:
      enabled: true  
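
Before applying, a quick grep helps confirm which switches ended up enabled (this assumes cluster-configuration.yaml is in the current directory):

grep -nE 'enabled: (true|false)' cluster-configuration.yaml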

# 1.4.3 Run the installation

kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml

# 1.4.4 Check the installation progress

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

# Before accessing the console, make sure all pods are running; describe any pod that has not started for a long time
kubectl describe pod -n kubesphere-system  ks-installer-54c6bcf76b-2lwlc

# Fix the missing etcd monitoring certificate
# You may see the error: MountVolume.SetUp failed for volume "secret-kube-etcd-client-certs" : secret "kube-etcd-client-certs" not found
k8s-master-➜  ~ kubectl describe pod -n kubesphere-monitoring-system   prometheus-k8s-0


kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

# 1.4.5 Access the console

Access port 30880 on any machine in the cluster, e.g. http://master01:30880/ . Account: admin, password: P@88w0rd.
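
If the console does not answer, confirm that the ks-console Service exposes NodePort 30880 (the KubeSphere 3.x default):

kubectl get svc -n kubesphere-system ks-console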

# 2. Installing KubeSphere on a single Linux machine

Official documentation

# 2.1 Download KubeKey

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -
chmod +x kk
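
Before creating the cluster it can help to confirm the binary runs and that the host has the dependencies KubeKey-driven installs commonly need on CentOS (treat the package list as an assumption and adjust for your distribution):

# conntrack and socat are commonly required by kubeadm-based installs
yum install -y conntrack socat
./kk version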

# 2.2 Start the installation

./kk create cluster --with-kubernetes v1.21.5 --with-kubesphere v3.2.0

# 2.3 Verify the installation

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.235.130:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-11-14 23:07:42
#####################################################

# 3. Multi-tenancy practice

TODO. Official documentation: https://kubesphere.com.cn/docs/quick-start/create-workspace-and-project/

# 4. Deploying applications on Kubernetes with KubeSphere

# 4.1 Deploying MySQL

See the linked article.

# 4.2 Deploying Redis

  1. Create the configuration (ConfigMap) with the values below (a kubectl equivalent is sketched after the block)
# name
redis6-conf

# key
redis.conf

# value
appendonly yes
port 6379
bind 0.0.0.0
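
The same ConfigMap can also be created from the command line instead of the console; a sketch using the values above:

# Write the config locally, then create the ConfigMap with redis.conf as the key
cat > redis.conf <<EOF
appendonly yes
port 6379
bind 0.0.0.0
EOF
kubectl create configmap redis6-conf --from-file=redis.conf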
  2. Create the StatefulSet
# Name
redis6

#Image
redis:6

# start command
## Command
redis-server
## Parameters
/etc/redis/redis.conf

# Check "Synchronize Host Timezone"

# Volumes
## Volume Name
redis6-pvc
## access mode
Read and write
## Mount path
/data

# Configmap
## access mode
Read-only
## Mount path
/etc/redis
  3. Configure networking (a quick verification follows the settings below)
# Name
redis6-node

# Internal Access Mode
Virtual IP Address

# Ports
6379

# External Access
NodePort
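
Once the Service exists, a quick check that Redis answers (this assumes the StatefulSet is named redis6, so its first pod is redis6-0):

kubectl get svc redis6-node
kubectl exec -it redis6-0 -- redis-cli ping   # expect PONG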

# 4.3 Deploying Elasticsearch

  1. Start the Elasticsearch container
# Create the data directory
mkdir -p /mydata/es-01 && chmod 777 -R /mydata/es-01

# Start the container
docker run --restart=always -d -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-v es-config:/usr/share/elasticsearch/config \
-v /mydata/es-01/data:/usr/share/elasticsearch/data \
--name es-01 \
elasticsearch:7.13.4

docker ps |grep es-01
docker exec -it es-01 /bin/bash
docker rm -f es-01
  2. Elasticsearch configuration

Two files: elasticsearch.yml and jvm.options. (A sketch for pulling the defaults out of the image follows.)
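
One way to get starting copies of those two files is to copy the defaults out of the image; a sketch (the temporary container name es-tmp is arbitrary):

# Copy the default config files from the image without starting Elasticsearch
docker create --name es-tmp elasticsearch:7.13.4
docker cp es-tmp:/usr/share/elasticsearch/config/elasticsearch.yml .
docker cp es-tmp:/usr/share/elasticsearch/config/jvm.options .
docker rm es-tmp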

# 4.4 Deploying Nacos

  1. Configuration

Two files: application.properties and cluster.conf.

The configuration file path is /home/nacos/conf.

  2. External network access
kind: Service
apiVersion: v1
metadata:
  name: nacos-node
  namespace: default
  labels:
    app: nacos-node
  annotations:
    kubesphere.io/creator: admin
spec:
  ports:
    - name: http-8848
      protocol: TCP
      port: 8848
      targetPort: 8848
      nodePort: 31307
  selector:
    app: nacos
  clusterIP: 10.233.2.176
  clusterIPs:
    - 10.233.2.176
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
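
After applying the Service, the Nacos console should answer on any node IP at the NodePort; a quick check (the manifest filename below is hypothetical, and the IP is this guide's master node):

kubectl apply -f nacos-node-svc.yaml   # hypothetical filename for the Service above
curl -I http://192.168.50.75:31307/nacos/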

# 4.5 RuoYi-Cloud

Dockerfile

FROM openjdk:8-jdk
LABEL maintainer=leifengyang


#docker run -e PARAMS="--server.port 9090"
ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.discovery.server-addr=nacos-lth4.default:8848 --spring.cloud.nacos.config.server-addr=nacos-lth4.default:8848 --spring.cloud.nacos.config.namespace=prod --spring.cloud.nacos.config.file-extension=yml"
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

COPY target/*.jar /app.jar
EXPOSE 8080

#
ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar app.jar ${PARAMS}"]

Build the image:

docker build -t ruoyi-auth:v1.0 -f Dockerfile .

Push the image:

  • Enable Alibaba Cloud "Container Registry (personal edition)"
    • Create a namespace (lfy_ruoyi) to store the images
    • Push the images to the Alibaba Cloud registry
$ docker login --username=forsum**** registry.cn-hangzhou.aliyuncs.com

# Retag the local image to match the Alibaba Cloud naming convention.
$ docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/[image-name]:[image-tag]
## docker tag 461955fe1e57 registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v1

$ docker push registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/[image-name]:[image-tag]
## docker push registry.cn-hangzhou.aliyuncs.com/lfy_ruoyi/ruoyi-visual-monitor:v1

# 5. DevOps

Last updated: 12/7/2022, 1:43:20 PM