vagrant-kubernetes-cluster

One-command installation of a Kubernetes cluster with Vagrant. Installs Metrics Server, Kuboard, Kubernetes Dashboard, KubePi, and the prometheus-operator Kubernetes cluster monitoring stack.

Installation Environment

  • Vagrant version: 2.2.18
  • VirtualBox version: 6.1.26
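
To confirm the local toolchain before bringing the cluster up, the installed versions can be checked from a terminal (a minimal sketch; both tools print their version with a single flag):

vagrant --version
VBoxManage --version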

The VM network adapter configuration is shown in the figure below:

(screenshot: VM network adapter settings)

CentOS 7 Environment Versions

  • CentOS version: CentOS 7
  • Containerd version: 1.4.11
  • Kubernetes version: v1.22.2

Ubuntu Environment Versions

  • Ubuntu version: 20.04.2 LTS
  • Containerd version: 1.5.5
  • Kubernetes version: v1.22.0

One-Command Installation

vagrant up

Bringing machine 'kmaster' up with 'virtualbox' provider...
Bringing machine 'kworker1' up with 'virtualbox' provider...
Bringing machine 'kworker2' up with 'virtualbox' provider...
==> kmaster: Importing base box 'generic/ubuntu2004'...
==> kmaster: Matching MAC address for NAT networking...
==> kmaster: Setting the name of the VM: kmaster
==> kmaster: Clearing any previously set network interfaces...
==> kmaster: Preparing network interfaces based on configuration...
    kmaster: Adapter 1: nat
    kmaster: Adapter 2: hostonly
==> kmaster: Forwarding ports...
    kmaster: 22 (guest) => 2222 (host) (adapter 1)
==> kmaster: Running 'pre-boot' VM customizations...
==> kmaster: Booting VM...
==> kmaster: Waiting for machine to boot. This may take a few minutes...
    kmaster: SSH address: 127.0.0.1:2222
    kmaster: SSH username: vagrant
    kmaster: SSH auth method: private key
    kmaster:
    kmaster: Vagrant insecure key detected. Vagrant will automatically replace
    kmaster: this with a newly generated keypair for better security.
    kmaster:
    kmaster: Inserting generated public key within guest...
    kmaster: Removing insecure key from the guest if it's present...
    kmaster: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kmaster: Machine booted and ready!
==> kmaster: Checking for guest additions in VM...
==> kmaster: Setting hostname...
==> kmaster: Configuring and enabling network interfaces...
==> kmaster: Mounting shared folders...
    kmaster: /vagrant => D:/Vagrant/kubernetes-cluster
==> kmaster: Running provisioner: shell...
    kmaster: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-1qfj4jz.sh
    kmaster: [TASK 0] Setting TimeZone
    kmaster: [TASK 1] Setting DNS
    kmaster: [TASK 2] Setting Ubuntu System Mirrors
    kmaster: [TASK 3] Disable and turn off SWAP
    kmaster: [TASK 4] Stop and Disable firewall
    kmaster: [TASK 5] Enable and Load Kernel modules
    kmaster: [TASK 6] Add Kernel settings
    kmaster: [TASK 7] Install containerd runtime
    kmaster: [TASK 8] Add apt repo for kubernetes
    kmaster: Warning: apt-key output should not be parsed (stdout is not a terminal)
    kmaster: OK
    kmaster: [TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)
    kmaster: [TASK 10] Enable ssh password authentication
    kmaster: [TASK 11] Set root password
    kmaster: [TASK 12] Update /etc/hosts file
==> kmaster: Running provisioner: shell...
    kmaster: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-11nj6h4.sh
    kmaster: [TASK 1] Pull required containers
    kmaster: [TASK 2] Initialize Kubernetes Cluster
    kmaster: [TASK 3] Deploy Calico network
    kmaster: [TASK 4] Generate and save cluster join command to /joincluster.sh
==> kworker1: Importing base box 'generic/ubuntu2004'...
==> kworker1: Matching MAC address for NAT networking...
==> kworker1: Setting the name of the VM: kworker1
==> kworker1: Fixed port collision for 22 => 2222. Now on port 2200.
==> kworker1: Clearing any previously set network interfaces...
==> kworker1: Preparing network interfaces based on configuration...
    kworker1: Adapter 1: nat
    kworker1: Adapter 2: hostonly
==> kworker1: Forwarding ports...
    kworker1: 22 (guest) => 2200 (host) (adapter 1)
==> kworker1: Running 'pre-boot' VM customizations...
==> kworker1: Booting VM...
==> kworker1: Waiting for machine to boot. This may take a few minutes...
    kworker1: SSH address: 127.0.0.1:2200
    kworker1: SSH username: vagrant
    kworker1: SSH auth method: private key
    kworker1:
    kworker1: Vagrant insecure key detected. Vagrant will automatically replace
    kworker1: this with a newly generated keypair for better security.
    kworker1:
    kworker1: Inserting generated public key within guest...
    kworker1: Removing insecure key from the guest if it's present...
    kworker1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kworker1: Machine booted and ready!
==> kworker1: Checking for guest additions in VM...
==> kworker1: Setting hostname...
==> kworker1: Configuring and enabling network interfaces...
==> kworker1: Mounting shared folders...
    kworker1: /vagrant => D:/Vagrant/kubernetes-cluster
==> kworker1: Running provisioner: shell...
    kworker1: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-6qmkd4.sh
    kworker1: [TASK 0] Setting TimeZone
    kworker1: [TASK 1] Setting DNS
    kworker1: [TASK 2] Setting Ubuntu System Mirrors
    kworker1: [TASK 3] Disable and turn off SWAP
    kworker1: [TASK 4] Stop and Disable firewall
    kworker1: [TASK 5] Enable and Load Kernel modules
    kworker1: [TASK 6] Add Kernel settings
    kworker1: [TASK 7] Install containerd runtime
    kworker1: [TASK 8] Add apt repo for kubernetes
    kworker1: Warning: apt-key output should not be parsed (stdout is not a terminal)
    kworker1: OK
    kworker1: [TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)
    kworker1: [TASK 10] Enable ssh password authentication
    kworker1: [TASK 11] Set root password
    kworker1: [TASK 12] Update /etc/hosts file
==> kworker1: Running provisioner: shell...
    kworker1: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-vmdbxa.sh
    kworker1: [TASK 1] Join node to Kubernetes Cluster
==> kworker2: Importing base box 'generic/ubuntu2004'...
==> kworker2: Matching MAC address for NAT networking...
==> kworker2: Setting the name of the VM: kworker2
==> kworker2: Fixed port collision for 22 => 2222. Now on port 2201.
==> kworker2: Clearing any previously set network interfaces...
==> kworker2: Preparing network interfaces based on configuration...
    kworker2: Adapter 1: nat
    kworker2: Adapter 2: hostonly
==> kworker2: Forwarding ports...
    kworker2: 22 (guest) => 2201 (host) (adapter 1)
==> kworker2: Running 'pre-boot' VM customizations...
==> kworker2: Booting VM...
==> kworker2: Waiting for machine to boot. This may take a few minutes...
    kworker2: SSH address: 127.0.0.1:2201
    kworker2: SSH username: vagrant
    kworker2: SSH auth method: private key
    kworker2:
    kworker2: Vagrant insecure key detected. Vagrant will automatically replace
    kworker2: this with a newly generated keypair for better security.
    kworker2:
    kworker2: Inserting generated public key within guest...
    kworker2: Removing insecure key from the guest if it's present...
    kworker2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> kworker2: Machine booted and ready!
==> kworker2: Checking for guest additions in VM...
==> kworker2: Setting hostname...
==> kworker2: Configuring and enabling network interfaces...
==> kworker2: Mounting shared folders...
    kworker2: /vagrant => D:/Vagrant/kubernetes-cluster
==> kworker2: Running provisioner: shell...
    kworker2: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-1s6ys4c.sh
    kworker2: [TASK 0] Setting TimeZone
    kworker2: [TASK 1] Setting DNS
    kworker2: [TASK 2] Setting Ubuntu System Mirrors
    kworker2: [TASK 3] Disable and turn off SWAP
    kworker2: [TASK 4] Stop and Disable firewall
    kworker2: [TASK 5] Enable and Load Kernel modules
    kworker2: [TASK 6] Add Kernel settings
    kworker2: [TASK 7] Install containerd runtime
    kworker2: [TASK 8] Add apt repo for kubernetes
    kworker2: Warning: apt-key output should not be parsed (stdout is not a terminal)
    kworker2: OK
    kworker2: [TASK 9] Install Kubernetes components (kubeadm, kubelet and kubectl)
    kworker2: [TASK 10] Enable ssh password authentication
    kworker2: [TASK 11] Set root password
    kworker2: [TASK 12] Update /etc/hosts file
==> kworker2: Running provisioner: shell...
    kworker2: Running: C:/Users/swfeng/AppData/Local/Temp/vagrant-shell20211012-49908-1qxwo1n.sh
    kworker2: [TASK 1] Join node to Kubernetes Cluster

After installation, the three machines have the following IP addresses:

Hostname    IP
kmaster     192.168.56.100
kworker1    192.168.56.101
kworker2    192.168.56.102

The root password is kubeadmin.
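
Since the provisioning scripts enable SSH password authentication and set the root password (TASK 10 and TASK 11 above), the nodes can be reached either through Vagrant or directly over SSH, for example:

# from the host, in the project directory
vagrant ssh kmaster

# or directly, using the root password set above
ssh root@192.168.56.100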

Configure .kube/config

root@kmaster:~# mkdir -p $HOME/.kube
root@kmaster:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kmaster:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Cluster status:

root@kmaster:~# kubectl cluster-info
Kubernetes control plane is running at https://kmaster.k8s.com:6443
CoreDNS is running at https://kmaster.k8s.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
root@kmaster:~# kubectl get node,po,svc -A -owide

Every 2.0s: kubectl get node,po,svc -A -owide                                                                                                             kmaster: Tue Oct 12 13:53:57 2021

NAME            STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node/kmaster    Ready    control-plane,master   20m     v1.22.0   192.168.56.100   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker1   Ready    <none>                 9m40s   v1.22.0   192.168.56.101   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker2   Ready    <none>                 7m35s   v1.22.0   192.168.56.102   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5

NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE       NOMINATED NODE   READINESS GATES
kube-system   pod/calico-kube-controllers-7659fb8886-dwvc4   1/1     Running   0          20m     192.168.189.2    kmaster    <none>           <none>
kube-system   pod/calico-node-2w8x5                          1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/calico-node-vqjsc                          1/1     Running   0          7m35s   192.168.56.102   kworker2   <none>           <none>
kube-system   pod/calico-node-zj98h                          1/1     Running   0          9m40s   192.168.56.101   kworker1   <none>           <none>
kube-system   pod/coredns-7568f67dbd-4jssz                   1/1     Running   0          20m     192.168.189.3    kmaster    <none>           <none>
kube-system   pod/coredns-7568f67dbd-vn8ph                   1/1     Running   0          20m     192.168.189.1    kmaster    <none>           <none>
kube-system   pod/etcd-kmaster                               1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-apiserver-kmaster                     1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-controller-manager-kmaster            1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-proxy-2sqmm                           1/1     Running   0          7m35s   192.168.56.102   kworker2   <none>           <none>
kube-system   pod/kube-proxy-8z758                           1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>
kube-system   pod/kube-proxy-brgl8                           1/1     Running   0          9m40s   192.168.56.101   kworker1   <none>           <none>
kube-system   pod/kube-scheduler-kmaster                     1/1     Running   0          20m     192.168.56.100   kmaster    <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  20m   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   20m   k8s-app=kube-dns

Install metrics-server

root@kmaster:/vagrant/metrics# kubectl apply -f metrics.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
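
Once the metrics-server pod is running, resource metrics become available after a minute or two; a quick sanity check:

kubectl top nodes
kubectl top pods -A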

Install Kuboard

root@kmaster:~# kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
namespace/kuboard created
configmap/kuboard-v3-config created
serviceaccount/kuboard-boostrap created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-boostrap-crb created
daemonset.apps/kuboard-etcd created
deployment.apps/kuboard-v3 created
service/kuboard-v3 created

Access Kuboard at http://192.168.56.100:30080

Username: admin  Password: Kuboard123
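
If the UI does not respond on port 30080, confirm the NodePort actually assigned to the kuboard-v3 Service created above:

kubectl get svc kuboard-v3 -n kuboard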

(screenshot: Kuboard)

Install kubernetes-dashboard

root@kmaster:/vagrant/kubernetes-dashboard# kubectl apply -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

# After running the command below, manually change type: ClusterIP to type: NodePort
root@kmaster:~# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
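
# Alternatively, the Service type can be switched non-interactively with kubectl patch
# (the same approach used later for the monitoring Services); a sketch, not part of the
# original provisioning steps:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'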

# Check the Service and note the exposed NodePort
root@kmaster:~# kubectl get svc -A |grep kubernetes-dashboard

kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.111.109.182   <none>        8000/TCP                                       2m53s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.97.250.165    <none>        443:31825/TCP                                  2m53s


# Get the access token
root@kmaster:~# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9BODl1TGtTRjUzWUl4dnJKUHdpYnB1V0RIZGpxNkxoT2VMWEEzNW1yVk0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXdtN3hqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzAzOGNhZC1jYjE2LTQ3ZjAtYTIxZS1hODNlNjhjYjA4ZGMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.iPxLZnueJz9y2ngFTtgEuZ36Ae0QLK2oFXEBXinYcsM5712_sw3iyYODB9Eyu9AzscMDin-jL4ssctl6dQt-3PD6vdrLjSWAlDNK_PXXYlnFCTehrcFjZNGWv3yM7e5dfUOqmrl0ROwYEKFtF93sQAYPtXHZUqDnQOQ15VE-NVd7RyCgHHNtCiV_UeDrRg7M0YBvPtL24w35MaaKyeLIs_YWZpNgjV3zNfdl86Lo3SEoU0_nVAqwZzBroUxrE6ekBDGisWvQ6NtrEZLRTgk2izPCUiT3XOj4bENwf3Ba1bCKGvIzmWx41KIVdNamN_c1YOiY1HL__1ryKwMad4JR-w

Access the Kubernetes Dashboard at https://192.168.56.100:31825

(screenshot: Kubernetes Dashboard)

Cluster Overview

Every 2.0s: kubectl get node,po,svc -A -owide                                                                                                             kmaster: Tue Oct 12 14:08:09 2021

NAME            STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
node/kmaster    Ready    control-plane,master   35m   v1.22.0   192.168.56.100   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker1   Ready    <none>                 23m   v1.22.0   192.168.56.101   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5
node/kworker2   Ready    <none>                 21m   v1.22.0   192.168.56.102   <none>        Ubuntu 20.04.2 LTS   5.4.0-77-generic   containerd://1.5.5

NAMESPACE              NAME                                             READY   STATUS    RESTARTS        AGE     IP               NODE       NOMINATED NODE   READINESS GATES
kube-system            pod/calico-kube-controllers-7659fb8886-dwvc4     1/1     Running   0               34m     192.168.189.2    kmaster    <none>           <none>
kube-system            pod/calico-node-2w8x5                            1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/calico-node-vqjsc                            1/1     Running   0               21m     192.168.56.102   kworker2   <none>           <none>
kube-system            pod/calico-node-zj98h                            1/1     Running   0               23m     192.168.56.101   kworker1   <none>           <none>
kube-system            pod/coredns-7568f67dbd-4jssz                     1/1     Running   0               34m     192.168.189.3    kmaster    <none>           <none>
kube-system            pod/coredns-7568f67dbd-vn8ph                     1/1     Running   0               34m     192.168.189.1    kmaster    <none>           <none>
kube-system            pod/etcd-kmaster                                 1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-apiserver-kmaster                       1/1     Running   0               35m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-controller-manager-kmaster              1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-proxy-2sqmm                             1/1     Running   0               21m     192.168.56.102   kworker2   <none>           <none>
kube-system            pod/kube-proxy-8z758                             1/1     Running   0               34m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/kube-proxy-brgl8                             1/1     Running   0               23m     192.168.56.101   kworker1   <none>           <none>
kube-system            pod/kube-scheduler-kmaster                       1/1     Running   0               35m     192.168.56.100   kmaster    <none>           <none>
kube-system            pod/metrics-server-9577d976b-xzrgt               1/1     Running   0               9m27s   192.168.41.129   kworker1   <none>           <none>
kubernetes-dashboard   pod/dashboard-metrics-scraper-856586f554-kdgtw   1/1     Running   0               6m57s   192.168.41.130   kworker1   <none>           <none>
kubernetes-dashboard   pod/kubernetes-dashboard-67484c44f6-lbp5l        1/1     Running   0               6m57s   192.168.77.129   kworker2   <none>           <none>
kuboard                pod/kuboard-agent-2-767f88b647-pr7br             1/1     Running   1 (5m57s ago)   6m26s   192.168.189.5    kmaster    <none>           <none>
kuboard                pod/kuboard-agent-656c95877f-g968n               1/1     Running   1 (5m37s ago)   6m26s   192.168.189.6    kmaster    <none>           <none>
kuboard                pod/kuboard-etcd-th9nq                           1/1     Running   0               8m39s   192.168.56.100   kmaster    <none>           <none>
kuboard                pod/kuboard-questdb-68d5bfb5b-2tnwf              1/1     Running   0               6m26s   192.168.189.7    kmaster    <none>           <none>
kuboard                pod/kuboard-v3-5fc46b5557-44hlj                  1/1     Running   0               8m39s   192.168.189.4    kmaster    <none>           <none>

Install KubePi

https://kubeoperator.io/docs/kubepi/install/

kubectl apply -f https://raw.githubusercontent.com/KubeOperator/KubePi/master/docs/deploy/kubectl/kubepi.yaml

Get the access address

# Get the node IP
export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")

# Get the NodePort
export NODE_PORT=$(kubectl -n kube-system get services kubepi -o jsonpath="{.spec.ports[0].nodePort}")

# Build the access address
echo http://$NODE_IP:$NODE_PORT

Log in

Address: http://$NODE_IP:$NODE_PORT
Username: admin
Password: kubepi

Import the cluster and obtain a token

kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

(screenshots: importing the cluster into KubePi)


The components below require the VM configuration to be increased to at least 4 CPU cores and 8 GB of RAM.

Install KubeSphere

Install the KubeSphere prerequisites

Install the NFS file system

Install nfs-server

# On every machine (the yum command applies to the CentOS environment)
yum install -y nfs-utils

# Run the following on kmaster (192.168.56.100)
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

# Create the shared directory; the commands below start the NFS service
mkdir -p /nfs/data

# Run on the master
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Re-export the shares to apply the configuration
exportfs -r

# Check that the export took effect
exportfs

Configure nfs-client

showmount -e 192.168.56.100
mkdir -p /nfs/data
mount -t nfs 192.168.56.100:/nfs/data /nfs/data
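
A quick way to confirm the share is mounted on a node:

mount | grep /nfs/data
df -h /nfs/data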

Configure default storage

Configure a default StorageClass with dynamic provisioning:

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when a PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: docker.io/v5cn/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.56.100 ## your NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.56.100
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Confirm the configuration took effect

kubectl get sc
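
To verify that dynamic provisioning works end to end, a throwaway PersistentVolumeClaim can be created against the nfs-storage class; the claim name nfs-test-pvc below is only an example:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
EOF

# the claim should reach the Bound state once the provisioner has created a PV
kubectl get pvc nfs-test-pvc

# remove the test claim afterwards
kubectl delete pvc nfs-test-pvc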

Install KubeSphere

KubeSphere does not yet support Kubernetes 1.22; this section will be added later...

Install Kubernetes cluster monitoring with prometheus-operator

View cluster info

kubectl cluster-info

Clone prometheus-operator

git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus

Create the namespace, CustomResourceDefinitions, and operator pod

Many of the images referenced in the upstream manifests cannot be pulled in this environment, so apply the modified manifests in the kube-prometheus directory of this repository instead.

kubectl apply -f manifests/setup
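
The CustomResourceDefinitions registered by the setup step can take a moment to become usable; before applying the remaining manifests it can help to wait until they are established (a sketch using kubectl wait):

kubectl wait crd --all --for=condition=Established --timeout=120s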

Check the namespace

kubectl get ns monitoring

Check the pods

kubectl get pods -n monitoring

Apply the remaining deployment manifests

kubectl apply -f manifests/

Check pods and Services

kubectl get pods,svc -n monitoring

Change the Service access type

Prometheus:

kubectl --namespace monitoring patch svc prometheus-k8s -p '{"spec": {"type": "NodePort"}}'

Alertmanager:

kubectl --namespace monitoring patch svc alertmanager-main -p '{"spec": {"type": "NodePort"}}'

Grafana:

kubectl --namespace monitoring patch svc grafana -p '{"spec": {"type": "NodePort"}}'

Check the assigned ports

$ kubectl -n monitoring get svc  | grep NodePort
alertmanager-main       NodePort    10.96.212.116   <none>        9093:30496/TCP,8080:30519/TCP   7m53s
grafana                 NodePort    10.96.216.187   <none>        3000:31045/TCP                  7m50s
prometheus-k8s          NodePort    10.96.180.95    <none>        9090:30253/TCP,8080:30023/TCP   7m44s
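
The NodePorts are assigned at random, so they will differ between installations; the same jsonpath lookup used in the KubePi section can print a specific port directly, for example Grafana's:

kubectl -n monitoring get svc grafana -o jsonpath="{.spec.ports[0].nodePort}"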

Access the Grafana dashboard

http://192.168.56.100:31045

Username: admin
Password: admin

(screenshots: Grafana dashboards)

Access the Prometheus dashboard

http://192.168.56.100:30253

Access the Alertmanager dashboard

http://192.168.56.100:30496

Tear down the prometheus-operator monitoring stack

kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
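
To remove the whole Vagrant cluster itself rather than just the monitoring stack, the VMs can be destroyed from the project directory on the host (a standard Vagrant command, independent of the scripts in this repository):

vagrant destroy -f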

Reference: https://computingforgeeks.com/setup-prometheus-and-grafana-on-kubernetes
