Kubernetes v1.16 Monitoring Series: Installing kube-prometheus

Download the Source Code

Here I install directly from the kube-prometheus source code; of course, you can also install it with Helm.

The commands used are as follows:

<code>git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus/manifests
ls
alertmanager-alertmanager.yaml prometheus-adapter-clusterRole.yaml
alertmanager-secret.yaml prometheus-adapter-configMap.yaml
alertmanager-serviceAccount.yaml prometheus-adapter-deployment.yaml
alertmanager-serviceMonitor.yaml prometheus-adapter-roleBindingAuthReader.yaml
alertmanager-service.yaml prometheus-adapter-serviceAccount.yaml
grafana-dashboardDatasources.yaml prometheus-adapter-service.yaml
grafana-dashboardDefinitions.yaml prometheus-clusterRoleBinding.yaml
grafana-dashboardSources.yaml prometheus-clusterRole.yaml
grafana-deployment.yaml prometheus-kubeControllerManagerService.yaml
grafana-serviceAccount.yaml
grafana-serviceMonitor.yaml prometheus-kubeSchedulerService.yaml
grafana-service.yaml prometheus-operator-serviceMonitor.yaml
kube-state-metrics-clusterRoleBinding.yaml prometheus-prometheus.yaml
kube-state-metrics-clusterRole.yaml prometheus-roleBindingConfig.yaml
kube-state-metrics-deployment.yaml prometheus-roleBindingSpecificNamespaces.yaml
kube-state-metrics-serviceAccount.yaml prometheus-roleConfig.yaml
kube-state-metrics-serviceMonitor.yaml prometheus-roleSpecificNamespaces.yaml
kube-state-metrics-service.yaml prometheus-rules.yaml
node-exporter-clusterRoleBinding.yaml prometheus-serviceAccount.yaml
node-exporter-clusterRole.yaml prometheus-serviceMonitorApiserver.yaml
node-exporter-daemonset.yaml prometheus-serviceMonitorCoreDNS.yaml
node-exporter-serviceAccount.yaml
node-exporter-serviceMonitor.yaml prometheus-serviceMonitorKubeControllerManager.yaml
node-exporter-service.yaml prometheus-serviceMonitorKubelet.yaml
prometheus-adapter-apiService.yaml prometheus-serviceMonitorKubeScheduler.yaml
prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml prometheus-serviceMonitor.yaml
prometheus-adapter-clusterRoleBindingDelegator.yaml prometheus-service.yaml
prometheus-adapter-clusterRoleBinding.yaml setup
prometheus-adapter-clusterRoleServerResources.yaml</code>
To access Grafana, Prometheus, and Alertmanager directly from outside the cluster, the following Service files need to be modified.

Modify the grafana-service.yaml file as follows:

<code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 32000
  type: NodePort
  selector:
    app: grafana</code>

Modify the alertmanager-service.yaml file as follows:

<code>apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30093
  type: NodePort
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP</code>

Modify the prometheus-service.yaml file as follows:

<code>apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30090
  type: NodePort
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP</code>

If the cluster was installed from binaries, the ControllerManager and Scheduler cannot be monitored out of the box, because no Services point at them; these Services need to be created manually.

prometheus-kubeControllerManagerService.yaml

<code>apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-controller-manager
  labels:
    k8s-app: kube-controller-manager
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10252
    targetPort: 10252
    protocol: TCP

---
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    k8s-app: kube-controller-manager
  name: kube-controller-manager
  namespace: kube-system
subsets:
- addresses:
  - ip: 10.6.2.121
  ports:
  - name: http-metrics
    port: 10252
    protocol: TCP</code>

prometheus-kubeSchedulerService.yaml

<code>apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-scheduler
  labels:
    k8s-app: kube-scheduler
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http-metrics
    port: 10251
    targetPort: 10251
    protocol: TCP

---
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-scheduler
  namespace: kube-system
  labels:
    k8s-app: kube-scheduler
subsets:
- addresses:
  - ip: 10.6.2.121
  ports:
  - name: http-metrics
    port: 10251
    protocol: TCP</code>

Run the following commands:

<code>kubectl apply -f setup/
kubectl apply -f ./</code>
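The manifests in ./ reference CRDs (Prometheus, ServiceMonitor, and so on) that setup/ registers, so the second apply can fail on the first attempt if the CRDs are not yet established. A small retry wrapper, a hypothetical helper rather than anything shipped with kube-prometheus, papers over that race:

```shell
# Retry a command up to 5 times with a short pause between attempts.
# Hypothetical helper; adjust the attempt count and sleep to taste.
apply_with_retry() {
  local attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge 5 ] && return 1
    sleep 1
  done
}

# Intended usage against the cluster:
#   apply_with_retry kubectl apply -f setup/
#   apply_with_retry kubectl apply -f ./
```

The same effect can be had by simply re-running `kubectl apply -f ./` once the CRDs show up in `kubectl get crd`.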

Verification

After the deployment completes, a namespace named monitoring is created, and all of the resource objects are deployed under that namespace. In addition, kube-prometheus automatically creates six CRD resource objects:

<code>[root@master-121 manifests]# kubectl get crd |grep coreos
alertmanagers.monitoring.coreos.com 2020-02-22T15:19:18Z
podmonitors.monitoring.coreos.com 2020-02-22T15:19:18Z
prometheuses.monitoring.coreos.com 2020-02-22T15:19:18Z
prometheusrules.monitoring.coreos.com 2020-02-22T15:19:18Z
servicemonitors.monitoring.coreos.com 2020-02-22T15:19:18Z
thanosrulers.monitoring.coreos.com 2020-02-22T15:19:19Z

[root@master-121 manifests]# kubectl get pods -n monitoring
NAME READY STATUS RESTARTS AGE
alertmanager-main-0 2/2 Running 0 3d21h
alertmanager-main-1 2/2 Running 0 3d21h
alertmanager-main-2 2/2 Running 0 3d21h
grafana-849658db4d-cp5h5 1/1 Running 0 3d20h
kube-state-metrics-5859ffdc64-hknmj 1/1 Running 0 3d21h
node-exporter-gb2x4 2/2 Running 0 3d21h
node-exporter-mjllw 2/2 Running 0 3d21h
node-exporter-v8vwd 2/2 Running 0 3d21h
node-exporter-xtwzj 2/2 Running 0 3d21h

prometheus-adapter-5cd5798d96-k9hqm 1/1 Running 0 3d21h
prometheus-k8s-0 3/3 Running 1 88m
prometheus-k8s-1 3/3 Running 1 88m
prometheus-operator-5d94cdc9bf-6flp9 1/1 Running 0 3d13h

[root@master-121 manifests]# kubectl get svc -n monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-main NodePort 10.16.101.12 <none> 9093:30093/TCP 3d21h
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 3d21h
blackbox-exporter ClusterIP None <none> 9115/TCP 18h
grafana NodePort 10.16.77.172 <none> 3000:32000/TCP 3d21h
kube-state-metrics ClusterIP None <none> 8080/TCP,8081/TCP 3d21h
node-exporter ClusterIP None <none> 9100/TCP 3d21h
prometheus-adapter ClusterIP 10.16.39.20 <none> 443/TCP 3d21h
prometheus-k8s NodePort 10.16.230.93 <none> 9090:30090/TCP 3d21h
prometheus-operated ClusterIP None <none> 9090/TCP 3d21h
prometheus-operator ClusterIP None <none> 8080/TCP 3d13h</code>
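These CRDs are how the whole stack is configured: the Operator watches them and reconciles the actual Prometheus and Alertmanager deployments. As a sketch of what they look like in use, a minimal ServiceMonitor (the `my-app` names below are hypothetical, not part of the shipped manifests) tells Prometheus which Services to scrape:

```yaml
# Hypothetical example: scrape every Service labeled app: my-app
# on its "metrics" port every 30 seconds.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s
```

This is the same mechanism the prometheus-serviceMonitor*.yaml files above use to pick up the kube-controller-manager and kube-scheduler Services created earlier.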

If nothing has failed, visit Prometheus and Grafana to take a look:

http://10.6.2.121:30090/targets


http://10.6.2.121:32000/


<code>Note: if there is still no monitoring data for the ControllerManager and Scheduler, check whether those components were started with their listen address set to --address=0.0.0.0.</code>
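For a binary install, that flag lives in the component's systemd unit. A hedged sketch for kube-controller-manager (the unit path, binary path, and other flags are assumptions about your setup):

```ini
# Hypothetical excerpt from /etc/systemd/system/kube-controller-manager.service.
# Only --address matters here; keep your existing flags, then run
# `systemctl daemon-reload && systemctl restart kube-controller-manager`.
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=0.0.0.0
```

The same change applies to kube-scheduler; by default both components bind their metrics port (10252 and 10251 respectively) to 127.0.0.1, which is why Prometheus on another host cannot reach them.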

