Kubernetes Cluster Monitoring with Prometheus

2023-05-16

  • Prometheus is a monitoring system originally built at SoundCloud. It became a community open-source project in 2012 and has a very active developer and user community. To emphasize its open-source nature and independent governance, Prometheus joined the CNCF in 2016, becoming the second hosted project after Kubernetes.

https://prometheus.io/

https://github.com/prometheus

https://blog.stanley.wang/2019/01/18/%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A34%EF%BC%9Akubernetes%E9%9B%86%E7%BE%A4%E7%9A%84%E7%9B%91%E6%8E%A7%E5%92%8C%E6%97%A5%E5%BF%97%E5%88%86%E6%9E%90/

Prometheus features

  • Multi-dimensional data model: time-series data identified by a metric name and key/value label pairs
  • Built-in time-series database (TSDB)
  • PromQL: a flexible query language that uses the multi-dimensional data for complex queries (see the query example after this list)
  • Time-series data is collected over HTTP with a pull model (from exporters)
  • The Pushgateway component is also supported for pushing data
  • Targets are found via service discovery or static configuration
  • Multiple graphing modes and dashboard support
  • Can be plugged into Grafana as a data source
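
A minimal sketch of what a PromQL query looks like through the HTTP API, assuming the prometheus.od.com ingress host that is set up later in this post and the node_cpu metric exposed by node-exporter v0.15:

# instantaneous per-node CPU usage over the last 5 minutes
curl -G 'http://prometheus.od.com/api/v1/query' \
  --data-urlencode 'query=sum(rate(node_cpu{mode!="idle"}[5m])) by (instance)'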

Prometheus architecture

(Figure: Prometheus architecture diagram)

  • Exporters (custom exporters can be developed)

    • expose an HTTP endpoint
    • define the monitored metrics and their labels (dimensions)
    • organize the monitoring data in a fixed structure
    • collected as time series
  • Prometheus Server

    • Retrieval (the collector): evaluates the discovery rules and scrapes metrics from the exporters
    • TSDB (time-series database)
    • Configure (static_config, kubernetes_sd (auto-discovery from Kubernetes metadata), file_sd (discovery rules written to files and loaded from them))
    • HTTP Server: pushes alerts to Alertmanager and exposes the query API used by Grafana
  • Grafana

    • a wide range of plugins
    • data source (Prometheus)
    • Dashboard (PromQL)
  • Alertmanager

    • rules.yml (PromQL)

Prometheus vs. Zabbix

Prometheus | Zabbix
---------- | ------
Backend written in Go; relatively easy to customize | Backend written in C with a PHP web UI; customization is difficult
Better suited to cloud environments, with especially good Kubernetes support | Better suited to monitoring physical and virtual machines
Metrics are stored in a time-series database, so existing data can easily be aggregated in new ways | Metrics are stored in a relational database, making it hard to derive new dimensions from existing data
Its own UI is relatively weak and much configuration is done in config files, but Grafana can be used for dashboards | Mature graphical interface; almost all configuration can be done in the UI
Supports larger clusters and is faster | Cluster size is limited to roughly 10,000 nodes
Has grown rapidly since 2015, with an active community and an expanding range of use cases | Longer history; ready-made solutions exist for many monitoring scenarios

exporter

kube-state-metrics

Monitors the basic running state of Kubernetes objects

https://quay.io/repository/coreos/kube-state-metrics?tab=info

Download the kube-state-metrics image

docker pull quay.io/coreos/kube-state-metrics:v1.5.0
docker tag 91599517197a harbor.od.com/public/kube-state-metrics:v1.5.0
docker push harbor.od.com/public/kube-state-metrics:v1.5.0
mkdir /data/k8s-yaml/kube-state-metrics
cd /data/k8s-yaml/kube-state-metrics

cat rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system

cat dp.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  labels:
    grafanak8sapp: "true"
    app: kube-state-metrics
  name: kube-state-metrics
  namespace: kube-system
spec:
  selector:
    matchLabels:
      grafanak8sapp: "true"
      app: kube-state-metrics
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        grafanak8sapp: "true"
        app: kube-state-metrics
    spec:
      containers:
      - name: kube-state-metrics
        image: harbor.od.com/public/kube-state-metrics:v1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
      serviceAccountName: kube-state-metrics

Apply

kubectl apply -f http://k8s-yaml.od.com/kube-state-metrics/rbac.yaml
kubectl apply -f http://k8s-yaml.od.com/kube-state-metrics/dp.yaml

Verify the container

curl 172.7.22.9:8080/healthz
curl http://10.154.3.4:8080/healthz
curl http://10.154.3.4:8080/metrics
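
To confirm the exporter is actually emitting Kubernetes object metrics, a couple of well-known series can be grepped from the same endpoint (the pod IP is the one used above; the series names are from kube-state-metrics v1.5):

curl -s http://10.154.3.4:8080/metrics | grep -E '^kube_(pod_status_phase|deployment_status_replicas)' | head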

node-exporter

Monitors compute-node resources.
Collects metrics from machines (physical servers, VMs, cloud hosts, etc.), including CPU, memory, disk, network, open file counts, and more.

Download the image

docker pull prom/node-exporter:v0.15.0
docker tag 12d51ffa2b22 harbor.od.com/public/node-exporter:v0.15.0
docker push harbor.od.com/public/node-exporter:v0.15.0

In the /data/k8s-yaml/node-exporter directory:
cat ds.yaml

kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    daemon: "node-exporter"
    grafanak8sapp: "true"
spec:
  selector:
    matchLabels:
      daemon: "node-exporter"
      grafanak8sapp: "true"
  template:
    metadata:
      name: node-exporter
      labels:
        daemon: "node-exporter"
        grafanak8sapp: "true"
    spec:
      volumes:
      - name: proc
        hostPath: 
          path: /proc
          type: ""
      - name: sys
        hostPath:
          path: /sys
          type: ""
      containers:
      - name: node-exporter
        image: harbor.od.com/public/node-exporter:v0.15.0
        imagePullPolicy: IfNotPresent
        args:
        - --path.procfs=/host_proc
        - --path.sysfs=/host_sys
        ports:
        - name: node-exporter
          hostPort: 9100
          containerPort: 9100
          protocol: TCP
        volumeMounts:
        - name: sys
          readOnly: true
          mountPath: /host_sys
        - name: proc
          readOnly: true
          mountPath: /host_proc
      hostNetwork: true
      tolerations:              # tolerate all taints so the DaemonSet also runs on tainted nodes
        - operator: "Exists"

Apply

kubectl apply -f http://k8s-yaml.od.com/node-exporter/ds.yaml

node-exporter listening port

netstat -lntup |grep 9100
tcp6       0      0 :::9100                 :::*                    LISTEN      102858/node_exporte

Check the metrics

curl localhost:9100/metrics
curl http://192.168.41.31:9100/metrics
# access kube-state-metrics
curl 172.7.22.9:8080/metrics
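
A quick way to confirm the node-exporter data is usable is to grep a few series from the same output (metric names follow the v0.15 naming scheme, e.g. node_cpu rather than the later node_cpu_seconds_total):

curl -s localhost:9100/metrics | grep -E '^node_(cpu|memory_MemAvailable|filesystem_free)' | head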

cadvisor

Monitors resource usage inside containers

docker pull google/cadvisor:v0.28.3
docker tag 75f88e3ec333 harbor.od.com/public/cadvisor:v0.28.3
docker push harbor.od.com/public/cadvisor:v0.28.3

In the /data/k8s-yaml/cadvisor directory:
cat ds.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system
  labels:
    app: cadvisor
spec:
  selector:
    matchLabels:
      name: cadvisor
  template:
    metadata:
      labels:
        name: cadvisor
    spec:
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: cadvisor
        image: harbor.od.com/public/cadvisor:v0.28.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
        ports:
          - name: http
            containerPort: 4194
            protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 4194
          initialDelaySeconds: 5
          periodSeconds: 10
        args:
          - --housekeeping_interval=10s
          - --port=4194
      terminationGracePeriodSeconds: 30
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /data/docker

Adjust the cgroup symlink on the compute nodes.
On all compute nodes:

mount -o remount,rw /sys/fs/cgroup/
ln -s /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuacct,cpu
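
A quick sanity check that the remount and symlink took effect (paths as used above):

mount | grep ' /sys/fs/cgroup '
ls -ld /sys/fs/cgroup/cpuacct,cpu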

Apply

kubectl apply -f http://k8s-yaml.od.com/cadvisor/ds.yaml

Check the metrics

curl http://192.168.41.32:4194/metrics
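
If the endpoint responds, the container-level series should be visible as well (node IP as above):

curl -s http://192.168.41.32:4194/metrics | grep '^container_cpu_usage_seconds_total' | head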

blackbox-exporter

Monitors whether business containers are alive

Download the image

docker pull prom/blackbox-exporter:v0.15.1
docker tag 81b70b6158be harbor.od.com/public/blackbox-exporter:v0.15.1
docker push harbor.od.com/public/blackbox-exporter:v0.15.1

In the /data/k8s-yaml/blackbox-export directory:
cat cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
  namespace: kube-system
data:
  blackbox.yml: |-
    modules:
      http_2xx:
        prober: http
        timeout: 2s
        http:
          valid_http_versions: ["HTTP/1.1", "HTTP/2"]
          valid_status_codes: [200,301,302]
          method: GET
          preferred_ip_protocol: "ip4"
      tcp_connect:
        prober: tcp
        timeout: 2s

cat dp.yaml

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: blackbox-exporter
  namespace: kube-system
  labels:
    app: blackbox-exporter
  annotations:
    deployment.kubernetes.io/revision: 1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      volumes:
      - name: config
        configMap:
          name: blackbox-exporter
          defaultMode: 420
      containers:
      - name: blackbox-exporter
        image: harbor.od.com/public/blackbox-exporter:v0.15.1
        imagePullPolicy: IfNotPresent
        args:
        - --config.file=/etc/blackbox_exporter/blackbox.yml
        - --log.level=info
        - --web.listen-address=:9115
        ports:
        - name: blackbox-port
          containerPort: 9115
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 50Mi
        volumeMounts:
        - name: config
          mountPath: /etc/blackbox_exporter
        readinessProbe:
          tcpSocket:
            port: 9115
          initialDelaySeconds: 5
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3

cat svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: blackbox-exporter
  namespace: kube-system
spec:
  selector:
    app: blackbox-exporter
  ports:
    - name: blackbox-port
      protocol: TCP
      port: 9115

cat ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blackbox-exporter
  namespace: kube-system
spec:
  rules:
  - host: blackbox.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: blackbox-exporter
          servicePort: blackbox-port

DNS record

blackbox           A    10.4.7.10

Apply

kubectl apply -f http://k8s-yaml.od.com/blackbox-export/cm.yaml
kubectl apply -f http://k8s-yaml.od.com/blackbox-export/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/blackbox-export/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/blackbox-export/ingress.yaml

Access in a browser:
http://blackbox.od.com/
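
A probe can also be triggered by hand to confirm both modules work before wiring them into Prometheus; the targets below are placeholders, so substitute a real HTTP service and a reachable host:port:

curl 'http://blackbox.od.com/probe?module=http_2xx&target=www.baidu.com'
# from inside the cluster, via the service name:
curl 'http://blackbox-exporter.kube-system:9115/probe?module=tcp_connect&target=10.4.7.21:22'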

Deploy the Prometheus server

Download the image

docker pull prom/prometheus:v2.14.0
docker tag 7317640d555e harbor.od.com/infra/prometheus:v2.14.0
docker push harbor.od.com/infra/prometheus:v2.14.0

Prepare the Prometheus configuration files

mkdir -pv /data/nfs-volume/prometheus/{etc,prom-db}
cd /data/nfs-volume/prometheus/etc/

cp /opt/certs/ca.pem .
cp -a /opt/certs/client.pem .
cp -a /opt/certs/client-key.pem .

Prometheus configuration file:
cat /data/nfs-volume/prometheus/etc/prometheus.yml

global:
  scrape_interval:     15s
  evaluation_interval: 15s
scrape_configs:
- job_name: 'etcd'
  tls_config:
    ca_file: /data/etc/ca.pem
    cert_file: /data/etc/client.pem
    key_file: /data/etc/client-key.pem
  scheme: https
  static_configs:
  - targets:
    - '10.4.7.12:2379'
    - '10.4.7.21:2379'
    - '10.4.7.22:2379'
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
- job_name: 'kubernetes-kubelet'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __address__
    replacement: ${1}:10255
- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __address__
    replacement: ${1}:4194
- job_name: 'kubernetes-kube-state'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
  - source_labels: [__meta_kubernetes_pod_label_grafanak8sapp]
    regex: .*true.*
    action: keep
  - source_labels: ['__meta_kubernetes_pod_label_daemon', '__meta_kubernetes_pod_node_name']
    regex: 'node-exporter;(.*)'
    action: replace
    target_label: nodename
- job_name: 'blackbox_http_pod_probe'
  metrics_path: /probe
  kubernetes_sd_configs:
  - role: pod
  params:
    module: [http_2xx]
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_blackbox_scheme]
    action: keep
    regex: http
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_blackbox_port,  __meta_kubernetes_pod_annotation_blackbox_path]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+);(.+)
    replacement: $1:$2$3
    target_label: __param_target
  - action: replace
    target_label: __address__
    replacement: blackbox-exporter.kube-system:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
- job_name: 'blackbox_tcp_pod_probe'
  metrics_path: /probe
  kubernetes_sd_configs:
  - role: pod
  params:
    module: [tcp_connect]
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_blackbox_scheme]
    action: keep
    regex: tcp
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_blackbox_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __param_target
  - action: replace
    target_label: __address__
    replacement: blackbox-exporter.kube-system:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
- job_name: 'traefik'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: keep
    regex: traefik
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name

Notes:
action: the relabeling action

  • replace: the default; match the concatenated values of source_labels against regex and write replacement (which may reference captured groups) to target_label
  • keep: keep only targets whose concatenated source_labels match regex, and drop the rest
  • drop: drop targets whose concatenated source_labels match regex
  • hashmod: set target_label to the modulus of a hash of the concatenated source_labels
  • labelmap: match regex against all label names, then copy the values of the matching labels to new label names given by replacement (group references $1, $2)
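
Before loading this file it can be validated with promtool, which ships in the Prometheus image; a sketch run on the NFS host, assuming Docker is available there:

docker run --rm --entrypoint /bin/promtool \
  -v /data/nfs-volume/prometheus/etc:/data/etc \
  harbor.od.com/infra/prometheus:v2.14.0 check config /data/etc/prometheus.yml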

In the /data/k8s-yaml/prometheus directory:
cat rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
  namespace: infra
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: infra

cat dp.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "5"
  labels:
    name: prometheus
  name: prometheus
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: prometheus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      nodeName: hdss7-22.host.com
      containers:
      - name: prometheus
        image: harbor.od.com/infra/prometheus:v2.14.0
        imagePullPolicy: IfNotPresent
        command:
        - /bin/prometheus
        args:
        - --config.file=/data/etc/prometheus.yml
        - --storage.tsdb.path=/data/prom-db
        - --storage.tsdb.retention=72h
        - --storage.tsdb.min-block-duration=10m
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: data
        resources:
          requests:
            cpu: "1000m"
            memory: "1.5Gi"
          limits:
            cpu: "2000m"
            memory: "3Gi"
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      serviceAccountName: prometheus
      volumes:
      - name: data
        nfs:
          server: hdss7-200
          path: /data/nfs-volume/prometheus

cat svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: infra
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus

cat ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: prometheus
  namespace: infra
spec:
  rules:
  - host: prometheus.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090

DNS record

prometheus         A    10.4.7.10

Apply

kubectl apply -f http://k8s-yaml.od.com/prometheus/rbac.yaml
kubectl apply -f http://k8s-yaml.od.com/prometheus/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/prometheus/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/prometheus/ingress.yaml

Access in a browser:
http://prometheus.od.com

Monitoring the traefik-ingress-controller

Add the following annotations to traefik's pod controller and restart the pods; monitoring then takes effect. A patch sketch follows the JSON block below.
"annotations": {
  "prometheus_io_scheme": "traefik",
  "prometheus_io_path": "/metrics",
  "prometheus_io_port": "8080",
  "blackbox_path": "/",
  "blackbox_port": "8080",
  "blackbox_scheme": "http"
}
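
One way to add these annotations is to patch the controller's pod template; the kind, name and namespace below are assumptions, so adjust them to match how traefik is actually deployed in the cluster:

kubectl -n kube-system patch daemonset traefik-ingress --type merge -p '
{"spec":{"template":{"metadata":{"annotations":{
  "prometheus_io_scheme":"traefik","prometheus_io_path":"/metrics","prometheus_io_port":"8080",
  "blackbox_path":"/","blackbox_port":"8080","blackbox_scheme":"http"}}}}}'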

blackbox (*****): monitoring whether a service is alive

Add annotations to the pod controller and restart the pods; monitoring then takes effect.

tcp

"annotations": {
  "blackbox_port": "20880",
  "blackbox_scheme": "tcp"
}

http

"annotations": {
  "blackbox_path": "/hello?name=health",
  "blackbox_port": "8080",
  "blackbox_scheme": "http"
}

jvm

"annotations": {
  "prometheus_io_scrape": "true",
  "prometheus_io_port": "12346",
  "prometheus_io_path": "/",
  "blackbox_path": "/hello",
  "blackbox_port": "8080",
  "blackbox_scheme": "http"
}
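
The same pattern applies to application workloads; a hypothetical example for a JVM service (the deployment name and namespace below are placeholders):

kubectl -n app patch deployment dubbo-demo-service --type merge -p '
{"spec":{"template":{"metadata":{"annotations":{
  "prometheus_io_scrape":"true","prometheus_io_port":"12346","prometheus_io_path":"/",
  "blackbox_path":"/hello","blackbox_port":"8080","blackbox_scheme":"http"}}}}}'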

Grafana

Prepare the image

docker pull grafana/grafana:5.4.2
docker tag 6f18ddf9e552 harbor.od.com/infra/grafana:v5.4.2
docker push harbor.od.com/infra/grafana:v5.4.2
mkdir /data/k8s-yaml/grafana
cd /data/k8s-yaml/grafana

Resource manifests
cat rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - deployments
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana
subjects:
- kind: User
  name: k8s-node

cat dp.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: grafana
    name: grafana
  name: grafana
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: grafana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: grafana
        name: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.od.com/infra/grafana:v5.4.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: data
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/grafana
        name: data

cat svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: infra
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana

cat ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: infra
spec:
  rules:
  - host: grafana.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000

Create the mount directory

mkdir /data/nfs-volume/grafana

Apply

kubectl apply -f http://k8s-yaml.od.com/grafana/rbac.yaml
kubectl apply -f http://k8s-yaml.od.com/grafana/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/grafana/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/grafana/ingress.yaml

DNS record

grafana            A    10.4.7.10

Access in a browser:
http://grafana.od.com
Log in with admin / admin
Change the password to admin123

Install plugins

Method 1:
Install the plugins from inside the container

kubectl exec -it -n infra grafana-d6588db94-mtd6b -- /bin/bash

grafana-cli plugins install grafana-kubernetes-app
grafana-cli plugins install grafana-clock-panel
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install briangann-gauge-panel
grafana-cli plugins install natel-discrete-panel

Method 2 (used here):
In the /data/nfs-volume/grafana/plugins directory:

wget https://grafana.com/api/plugins/grafana-kubernetes-app/versions/1.0.1/download -O  grafana-kubernetes-app.zip
wget https://grafana.com/api/plugins/grafana-clock-panel/versions/1.0.2/download -O grafana-clock-panel.zip
wget https://grafana.com/api/plugins/grafana-piechart-panel/versions/1.3.6/download -O grafana-piechart-panel.zip
wget https://grafana.com/api/plugins/briangann-gauge-panel/versions/0.0.6/download -O briangann-gauge-panel.zip
wget https://grafana.com/api/plugins/natel-discrete-panel/versions/0.0.9/download -O natel-discrete-panel.zip

After downloading, unzip each archive and rename the extracted directory to the plugin id.
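
A sketch for one of the archives (the directory name inside the zip may differ by plugin version, so check and rename it accordingly):

cd /data/nfs-volume/grafana/plugins
unzip grafana-kubernetes-app.zip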

Restart Grafana after installing the plugins for them to take effect.

data sources

Prometheus access address inside the Kubernetes cluster:
http://prometheus.infra.svc:9090
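
The address can be checked from inside the cluster before adding the data source; a throwaway pod is one way to do it (the busybox image tag is an assumption):

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  wget -qO- http://prometheus.infra.svc:9090/-/healthy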

Plugins

The base64-encoded values in /root/.kube/config can be decoded with base64 -d.
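
A sketch, assuming the kubeconfig embeds the certificates inline as *-data fields (this is what the Grafana Kubernetes App data source asks for):

grep certificate-authority-data /root/.kube/config | awk '{print $2}' | base64 -d
grep client-certificate-data /root/.kube/config | awk '{print $2}' | base64 -d
grep client-key-data /root/.kube/config | awk '{print $2}' | base64 -d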

Note: in the K8S Container dashboard, every panel's query needs
pod_name -> container_label_io_kubernetes_pod_name

Grafana dashboards:
https://grafana.com/grafana/dashboards/

alertmanager

Prepare the image

docker pull docker.io/prom/alertmanager:v0.19.0
docker tag 30594e96cbe8 harbor.od.com/infra/alertmanager:v0.19.0
docker push harbor.od.com/infra/alertmanager:v0.19.0
docker pull docker.io/prom/alertmanager:v0.14.0
docker images |grep alert
docker tag 23744b2d645c harbor.od.com/infra/alertmanager:v0.14.0
docker push harbor.od.com/infra/alertmanager:v0.14.0

Resource manifests

mkdir /data/k8s-yaml/alertmanager

cat cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: infra
data:
  config.yml: |-
    global:
      # how long with no further firing before an alert is declared resolved
      resolve_timeout: 5m
      # SMTP settings for sending alert e-mails
      smtp_smarthost: 'smtp.163.com:25'
      smtp_from: 'xiadongzhi1@163.com'
      smtp_auth_username: 'xiadongzhi1@163.com'
      smtp_auth_password: 'xxxxxx'
      smtp_require_tls: false
    # the root route that every alert enters; it defines how alerts are dispatched
    route:
      # labels used to regroup incoming alerts; e.g. alerts carrying cluster=A and alertname=LatencyHigh are batched into a single group
      group_by: ['alertname', 'cluster']
      # how long to wait after a new alert group is created before sending the initial notification,
      # so that several alerts for the same group can be collected and sent together
      group_wait: 30s

      # once the first notification has been sent, how long to wait before notifying about new alerts added to the group
      group_interval: 5m

      # how long to wait before re-sending a notification that was already sent successfully
      repeat_interval: 5m

      # default receiver: used when an alert matches no other route
      receiver: default

    receivers:
    - name: 'default'
      email_configs:
      - to: '87527941@qq.com'
        send_resolved: true
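
If amtool is available (it ships with Alertmanager), the configuration body can be checked before applying the ConfigMap by saving it to a local file first:

amtool check-config config.yml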

cat dp.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: harbor.od.com/infra/alertmanager:v0.14.0
        args:
          - "--config.file=/etc/alertmanager/config.yml"
          - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: alertmanager-cm
          mountPath: /etc/alertmanager
      volumes:
      - name: alertmanager-cm
        configMap:
          name: alertmanager-config
      imagePullSecrets:
      - name: harbor

cat svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: infra
spec:
  selector: 
    app: alertmanager
  ports:
    - port: 80
      targetPort: 9093

Apply

kubectl apply -f http://k8s-yaml.od.com/alertmanager/cm.yaml
kubectl apply -f http://k8s-yaml.od.com/alertmanager/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/alertmanager/svc.yaml
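
A quick check that the deployment came up and the API answers (the service exposes port 80, forwarded here to a local port):

kubectl -n infra get pods -l app=alertmanager
kubectl -n infra port-forward svc/alertmanager 9093:80 &
curl -s http://127.0.0.1:9093/api/v1/status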

Alerting rules

vi /data/nfs-volume/prometheus/etc/rules.yml

groups:
- name: hostStatsAlert
  rules:
  - alert: hostCpuUsageAlert
    expr: sum(avg without (cpu)(irate(node_cpu{mode!='idle'}[5m]))) by (instance) > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }}%)"
  - alert: hostMemUsageAlert
    expr: (node_memory_MemTotal - node_memory_MemAvailable)/node_memory_MemTotal > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }}%)"
  - alert: OutOfInodes
    expr: node_filesystem_files_free{fstype="overlay",mountpoint="/"} / node_filesystem_files{fstype="overlay",mountpoint="/"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of inodes (instance {{ $labels.instance }})"
      description: "Disk is almost running out of available inodes (< 10% left) (current value: {{ $value }})"
  - alert: OutOfDiskSpace
    expr: node_filesystem_free{fstype="overlay",mountpoint ="/rootfs"} / node_filesystem_size{fstype="overlay",mountpoint ="/rootfs"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputIn
    expr: sum by (instance) (irate(node_network_receive_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput in (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably receiving too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputOut
    expr: sum by (instance) (irate(node_network_transmit_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput out (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably sending too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadRate
    expr: sum by (instance) (irate(node_disk_bytes_read[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read rate (instance {{ $labels.instance }})"
      description: "Disk is probably reading too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskWriteRate
    expr: sum by (instance) (irate(node_disk_bytes_written[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write rate (instance {{ $labels.instance }})"
      description: "Disk is probably writing too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadLatency
    expr: rate(node_disk_read_time_ms[1m]) / rate(node_disk_reads_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (read operations > 100ms) (current value: {{ $value }})"
  - alert: UnusualDiskWriteLatency
    expr: rate(node_disk_write_time_ms[1m]) / rate(node_disk_writes_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (write operations > 100ms) (current value: {{ $value }})"
- name: http_status
  rules:
  - alert: ProbeFailed
    expr: probe_success == 0
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Probe failed (instance {{ $labels.instance }})"
      description: "Probe failed (current value: {{ $value }})"
  - alert: StatusCode
    expr: probe_http_status_code <= 199 OR probe_http_status_code >= 400
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Status Code (instance {{ $labels.instance }})"
      description: "HTTP status code is not 200-399 (current value: {{ $value }})"
  - alert: SslCertificateWillExpireSoon
    expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "SSL certificate will expire soon (instance {{ $labels.instance }})"
      description: "SSL certificate expires in 30 days (current value: {{ $value }})"
  - alert: SslCertificateHasExpired
    expr: probe_ssl_earliest_cert_expiry - time()  <= 0
    for: 5m
    labels:
      severity: error
    annotations:
      summary: "SSL certificate has expired (instance {{ $labels.instance }})"
      description: "SSL certificate has expired already (current value: {{ $value }})"
  - alert: BlackboxSlowPing
    expr: probe_icmp_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow ping (instance {{ $labels.instance }})"
      description: "Blackbox ping took more than 2s (current value: {{ $value }})"
  - alert: BlackboxSlowRequests
    expr: probe_http_duration_seconds > 2 
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow requests (instance {{ $labels.instance }})"
      description: "Blackbox request took more than 2s (current value: {{ $value }})"
  - alert: PodCpuUsagePercent
    expr: sum(sum(label_replace(irate(container_cpu_usage_seconds_total[1m]),"pod","$1","container_label_io_kubernetes_pod_name", "(.*)"))by(pod) / on(pod) group_right kube_pod_container_resource_limits_cpu_cores *100 )by(container,namespace,node,pod,severity) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Pod cpu usage percent has exceeded 80% (current value: {{ $value }}%)"

vi /data/nfs-volume/prometheus/etc/prometheus.yml

# append at the end of prometheus.yml
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager"]
rule_files:
 - "/data/etc/rules.yml"
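
The rule file can be validated the same way as the main config, again using promtool from the image (a sketch run on the NFS host):

docker run --rm --entrypoint /bin/promtool \
  -v /data/nfs-volume/prometheus/etc:/data/etc \
  harbor.od.com/infra/prometheus:v2.14.0 check rules /data/etc/rules.yml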

Reload Prometheus gracefully

[root@hdss7-21 ~]# ps -aux |grep prometheus
root      19509  0.7  0.6 169268 37432 ?        Ssl  11:28   3:44 traefik traefik --api --kubernetes --logLevel=INFO --insecureskipverify=true --kubernetes.endpoint=https://10.4.7.10:7443 --accesslog --accesslog.filepath=/var/log/traefik_access.log --traefiklog --traefiklog.filepath=/var/log/traefik.log --metrics.prometheus
root     111285  0.0  0.0 112712   980 pts/0    S+   20:00   0:00 grep --color=auto prometheus
root     122031  3.6  7.9 1060732 485196 ?      Ssl  10:26  21:12 /bin/prometheus --config.file=/data/etc/prometheus.yml --storage.tsdb.path=/data/prom-db --storage.tsdb.retention=72h --storage.tsdb.min-block-duration=10m
[root@hdss7-21 ~]# kill -SIGHUP 122031
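
An alternative to SIGHUP: if Prometheus had been started with --web.enable-lifecycle (not set in the dp.yaml above), the same reload could be triggered over HTTP:

curl -X POST http://prometheus.od.com/-/reload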