First, you need to configure your application to provide custom metrics. This is done on the application-development side. Here is an example of how to do it in Go: Watching Metrics With Prometheus https://mycodesmells.com/post/watching-metrics-with-prometheus
-
Second, you need to define a Deployment for your application (or Pod, or whatever you want) and deploy it to Kubernetes, for example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: podinfo
      annotations:
        prometheus.io/scrape: 'true'
    spec:
      containers:
      - name: podinfod
        image: stefanprodan/podinfo:0.0.1
        imagePullPolicy: Always
        command:
        - ./podinfo
        - -port=9898
        - -logtostderr=true
        - -v=2
        volumeMounts:
        - name: metadata
          mountPath: /etc/podinfod/metadata
          readOnly: true
        ports:
        - containerPort: 9898
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readyz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 2
          failureThreshold: 1
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9898
          initialDelaySeconds: 1
          periodSeconds: 3
          failureThreshold: 2
        resources:
          requests:
            memory: "32Mi"
            cpu: "1m"
          limits:
            memory: "256Mi"
            cpu: "100m"
      volumes:
      - name: metadata
        downwardAPI:
          items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
            - path: "annotations"
              fieldRef:
                fieldPath: metadata.annotations
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo
spec:
  type: NodePort
  ports:
    - port: 9898
      targetPort: 9898
      nodePort: 31198
      protocol: TCP
  selector:
    app: podinfo
Pay attention to the annotation prometheus.io/scrape: 'true'. It asks Prometheus to read metrics from the resource. Also note that there are two more annotations with default values; if you change them in your application, you need to add them with the correct values:
- prometheus.io/path: overrides the metrics path if it is not /metrics.
- prometheus.io/port: scrape the pod on the indicated port instead of the pod's declared ports (the default is a port-free target if none are declared).
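For example, if the application served its metrics at a non-default path and port (the values /stats and 8080 below are hypothetical), the pod template annotations would look like this:

```yaml
annotations:
  prometheus.io/scrape: 'true'
  prometheus.io/path: '/stats'   # only needed when the path is not /metrics
  prometheus.io/port: '8080'     # only needed when not using a declared port
```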
-
Next, Prometheus in Istio uses its own configuration, modified for Istio's purposes, and by default it skips custom metrics from Pods. Therefore, you need to modify it a bit.
In my case, I took the configuration for Pod metrics from this example https://github.com/stefanprodan/k8s-prom-hpa/blob/master/prometheus/prometheus-cfg.yaml and modified Istio's Prometheus configuration only for Pods:
kubectl edit configmap -n istio-system prometheus
I changed the scrape configuration for Pods according to the example mentioned above:
# pod's declared ports (default is a port-free target if none are declared).
- job_name: 'kubernetes-pods'
  # if you want to use metrics on jobs, set the below field to
  # true to prevent Prometheus from setting the `job` label
  # automatically.
  honor_labels: false
  kubernetes_sd_configs:
  - role: pod
  # skip verification so you can do HTTPS to pods
  tls_config:
    insecure_skip_verify: true
  # make sure your labels are in order
  relabel_configs:
  # these labels tell Prometheus to automatically attach source
  # pod and namespace information to each collected sample, so
  # that they'll be exposed in the custom metrics API automatically.
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod
  # these labels tell Prometheus to look for
  # prometheus.io/{scrape,path,port} annotations to configure
  # how to scrape
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
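To see what the __address__ rule above actually does: Prometheus joins the source_labels with ";" and then applies the regex and replacement, so the prometheus.io/port annotation overrides the pod's declared port. A sketch (the pod IP and ports are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// scrapeAddress applies the relabel rule from the config above:
// regex ([^:]+)(?::\d+)?;(\d+) with replacement $1:$2.
func scrapeAddress(joined string) string {
	re := regexp.MustCompile(`([^:]+)(?::\d+)?;(\d+)`)
	return re.ReplaceAllString(joined, "$1:$2")
}

func main() {
	// Prometheus joins source_labels with ";" before matching:
	// pod __address__ "10.1.0.5:9898", prometheus.io/port annotation "8080".
	fmt.Println(scrapeAddress("10.1.0.5:9898;8080")) // → 10.1.0.5:8080
}
```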
After that, the custom metrics appeared in Prometheus. But be careful with changing the Prometheus configuration, because some metrics required by Istio may disappear; double-check everything.
-
Now it is time to install the Prometheus custom metrics adapter https://github.com/directxman12/k8s-prometheus-adapter:
- Download the repository https://github.com/directxman12/k8s-prometheus-adapter
- Change the address of the Prometheus server in the file <repository-directory>/deploy/manifests/custom-metrics-apiserver-deployment.yaml, for example: - --prometheus-url=http://prometheus.istio-system:9090/
- Run the command kubectl apply -f <repository-directory>/deploy/manifests
After some time, custom.metrics.k8s.io/v1beta1 should appear in the output of the command kubectl api-versions.
Also, check the output of the custom metrics API with the commands kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . and kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
The output of the last command should look like the following example:
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-kv5g9",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "901m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-nm7bl",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "898m"
    }
  ]
}
If so, you can move on to the next step. If not, check which APIs are available for Pods in the custom metrics API with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "pods/" and for http_requests with kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . | grep "http". The metric names are generated from the metrics Prometheus gathers from the Pods, so if they are empty, that is the direction to investigate.
-
The last step is to configure the HPA and test it. In my case, I created an HPA for the podinfo application, defined as follows:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10
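For reference, the HPA control loop compares the per-pod average of http_requests with targetAverageValue (the 901m/898m values shown earlier are milli-units, i.e. roughly 0.9 requests per second) and computes approximately desiredReplicas = ceil(currentReplicas × currentAverageValue / targetAverageValue). A sketch with illustrative request rates:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas implements the core HPA scaling rule:
// ceil(currentReplicas * currentAvg / targetAvg).
func desiredReplicas(current int, currentAvg, targetAvg float64) int {
	return int(math.Ceil(float64(current) * currentAvg / targetAvg))
}

func main() {
	// 2 pods averaging 25 req/s each against a target of 10 req/s per pod.
	fmt.Println(desiredReplicas(2, 25, 10)) // → 5
}
```

The result is still clamped to the minReplicas/maxReplicas bounds from the spec above.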
To generate load, I used hey, a simple Go tool:
#install hey
go get -u github.com/rakyll/hey
#do 10K requests rate limited at 25 QPS
hey -n 10000 -q 5 -c 5 http://<K8S-IP>:31198/healthz
After some time, I saw the replica count change, using the commands kubectl describe hpa and kubectl get hpa.