Absence of kn.revision.rps.stable metric. #16209
My goal is to create isolation with namespaces and resource quotas using a Helm chart, and to end up with a Grafana dashboard showing each namespace's resource usage and the request count for each Knative Service. So far I have written a Helm chart for this purpose, installed kube-prometheus-stack via its Helm chart, and added a ServiceMonitor resource based on the Observability section of the official docs. Here is my Helm chart template:

```yaml
---
# Source: knative-tenant/templates/resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: knative-tenant-alpha-quota
  namespace: knative-tenant-alpha
spec:
  hard:
    requests.cpu: 1000m
    requests.memory: 1Gi
    limits.cpu: 2000m
    limits.memory: 2Gi
    count/kservices.serving.knative.dev: 5
---
# Source: knative-tenant/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: knative-tenant-alpha-admin
  namespace: knative-tenant-alpha
---
# Source: knative-tenant/templates/serviceaccount.yaml
apiVersion: v1
kind: Secret
metadata:
  name: knative-tenant-alpha-admin-token
  namespace: knative-tenant-alpha
  annotations:
    kubernetes.io/service-account.name: knative-tenant-alpha-admin
type: kubernetes.io/service-account-token
---
# Source: knative-tenant/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: knative-tenant-alpha-admin
  namespace: knative-tenant-alpha
rules:
- apiGroups: ["serving.knative.dev"]
  resources: ["services", "configurations", "revisions", "routes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "pods/log", "events"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["eventing.knative.dev"]
  resources: ["brokers", "triggers", "eventtypes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
# Source: knative-tenant/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: knative-tenant-alpha-admin-binding
  namespace: knative-tenant-alpha
subjects:
- kind: ServiceAccount
  name: knative-tenant-alpha-admin
  namespace: knative-tenant-alpha
roleRef:
  kind: Role
  name: knative-tenant-alpha-admin
  apiGroup: rbac.authorization.k8s.io
---
# Source: knative-tenant/templates/knative-service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-tenant-alpha-service
  namespace: knative-tenant-alpha
spec:
  template:
    spec:
      serviceAccountName: knative-tenant-alpha-admin
      containers:
      - image: "ghcr.io/knative/helloworld-go:latest"
        ports:
        - containerPort: 8080
        env:
        - name: TARGET
          value: "From Helm-Chart"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
---
# Source: knative-tenant/templates/broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: knative-tenant-alpha-broker
  namespace: knative-tenant-alpha
---
# Source: knative-tenant/templates/trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: knative-tenant-alpha-trigger
  namespace: knative-tenant-alpha
spec:
  broker: knative-tenant-alpha-broker
  filter:
    attributes:
      type: dev.knative.testing.ping
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: knative-tenant-alpha-service
```

I can guarantee that the ServiceMonitor is connected and scraping data.
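For context, my ServiceMonitor is shaped roughly like the sketch below. The `release` label, the `app` selector, and the port name here are illustrative assumptions; they have to match the kube-prometheus-stack `serviceMonitorSelector` and the actual Knative Services in the cluster:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: knative-autoscaler
  namespace: knative-serving
  labels:
    release: knative  # assumption: must match the kube-prometheus-stack release selector
spec:
  selector:
    matchLabels:
      app: autoscaler  # assumption: label on the knative-serving autoscaler Service
  namespaceSelector:
    matchNames:
    - knative-serving
  endpoints:
  - port: http-metrics  # assumption: named metrics port on that Service
    interval: 30s
```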
And this is my KnativeServing configuration:

```yaml
# knative-serving-kourier.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: knative-serving
---
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  # Optional: set HA replicas for production readiness
  high-availability:
    replicas: 2
  # --- Install and configure Kourier as the ingress layer ---
  ingress:
    kourier:
      enabled: true
  config:
    network:
      ingress-class: "kourier.ingress.networking.knative.dev"
    observability:
      metrics-protocol: prometheus
      request-metrics-protocol: http/protobuf
      request-metrics-endpoint: http://knative-kube-prometheus-st-prometheus.observability.svc:9090/api/v1/otlp/v1/metrics
      tracing-protocol: http/protobuf
      tracing-endpoint: http://jaeger-collector.observability.svc:4318/v1/traces
      tracing-sampling-rate: "1"
```
Since I have restricted resource quotas in each namespace, I also need to add resource requests and limits for the queue-proxy container. The problem is that some metrics related to the queue-proxy container, like `kn.revision.rps.stable`, don't appear in the Prometheus database. I need to count and graph the number of requests that each Knative Service receives in each isolated namespace.
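In case it helps, this is how I plan to set the queue-proxy requests and limits: a minimal sketch using the per-revision `queue.sidecar.serving.knative.dev` annotations from the Knative docs. The resource values below are placeholders, not taken from my actual chart:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-tenant-alpha-service
  namespace: knative-tenant-alpha
spec:
  template:
    metadata:
      annotations:
        # Sketch: per-revision queue-proxy resources (placeholder values)
        queue.sidecar.serving.knative.dev/cpu-resource-request: "25m"
        queue.sidecar.serving.knative.dev/cpu-resource-limit: "100m"
        queue.sidecar.serving.knative.dev/memory-resource-request: "50Mi"
        queue.sidecar.serving.knative.dev/memory-resource-limit: "128Mi"
    spec:
      containers:
      - image: "ghcr.io/knative/helloworld-go:latest"
```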
These are the logs of […]. There is no sign of it exporting metrics or […].
This is an autoscaling metric
https://knative.dev/docs/serving/observability/metrics/serving-metrics/#knrevisionrpsstable
It's reported by the autoscaler when the autoscaling metric is RPS
https://knative.dev/docs/serving/autoscaling/autoscaling-metrics/#setting-metrics-per-revision
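For example, a revision opts into RPS-based autoscaling through annotations on its revision template, as described in the second link. A minimal sketch, with a placeholder target value:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-tenant-alpha-service
  namespace: knative-tenant-alpha
spec:
  template:
    metadata:
      annotations:
        # Switch the autoscaling metric from the default (concurrency) to RPS;
        # the autoscaler then reports kn.revision.rps.stable for this revision.
        autoscaling.knative.dev/metric: "rps"
        # Placeholder: scale to keep roughly 150 requests/sec per replica.
        autoscaling.knative.dev/target: "150"
    spec:
      containers:
      - image: "ghcr.io/knative/helloworld-go:latest"
```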