Describe the bug
In 2024, we tried installing the Grafana Operator on OpenShift through OperatorHub. The initial installation failed and we did not pursue it further at the time. Recently we attempted a fresh installation after cleaning up all related resources (ConfigMaps, Subscription, Secrets, CRDs, etc.), but it still fails.
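For reference, the cleanup before the new attempt looked roughly like the following; the exact resource names are from memory, the old CSV version was older than v5.19.4, and the label-based delete assumes OLM's usual operator label:

oc delete subscription grafana-operator -n openshift-operators
oc delete csv -n openshift-operators -l operators.coreos.com/grafana-operator.openshift-operators
oc get crd -o name | grep grafana.integreatly.org | xargs oc delete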
Version
The version is 5.19.4, by Grafana Labs, installed through OpenShift's OperatorHub: https://operatorhub.io/operator/grafana-operator/v5/grafana-operator.v5.19.4
To Reproduce
Steps to reproduce the behavior:
- Go to 'OperatorHub' in an OpenShift cluster
- Search for 'Grafana'
- Install "Grafana Operator"
- Select the v5 channel and installation into a specific namespace, with automatic upgrades (we have also tried the all-namespaces mode several times; in that case it installs into openshift-operators).
- Check the logs and events for the created deployment, "grafana-operator-controller-manager-v5".
- The installation fails at the CSV stage with "install failed: deployment grafana-operator-controller-manager-v5 not ready before timeout: deployment "grafana-operator-controller-manager-v5" exceeded its progress deadline".
- This is because the generated pod grafana-operator-controller-manager-v5-5595b667f6-f44tr never reaches the Ready 1/1 state. The exact commands we used for these checks are listed after these steps.
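The checks in the last two steps were done roughly as follows; adjust the namespace to wherever the operator was installed (we tried both a dedicated namespace and openshift-operators):

oc get csv grafana-operator.v5.19.4 -n openshift-operators -o jsonpath='{.status.phase}'
oc describe deployment grafana-operator-controller-manager-v5 -n openshift-operators
oc get events -n openshift-operators --field-selector involvedObject.name=grafana-operator-controller-manager-v5-5595b667f6-f44tr
oc logs -n openshift-operators grafana-operator-controller-manager-v5-5595b667f6-f44tr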
Expected behavior
The grafana-operator-controller-manager-v5 pod should reach the Ready state. The same installation works correctly in several other labs and environments; we could not reproduce this bug anywhere else. For some reason the issue only occurs on this specific cluster.
Suspect component/Location where the bug might be occurring
The events for the pod show the probes failing:
Readiness probe error: Get "http://10.247.7.91:8081/readyz": read tcp 10.247.6.2:39806->10.247.7.91:8081: read: connection reset by peer body:
Liveness probe failed: Get "http://10.247.7.91:8081/healthz": EOF
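A way to double-check that the endpoint itself is unhealthy, rather than just the kubelet's view of it, is to hit the probe port directly from a short-lived pod on the pod network; the pod name and image below are arbitrary, and 10.247.7.91 is the operator pod's IP from the events above:

oc run probe-check -n test-monitoring --rm -it --restart=Never --image=registry.access.redhat.com/ubi9/ubi -- curl -v --max-time 5 http://10.247.7.91:8081/readyz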
The logs don't show any particular errors; the SIGTERM at the end most probably comes from the kubelet restarting the container after the failing liveness probe:
2025-09-12T14:19:38.879Z info setup GOMEMLIMIT is updated {"version": "v5.19.4", "package": "github.com/KimMachineGun/automemlimit/memlimit", "GOMEMLIMIT": 519045120, "previous": 9223372036854775807}
2025-09-12T14:19:38.880Z info setup maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS {"version": "v5.19.4"}
2025-09-12T14:19:38.977Z info setup label restrictions for cached resources are active {"version": "v5.19.4", "level": "safe"}
2025-09-12T14:19:38.999Z info setup operator running in namespace scoped mode {"version": "v5.19.4", "namespace": "test-monitoring"}
2025-09-12T14:19:39.064Z info setup starting manager {"version": "v5.19.4"}
2025-09-12T14:19:39.077Z info controller-runtime.metrics Starting metrics server
2025-09-12T14:19:39.077Z info controller-runtime.metrics Serving metrics server {"bindAddress": "0.0.0.0:9090", "secure": false}
2025-09-12T14:19:39.077Z info starting server {"name": "pprof", "addr": "[::]:8888"}
2025-09-12T14:19:39.077Z info starting server {"name": "health probe", "addr": "[::]:8081"}
I0912 14:19:39.179218 1 leaderelection.go:257] attempting to acquire leader lease test-monitoring/f75f3bba.integreatly.org...
I0912 14:19:55.211363 1 leaderelection.go:271] successfully acquired lease test-monitoring/f75f3bba.integreatly.org
2025-09-12T14:19:55.212Z info Starting EventSource {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana", "source": "kind source: *v1.ConfigMap"}
2025-09-12T14:19:55.213Z info Starting EventSource {"controller": "grafanadatasource", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDatasource", "source": "kind source: *v1beta1.GrafanaDatasource"}
2025-09-12T14:19:55.213Z info Starting EventSource {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana", "source": "kind source: *v1beta1.Grafana"}
2025-09-12T14:19:55.213Z info Starting EventSource {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana", "source": "kind source: *v1.Deployment"}
2025-09-12T14:19:55.214Z info Starting EventSource {"controller": "grafanadashboard", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDashboard", "source": "kind source: *v1.ConfigMap"}
2025-09-12T14:19:55.215Z info Starting EventSource {"controller": "grafanalibrarypanel", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaLibraryPanel", "source": "kind source: *v1.ConfigMap"}
2025-09-12T14:19:55.216Z info Starting EventSource {"controller": "grafanadashboard", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDashboard", "source": "kind source: *v1beta1.GrafanaDashboard"}
2025-09-12T14:19:55.216Z info Starting EventSource {"controller": "grafanafolder", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaFolder", "source": "kind source: *v1beta1.GrafanaFolder"}
2025-09-12T14:19:55.217Z info Starting EventSource {"controller": "grafanacontactpoint", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaContactPoint", "source": "kind source: *v1beta1.GrafanaContactPoint"}
2025-09-12T14:19:55.217Z info Starting EventSource {"controller": "grafanalibrarypanel", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaLibraryPanel", "source": "kind source: *v1beta1.GrafanaLibraryPanel"}
2025-09-12T14:19:55.217Z info Starting EventSource {"controller": "grafanaalertrulegroup", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaAlertRuleGroup", "source": "kind source: *v1beta1.GrafanaAlertRuleGroup"}
2025-09-12T14:19:55.218Z info Starting EventSource {"controller": "grafananotificationtemplate", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationTemplate", "source": "kind source: *v1beta1.GrafanaNotificationTemplate"}
2025-09-12T14:19:55.219Z info Starting EventSource {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy", "source": "kind source: *v1beta1.GrafanaNotificationPolicyRoute"}
2025-09-12T14:19:55.219Z info Starting EventSource {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy", "source": "kind source: *v1beta1.GrafanaNotificationPolicy"}
2025-09-12T14:19:55.219Z info Starting EventSource {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy", "source": "kind source: *v1beta1.GrafanaContactPoint"}
2025-09-12T14:19:55.220Z info Starting EventSource {"controller": "grafanamutetiming", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaMuteTiming", "source": "kind source: *v1beta1.GrafanaMuteTiming"}
2025-09-12T14:19:55.314Z info Starting Controller {"controller": "grafanadatasource", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDatasource"}
2025-09-12T14:19:55.314Z info Starting workers {"controller": "grafanadatasource", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDatasource", "worker count": 1}
2025-09-12T14:19:55.315Z info Starting Controller {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana"}
2025-09-12T14:19:55.315Z info Starting workers {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana", "worker count": 1}
2025-09-12T14:19:55.317Z info Starting Controller {"controller": "grafanalibrarypanel", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaLibraryPanel"}
2025-09-12T14:19:55.317Z info Starting workers {"controller": "grafanalibrarypanel", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaLibraryPanel", "worker count": 1}
2025-09-12T14:19:55.317Z info Starting Controller {"controller": "grafanadashboard", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDashboard"}
2025-09-12T14:19:55.317Z info Starting workers {"controller": "grafanadashboard", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDashboard", "worker count": 1}
2025-09-12T14:19:55.318Z info Starting Controller {"controller": "grafanafolder", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaFolder"}
2025-09-12T14:19:55.318Z info Starting workers {"controller": "grafanafolder", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaFolder", "worker count": 1}
2025-09-12T14:19:55.318Z info Starting Controller {"controller": "grafanaalertrulegroup", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaAlertRuleGroup"}
2025-09-12T14:19:55.318Z info Starting workers {"controller": "grafanaalertrulegroup", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaAlertRuleGroup", "worker count": 1}
2025-09-12T14:19:55.319Z info Starting Controller {"controller": "grafananotificationtemplate", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationTemplate"}
2025-09-12T14:19:55.319Z info Starting workers {"controller": "grafananotificationtemplate", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationTemplate", "worker count": 1}
2025-09-12T14:19:55.319Z info Starting Controller {"controller": "grafanacontactpoint", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaContactPoint"}
2025-09-12T14:19:55.319Z info Starting workers {"controller": "grafanacontactpoint", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaContactPoint", "worker count": 1}
2025-09-12T14:19:55.320Z info Starting Controller {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy"}
2025-09-12T14:19:55.320Z info Starting workers {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy", "worker count": 1}
2025-09-12T14:19:55.321Z info Starting Controller {"controller": "grafanamutetiming", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaMuteTiming"}
2025-09-12T14:19:55.321Z info Starting workers {"controller": "grafanamutetiming", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaMuteTiming", "worker count": 1}
2025-09-12T14:20:05.221Z info GrafanaReconciler Grafana status sync complete
2025-09-12T14:20:36.338Z info Stopping and waiting for non leader election runnables
2025-09-12T14:20:36.338Z info Stopping and waiting for leader election runnables
2025-09-12T14:20:36.338Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanamutetiming", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaMuteTiming"}
2025-09-12T14:20:36.338Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy"}
2025-09-12T14:20:36.338Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanacontactpoint", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaContactPoint"}
2025-09-12T14:20:36.338Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafananotificationtemplate", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationTemplate"}
2025-09-12T14:20:36.338Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanaalertrulegroup", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaAlertRuleGroup"}
2025-09-12T14:20:36.338Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanafolder", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaFolder"}
2025-09-12T14:20:36.339Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanadashboard", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDashboard"}
2025-09-12T14:20:36.339Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanalibrarypanel", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaLibraryPanel"}
2025-09-12T14:20:36.339Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana"}
2025-09-12T14:20:36.339Z info Shutdown signal received, waiting for all workers to finish {"controller": "grafanadatasource", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDatasource"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanamutetiming", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaMuteTiming"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafananotificationpolicy", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationPolicy"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafananotificationtemplate", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaNotificationTemplate"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanaalertrulegroup", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaAlertRuleGroup"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanacontactpoint", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaContactPoint"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanafolder", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaFolder"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanalibrarypanel", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaLibraryPanel"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanadashboard", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDashboard"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafanadatasource", "controllerGroup": "grafana.integreatly.org", "controllerKind": "GrafanaDatasource"}
2025-09-12T14:20:36.339Z info All workers finished {"controller": "grafana", "controllerGroup": "grafana.integreatly.org", "controllerKind": "Grafana"}
2025-09-12T14:20:36.339Z info Stopping and waiting for caches
2025-09-12T14:20:36.339Z info Stopping and waiting for webhooks
2025-09-12T14:20:36.339Z info Stopping and waiting for HTTP servers
2025-09-12T14:20:36.340Z info shutting down server {"name": "health probe", "addr": "[::]:8081"}
2025-09-12T14:20:36.340Z info shutting down server {"name": "pprof", "addr": "[::]:8888"}
2025-09-12T14:20:36.340Z info controller-runtime.metrics Shutting down metrics server with timeout of 1 minute
2025-09-12T14:20:36.340Z info Wait completed, proceeding to shutdown the manager
2025-09-12T14:20:36.340Z info setup SIGTERM request gotten, shutting down operator {"version": "v5.19.4"}
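The shutdown timing is consistent with the liveness probe settings in the deployment YAML below (initialDelaySeconds: 15, periodSeconds: 20, failureThreshold: 3): the third consecutive probe failure lands at about

15 s + 2 × 20 s = 55 s after container start

and the logs show startup at 14:19:38 with the SIGTERM at 14:20:36, i.e. roughly 58 s later.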
Screenshots
We can attach any required screenshots upon request.
Runtime (please complete the following information):
- OS: Linux/Fedora CoreOS 38.20230902.3.0
- Grafana Operator Version: grafana-operator v5.19.4
- Environment: OpenShift (specifically OKD with community operators) version 4.13.0-0.okd-2023-09-30-084937
- Deployment type: OpenShift OLM
- Other: OpenShift installed as UPI (static-ip virtualized machines)
Additional context
This bug was already reported in #1294, but it remains unsolved and the workarounds suggested there did not fix the problem.
The image used by the CrashLoopBackOff pod is "ghcr.io/grafana/grafana-operator@sha256:b678bc3faad11f4ef53c240e2a650ce939b9e8920dd72a47681b2efcadc7b837" and it is pulled successfully:
Successfully pulled image "ghcr.io/grafana/grafana-operator@sha256:b678bc3faad11f4ef53c240e2a650ce939b9e8920dd72a47681b2efcadc7b837" in 935.02102ms (935.044798ms including waiting)
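Given that the cluster is a virtualized UPI installation (see Runtime above) and the probe errors are connection resets and EOFs rather than timeouts, we also want to rule out an overlay-network problem between the kubelet and the pod. A minimal check would be to repeat the kubelet's request from the node itself; <node-with-ip-10.247.6.2> is a placeholder for the node whose address shows up as the probe source:

oc debug node/<node-with-ip-10.247.6.2> -- chroot /host curl -v --max-time 5 http://10.247.7.91:8081/readyz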
Here is also the YAML for the failing deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
annotations:
deployment.kubernetes.io/revision: '1'
resourceVersion: '392738075'
name: grafana-operator-controller-manager-v5
uid: e48d07b8-327a-4242-ad25-c5679e660dbf
creationTimestamp: '2025-09-15T07:39:01Z'
generation: 2
namespace: openshift-operators
ownerReferences:
- apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
name: grafana-operator.v5.19.4
uid: e5a3359d-8211-4b09-b1b9-414adf5e9961
controller: false
blockOwnerDeletion: false
labels:
app.kubernetes.io/managed-by: olm
app.kubernetes.io/name: grafana-operator
olm.deployment-spec-hash: 76c4999bf5
olm.owner: grafana-operator.v5.19.4
olm.owner.kind: ClusterServiceVersion
olm.owner.namespace: openshift-operators
operators.coreos.com/grafana-operator.openshift-operators: ''
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/managed-by: olm
app.kubernetes.io/name: grafana-operator
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/managed-by: olm
app.kubernetes.io/name: grafana-operator
annotations:
operators.operatorframework.io/builder: operator-sdk-v1.32.0
operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
olm.targetNamespaces: ''
operatorframework.io/properties: >-
{"properties":[{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"Grafana","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaAlertRuleGroup","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaContactPoint","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaDashboard","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaDatasource","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaFolder","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaLibraryPanel","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaMuteTiming","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaNotificationPolicy","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaNotificationPolicyRoute","version":"v1beta1"}},{"type":"olm.gvk","value":{"group":"grafana.integreatly.org","kind":"GrafanaNotificationTemplate","version":"v1beta1"}},{"type":"olm.package","value":{"packageName":"grafana-operator","version":"5.19.4"}}]}
repository: 'https://github.com/grafana/grafana-operator'
support: Grafana Labs
alm-examples: |-
[
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "Grafana",
"metadata": {
"labels": {
"dashboards": "grafana-a",
"folders": "grafana-a"
},
"name": "grafana-a"
},
"spec": {
"config": {
"auth": {
"disable_login_form": "false"
},
"log": {
"mode": "console"
},
"security": {
"admin_password": "start",
"admin_user": "root"
}
}
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaAlertRuleGroup",
"metadata": {
"name": "grafanaalertrulegroup-sample"
},
"spec": {
"folderRef": "test-folder-from-operator",
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana"
}
},
"interval": "5m",
"rules": [
{
"condition": "B",
"data": [
{
"datasourceUid": "grafanacloud-demoinfra-prom",
"model": {
"datasource": {
"type": "prometheus",
"uid": "grafanacloud-demoinfra-prom"
},
"editorMode": "code",
"expr": "weather_temp_c{}",
"instant": true,
"intervalMs": 1000,
"legendFormat": "__auto",
"maxDataPoints": 43200,
"range": false,
"refId": "A"
},
"refId": "A",
"relativeTimeRange": {
"from": 600
}
},
{
"datasourceUid": "__expr__",
"model": {
"conditions": [
{
"evaluator": {
"params": [
0
],
"type": "gt"
},
"operator": {
"type": "and"
},
"query": {
"params": [
"C"
]
},
"reducer": {
"params": [],
"type": "last"
},
"type": "query"
}
],
"datasource": {
"type": "__expr__",
"uid": "__expr__"
},
"expression": "A",
"intervalMs": 1000,
"maxDataPoints": 43200,
"refId": "B",
"type": "threshold"
},
"refId": "B",
"relativeTimeRange": {
"from": 600
}
}
],
"execErrState": "Error",
"for": "5m0s",
"noDataState": "NoData",
"title": "Temperature below freezing",
"uid": "4843de5c-4f8a-4af0-9509-23526a04faf8"
}
]
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaContactPoint",
"metadata": {
"labels": {
"app.kubernetes.io/created-by": "grafana-operator",
"app.kubernetes.io/instance": "grafanacontactpoint-sample",
"app.kubernetes.io/managed-by": "kustomize",
"app.kubernetes.io/name": "grafanacontactpoint",
"app.kubernetes.io/part-of": "grafana-operator"
},
"name": "grafanacontactpoint-sample"
},
"spec": {
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana-a"
}
},
"name": "grafanacontactpoint-sample",
"settings": {
"email": null
},
"type": "email"
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaDashboard",
"metadata": {
"name": "grafanadashboard-sample"
},
"spec": {
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana-a"
}
},
"json": "{\n\n \"id\": null,\n \"title\": \"Simple Dashboard\",\n \"tags\": [],\n \"style\": \"dark\",\n \"timezone\": \"browser\",\n \"editable\": true,\n \"hideControls\": false,\n \"graphTooltip\": 1,\n \"panels\": [],\n \"time\": {\n \"from\": \"now-6h\",\n \"to\": \"now\"\n },\n \"timepicker\": {\n \"time_options\": [],\n \"refresh_intervals\": []\n },\n \"templating\": {\n \"list\": []\n },\n \"annotations\": {\n \"list\": []\n },\n \"refresh\": \"5s\",\n \"schemaVersion\": 17,\n \"version\": 0,\n \"links\": []\n}\n"
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaDatasource",
"metadata": {
"name": "grafanadatasource-sample"
},
"spec": {
"datasource": {
"access": "proxy",
"isDefault": true,
"jsonData": {
"timeInterval": "5s",
"tlsSkipVerify": true
},
"name": "prometheus",
"type": "prometheus",
"url": "http://prometheus-service:9090"
},
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana-a"
}
},
"plugins": [
{
"name": "grafana-clock-panel",
"version": "1.3.0"
}
]
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaFolder",
"metadata": {
"name": "grafanafolder-sample"
},
"spec": {
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana-a"
}
},
"title": "Example Folder"
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaLibraryPanel",
"metadata": {
"name": "grafana-library-panel-inline-envs"
},
"spec": {
"envs": [
{
"name": "CUSTOM_RANGE_ENV",
"value": "now - 12h"
}
],
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana"
}
},
"jsonnet": "local myRange = std.extVar('CUSTOM_RANGE_ENV'); {\n\n model: {}\n}\n",
"plugins": [
{
"name": "grafana-piechart-panel",
"version": "1.3.9"
}
]
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaMuteTiming",
"metadata": {
"name": "mutetiming-sample"
},
"spec": {
"editable": false,
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana"
}
},
"name": "mutetiming-sample",
"time_intervals": [
{
"days_of_month": [
"1",
"15"
],
"location": "Asia/Shanghai",
"times": [
{
"end_time": "06:00",
"start_time": "00:00"
}
],
"weekdays": [
"saturday"
]
}
]
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaNotificationPolicy",
"metadata": {
"name": "grafananotificationpolicy-sample"
},
"spec": {
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana"
}
},
"route": {
"group_by": [
"grafana_folder",
"alertname"
],
"receiver": "Grafana Cloud OnCall",
"routes": [
{
"object_matchers": [
[
"foo",
"=",
"bar"
]
],
"receiver": "grafana-default-email",
"routes": [
{
"object_matchers": [
[
"severity",
"=",
"critical"
]
],
"receiver": "Grafana Cloud OnCall"
}
]
}
]
}
}
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaNotificationPolicyRoute",
"metadata": {
"labels": {
"app.kubernetes.io/created-by": "grafana-operator",
"app.kubernetes.io/instance": "grafananotificationpolicyroute-sample",
"app.kubernetes.io/managed-by": "kustomize",
"app.kubernetes.io/name": "grafananotificationpolicyroute",
"app.kubernetes.io/part-of": "grafana-operator"
},
"name": "grafananotificationpolicyroute-sample"
},
"spec": null
},
{
"apiVersion": "grafana.integreatly.org/v1beta1",
"kind": "GrafanaNotificationTemplate",
"metadata": {
"name": "test"
},
"spec": {
"instanceSelector": {
"matchLabels": {
"dashboards": "grafana"
}
},
"name": "test",
"template": "{{ define \"SlackAlert\" }}\n [{{.Status}}] {{ .Labels.alertname }}\n {{ .Annotations.AlertValues }}\n{{ end }}\n\n{{ define \"SlackAlertMessage\" }}\n {{ if gt (len .Alerts.Firing) 0 }}\n {{ len .Alerts.Firing }} firing:\n {{ range .Alerts.Firing }} {{ template \"SlackAlert\" . }} {{ end }}\n {{ end }}\n {{ if gt (len .Alerts.Resolved) 0 }}\n {{ len .Alerts.Resolved }} resolved:\n {{ range .Alerts.Resolved }} {{ template \"SlackAlert\" . }} {{ end }}\n {{ end }}\n{{ end }}\n\n{{ template \"SlackAlertMessage\" . }}\n"
}
}
]
capabilities: Basic Install
olm.operatorNamespace: openshift-operators
containerImage: >-
ghcr.io/grafana/grafana-operator@sha256:b678bc3faad11f4ef53c240e2a650ce939b9e8920dd72a47681b2efcadc7b837
createdAt: '2025-08-21T08:38:00Z'
categories: Monitoring
description: 'Deploys and manages Grafana instances, dashboards and data sources'
olm.operatorGroup: global-operators
spec:
containers:
- resources:
limits:
cpu: 200m
memory: 550Mi
requests:
cpu: 100m
memory: 20Mi
readinessProbe:
httpGet:
path: /readyz
port: 8081
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
name: manager
livenessProbe:
httpGet:
path: /healthz
port: 8081
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 1
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
env:
- name: RELATED_IMAGE_GRAFANA
value: >-
docker.io/grafana/grafana@sha256:6ac590e7cabc2fbe8d7b8fc1ce9c9f0582177b334e0df9c927ebd9670469440f
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: 'metadata.annotations[''olm.targetNamespaces'']'
- name: OPERATOR_CONDITION_NAME
value: grafana-operator.v5.19.4
securityContext:
allowPrivilegeEscalation: false
ports:
- name: metrics
containerPort: 9090
protocol: TCP
- name: pprof
containerPort: 8888
protocol: TCP
imagePullPolicy: Always
terminationMessagePolicy: File
image: >-
ghcr.io/grafana/grafana-operator@sha256:b678bc3faad11f4ef53c240e2a650ce939b9e8920dd72a47681b2efcadc7b837
args:
- '--health-probe-bind-address=:8081'
- '--metrics-bind-address=0.0.0.0:9090'
- '--leader-elect'
restartPolicy: Always
terminationGracePeriodSeconds: 10
dnsPolicy: ClusterFirst
serviceAccountName: grafana-operator-controller-manager
serviceAccount: grafana-operator-controller-manager
securityContext:
runAsNonRoot: true
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 1
progressDeadlineSeconds: 600
status:
observedGeneration: 2
replicas: 1
updatedReplicas: 1
unavailableReplicas: 1
conditions:
- type: Available
status: 'False'
lastUpdateTime: '2025-09-15T07:39:01Z'
lastTransitionTime: '2025-09-15T07:39:01Z'
reason: MinimumReplicasUnavailable
message: Deployment does not have minimum availability.
- type: Progressing
status: 'True'
lastUpdateTime: '2025-09-15T07:39:01Z'
lastTransitionTime: '2025-09-15T07:39:01Z'
reason: ReplicaSetUpdated
message: >-
ReplicaSet "grafana-operator-controller-manager-v5-5595b667f6" is
progressing.

We have already tried removing and reinstalling the operator a few times; unfortunately it always gets stuck at this stage. Any help in solving this issue would be greatly appreciated; so far we couldn't find any successful workarounds. Thank you!
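One diagnostic we intend to try next (a sketch only, not yet verified on this cluster) is relaxing the tight timeoutSeconds: 1 on both probes. Since OLM reverts edits made directly to the Deployment, the patch has to go through the CSV; the JSON-patch paths assume the manager deployment is the first entry in the CSV's install spec:

oc patch csv grafana-operator.v5.19.4 -n openshift-operators --type=json -p '[
{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/readinessProbe/timeoutSeconds", "value": 5},
{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/template/spec/containers/0/livenessProbe/timeoutSeconds", "value": 5}
]'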