
Conversation

@wesjdj (Contributor) commented Oct 23, 2025

Adds an option to the Renku Helm chart that restricts access to the Keycloak admin console to localhost.

This prevents the Keycloak admin console from being reached over the internet, while admins can still access it by port-forwarding the Keycloak pod on port 8080.

The secureAdminConsole option is set to false by default to maintain current default behaviour, but we recommend setting it to true for production instances.
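For illustration, enabling this in a deployment's values file might look like the following (a sketch; the key path matches the extra-values flag used for this PR's test deployment, and the port-forward target name is an assumption):

```yaml
# Deployment values override (sketch): restrict the Keycloak admin
# console to localhost for a production instance.
global:
  keycloak:
    secureAdminConsole: true
```

Admins would then reach the console with something like `kubectl port-forward pod/<keycloak-pod> 8080:8080` and browse to http://localhost:8080 (the pod name is deployment-specific).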

/deploy #extra-values=global.keycloak.secureAdminConsole=false #notest

@wesjdj wesjdj requested a review from a team as a code owner October 23, 2025 10:18
@RenkuBot (Collaborator)

You can access the deployment of this PR at https://ci-renku-4219.dev.renku.ch

@rokroskar (Member)

Can we not limit it to be available only from certain networks instead of requiring port-forwarding? If something goes wrong in the k8s / kubectl chain we can be effectively locked out.

@wesjdj wesjdj requested a review from a team as a code owner October 23, 2025 12:53
@aledegano (Contributor)

> Can we not limit it to be available only from certain networks instead of requiring port-forwarding? If something goes wrong in the k8s / kubectl chain we can be effectively locked out.

To me this solution seems a lot more secure and reliable: if we cannot use Kubernetes we are screwed anyway; having Keycloak admin access but not Kubernetes access won't do any good.
And if we instead make it reachable only from certain networks, that would mean (for us) that anyone on the ETH network could reach it, which is not much of a filter, at least given how rarely we need KC admin versus how risky it is to have it reachable.

@wesjdj (Contributor, Author) commented Oct 23, 2025

> Can we not limit it to be available only from certain networks instead of requiring port-forwarding? If something goes wrong in the k8s / kubectl chain we can be effectively locked out.

> To me this solution seems a lot more secure and reliable: if we cannot use Kubernetes we are screwed anyway; having Keycloak admin access but not Kubernetes access won't do any good. And if we instead make it reachable only from certain networks, that would mean (for us) that anyone on the ETH network could reach it, which is not much of a filter, at least given how rarely we need KC admin versus how risky it is to have it reachable.

In addition to this, we'd have to split out the single core Renku ingress in order to achieve this, since you can't filter IPs at the ingress path level; it has to be done at the Ingress level.
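For context: with ingress-nginx, source-IP filtering is configured via an annotation on the Ingress object, so it applies to every path that Ingress serves rather than to individual paths (a sketch; the CIDR is a placeholder):

```yaml
metadata:
  annotations:
    # Applies to ALL paths behind this Ingress, not just the Keycloak
    # admin console path -- which is why the shared Renku ingress would
    # have to be split first.
    nginx.ingress.kubernetes.io/whitelist-source-range: "192.0.2.0/24"
```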

The REST API endpoint is still publicly accessible, so it's not that we'd be completely stuck.

If we lost access to Kubernetes but still needed Keycloak admin console access, and GitOps still works, we could simply change the secureAdminConsole value in the deployment values.

@rokroskar (Member)

> if we cannot use Kubernetes we are screwed anyway; having Keycloak admin access but not Kubernetes access won't do any good.

Maybe I have Rancher PTSD... when the Rancher cluster was down we couldn't access the services through k8s, but they were actually all happily working. We don't really have that issue anymore, I know.

> In addition to this, we'd have to split out the single core Renku ingress in order to achieve this, since you can't filter IPs at the ingress path level; it has to be done at the Ingress level.

Yes, I assumed this was the main difficulty.

Comment on lines +310 to +311
- configMapRef:
    name: keycloak-admin-console
Member:

Any reason not to use extraEnv directly? If the value changes and helm upgrade is executed, the extraEnvFrom reference will not update, which means the new value will not trigger a pod template re-render.

Contributor Author:

We discussed whether we wanted to make the change as an extraEnv entry in our deployment values, but that would mean copying all the extraEnv entries currently in the Renku chart's values.yaml into our deployment values. The other option was to modify the Renku chart. The consensus was to make the change in the chart, but the value change not triggering a pod restart is not ideal.

@aledegano a compromise could be keeping the new configMapRef in the values.yaml, but keeping it commented out, and using that as the secureAdminConsole toggle instead?

Member:

I don't understand: if you add the definition to extraEnv just above, Helm's template rendering can use the value, so I don't see why it's not just added there.

Contributor Author:

The issue is that extraEnv in the Renku chart's values.yaml is defined as a multi-line string, not as a YAML list. This means we can't simply append to it from our deployment values.

When you override a string value in Helm, it completely replaces the original. So if we add our new environment variable to extraEnv in our deployment values, we would lose all the existing environment variables defined in the chart's values.yaml.
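To illustrate with abbreviated values (a sketch; only KC_DB is shown from the chart defaults): a deployment values file that sets only the new variable would wipe out the chart's entries, because Helm replaces scalar values wholesale rather than merging them:

```yaml
# Chart values.yaml (abbreviated):
#   extraEnv: |
#     - name: KC_DB
#       value: postgres
#
# Deployment values override -- this REPLACES the whole string above,
# so KC_DB (and every other chart-provided entry) is silently dropped:
extraEnv: |
  - name: KC_HOSTNAME_ADMIN_URL
    value: http://localhost:8080
```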

To use extraEnv from our deployment values, we'd have to copy all existing environment variables from the chart plus add our new one. That's fragile because if the chart updates those values in the future, our deployment values won't pick up those changes.

That's why modifying the chart directly (using extraEnvFrom with a ConfigMap) seemed like the cleaner option, though I agree the pod restart issue with extraEnvFrom is not ideal.

Member:

To make sure things are clear: what I am suggesting is simply to set:

  extraEnv: |
    - name: KC_DB
      value: postgres
    - name: KC_DB_PORT
      value: "5432"
    - name: PROXY_ADDRESS_FORWARDING
      value: "true"
    - name: JAVA_OPTS_APPEND
      value: >-
        -Djgroups.dns.query={{ include "keycloak.fullname" . }}-headless
    - name: KC_DB_POOL_MAX_SIZE
      value: "10"
    {{- if .Values.global.keycloak.secureAdminConsole }}
    - name: KC_HOSTNAME_ADMIN_URL
      value: http://localhost:8080
    {{- end -}}

in the values.yaml here. This requires no further change in the deployment.


6 participants