Hello,
I am currently deploying OpenShift Lightspeed using the Lightspeed Operator.
I would like to configure custom environment variables for the pods managed by the operator (e.g. ols-api, ols-console, ols-data-collector).
At the moment, any manual changes to the underlying Deployment resources are reverted by the operator’s reconciliation loop. However, the OLSConfig Custom Resource does not seem to expose a way to define custom environment variables.
Use case:
I'm working behind a proxy, and the following OLSConfig is not working:
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
      - credentialsSecretRef:
          name: credentials
        deploymentName: MYTEST
        models:
          - name: gpt-4o
        name: myAzure
        type: azure_openai
        url: 'https://MYTEST.openai.azure.com/'
  ols:
    additionalCAConfigMapRef:
      name: trusted-certs
    defaultModel: gpt-4o
    defaultProvider: myAzure
    logLevel: DEBUG
    proxyConfig:
      proxyCACertificate:
        name: trusted-certs
      proxyURL: 'http://proxy.fake.url:8080/'
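For context, this is a minimal sketch of what the referenced trusted-certs ConfigMap might look like; the key name caCertFileName is taken from the mount path used in the workaround further below, and the certificate body is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: trusted-certs
data:
  caCertFileName: |
    -----BEGIN CERTIFICATE-----
    # ... proxy CA certificate (placeholder) ...
    -----END CERTIFICATE-----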
With log level DEBUG I get this error:
2025-08-18 18:02:35,337 [httpcore.proxy:_trace.py:47] DEBUG: start_tls.failed exception=ConnectError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)'))
2025-08-18 18:02:35,337 [openai._base_client:_base_client.py:992] DEBUG: Encountered Exception
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.11/site-packages/httpx/_transports/default.py", line 236, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
    raise exc from None
  File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/http_proxy.py", line 316, in handle_request
    stream = stream.start_tls(**kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_sync/http11.py", line 376, in start_tls
    return self._stream.start_tls(ssl_context, server_hostname, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/httpcore/_backends/sync.py", line 154, in start_tls
    with map_exceptions(exc_map):
  File "/usr/lib64/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)

The above exception was the direct cause of the following exception:

The additionalCAConfigMapRef seems to be ignored by the httpcore Python library. I can work around this by setting some environment variables on the ReplicaSet (not on the Deployment, because the operator reconciles that):
- name: SSL_CERT_FILE
  value: /etc/certs/ols-additional-ca/caCertFileName
- name: REQUESTS_CA_BUNDLE
  value: /etc/certs/ols-additional-ca/caCertFileName
/etc/certs/ols-additional-ca/caCertFileName contains the content of the additionalCAConfigMapRef. With that change, everything works.
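For reference, a rough sketch of how this workaround ends up looking in the rendered pod spec; the container and volume names here are placeholders, and the mount of the additional CA ConfigMap is assumed to already be created by the operator:

containers:
  - name: lightspeed-service-api        # placeholder container name
    env:
      - name: SSL_CERT_FILE
        value: /etc/certs/ols-additional-ca/caCertFileName
      - name: REQUESTS_CA_BUNDLE
        value: /etc/certs/ols-additional-ca/caCertFileName
    volumeMounts:
      - name: ols-additional-ca         # placeholder volume name
        mountPath: /etc/certs/ols-additional-ca
        readOnly: true
volumes:
  - name: ols-additional-ca
    configMap:
      name: trusted-certs               # the additionalCAConfigMapRef ConfigMap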
Being able to inject environment variables through the OLSConfig would align with common Operator patterns (similar to how resource requests/limits or node selectors can already be customized).
Proposal:
Extend the OLSConfig API with an optional field, for example:
spec:
  ols:
    deployment:
      api:
        env:
          - name: HTTP_PROXY
            value: http://proxy.example.com:8080/
          - name: FEATURE_FLAG
            value: "true"
      console:
        env:
          - name: LOG_LEVEL
            value: debug
The operator should then propagate these environment variables into the corresponding pods.
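For illustration, with the hypothetical spec.ols.deployment.api.env field above, the operator could render something like the following into the api container of the managed Deployment (container name is a placeholder):

containers:
  - name: lightspeed-service-api        # placeholder container name
    env:
      - name: HTTP_PROXY
        value: http://proxy.example.com:8080/
      - name: FEATURE_FLAG
        value: "true"

User-supplied variables would be appended to the variables the operator already sets, so existing defaults keep working.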
Benefits:
- Greater flexibility for admins to adapt Lightspeed to different environments.
- Avoids unsupported manual patching of Deployments.
- Consistent with Operator best practices.