Makes Ververica Platform resources Kubernetes-native! Defines CustomResourceDefinitions that map Ververica Platform resources to K8s resources.
Built for Ververica Platform version 2.x.
- More about the Ververica Platform
- Ververica Platform Docs
Since the resource names of K8s and the Ververica Platform somewhat clash, the
custom VP resources are all prefixed with `Vp`.
- `DeploymentTarget` -> `VpDeploymentTarget`
- `Deployment` -> `VpDeployment`
- `Namespace` -> `VpNamespace`
- `Savepoint` -> `VpSavepoint`
- `Event` -> native K8s `Event`
- `Job`
- `DeploymentDefaults`
- `SecretValue`
- `Status`
To avoid naming conflicts, and for simplicity, the VP metadata and spec fields
are nested under the top-level `spec` field of the K8s resource.
Look in docs/mappings for information on each supported resource.
Please have a look at the docs for information on getting started using
the operator.
This operator works with both the Community and Enterprise editions of the Ververica Platform, with the following caveats:

- `VpNamespace`s are not supported by the Community Edition, so the manager will not register those resources
- The `spec.metadata.namespace` field must either be left unset or set explicitly to `default` for all `Vp` resources
Find out more about the editions here.
To run the binary directly after building, run `./bin/manager`.
Flags:
- `--help` prints usage
- `--vvp-url=http://localhost:8081` the URL, without trailing slash, for the Ververica Platform
- `--vvp-edition=enterprise` the Ververica Platform edition to support. See Editions for more.
- `--debug` debug mode for logging
- `--enable-leader-election` ensures only one manager is active with a multi-replica deployment
- `--metrics-addr=:8080` address to bind metrics to
- `--watch-namespace=all-namespaces` the namespace to watch resources on
- `[--env-file]` the path to an environment (`.env`) file to be loaded
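As a hypothetical sketch of how these flags could be declared with Go's standard `flag` package (the operator's actual `main` wires them up in its own way; the `parseManagerFlags` helper is invented for illustration):

```go
package main

import (
	"flag"
	"fmt"
)

// managerFlags mirrors the documented flags and their defaults.
type managerFlags struct {
	vvpURL               string
	vvpEdition           string
	metricsAddr          string
	watchNamespace       string
	debug                bool
	enableLeaderElection bool
}

// parseManagerFlags parses args (excluding the program name) into managerFlags.
func parseManagerFlags(args []string) (*managerFlags, error) {
	var f managerFlags
	fs := flag.NewFlagSet("manager", flag.ContinueOnError)
	fs.StringVar(&f.vvpURL, "vvp-url", "http://localhost:8081", "Ververica Platform URL, without trailing slash")
	fs.StringVar(&f.vvpEdition, "vvp-edition", "enterprise", "Ververica Platform edition to support")
	fs.StringVar(&f.metricsAddr, "metrics-addr", ":8080", "address to bind metrics to")
	fs.StringVar(&f.watchNamespace, "watch-namespace", "all-namespaces", "namespace to watch resources on")
	fs.BoolVar(&f.debug, "debug", false, "debug mode for logging")
	fs.BoolVar(&f.enableLeaderElection, "enable-leader-election", false, "ensure only one active manager")
	if err := fs.Parse(args); err != nil {
		return nil, err
	}
	return &f, nil
}

func main() {
	f, _ := parseManagerFlags([]string{"--vvp-url=http://vvp:8081", "--debug"})
	fmt.Println(f.vvpURL, f.debug) // http://vvp:8081 true
}
```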
For authorization with the AppManager's API, a token is needed. This can be provided in the environment on either a per-namespace or one-token-to-rule-them-all basis. If it is not provided in the environment, an "owner" token will be created for each namespace that resources are managed in.
Specifying in the environment is a good way to integrate with namespaces that aren't defined in Kubernetes.
Environment:
- `APPMANAGER_API_TOKEN_{NAMESPACE}` a token to use for resources in a specific Ververica Platform namespace, upper-cased
- `APPMANAGER_API_TOKEN` if no namespace-specific token can be found, this value will be used
Images are published to Docker Hub.
- The `latest` tag always refers to the current HEAD of the master branch.
- Each master commit hash is also tagged and published.
- Git tags are published with the same tag.
A Helm chart for the operator lives in ./charts/vp-k8s-operator,
which sets up a deployment with a metrics server, RBAC policies, CRDs, and, optionally,
an RBAC proxy for the metrics over HTTPS.
The CRDs are managed in a separate chart (./charts/vp-k8s-operator-crds), which also
needs to be installed.
Built using kubebuilder.
kind is used for running a local test cluster,
though something like minikube will also do.
More on the design of the controller and its resources can be found in docs/design.md.
Also built as a Go module - no vendor files here.
System pre-requisites:

- `go` >= 1.14.x
- `make` >= 4
- `kubebuilder` == v2.2.0
- `docker` >= 19
- `kind` >= 0.6.0
- `make` alias for `make manager`
- `make manager` builds the entire app binary
- `make run` runs the entire app locally
- `make manifests` builds the CRDs from `./config/crd`
- `make install` installs the CRDs from `./config/crd` on the cluster
- `make deploy` installs the entire app on the cluster
- `make docker-build` builds the docker image
- `make docker-push` pushes the built docker image
- `make generate` generates the controller code from the `./api` package
- `make swagger-gen` generates the swagger code
- `make lint` runs linting on the source code
- `make fmt` runs `go fmt` on the package
- `make test` runs the test suites with coverage
- `make patch-image` sets the current version as the default deployment image tag
- `make kustomize-build` builds the default k8s resources for deployment
- `make test-cluster-create` initializes a cluster for testing, using kind
- `make test-cluster-delete` deletes the testing cluster
- `make test-cluster-setup` installs cert-manager, the Community VVP, the vp-k8s-crds, and the vp-k8s-operator on the test cluster
- `make test-cluster-instal-chart` builds the operator and installs it on the test cluster from the local chart
- `make test-cluster-instal-crds` installs the vp-k8s-operator CRDs on the test cluster from the local chart
To use the default test cluster, you'll need to set a `KUBECONFIG` env var pointing to it.
godotenv automatically loads this when running main.
The API Clients are auto-generated using the Swagger Codegen utility.
The appmanager-api Swagger file is from the live API documentation (available at ${VP_URL}/api/swagger),
but the generated client needs a few updates to work correctly.
The `optional` package is missing from many of the imports in the generated code, and must be added manually:
```go
package ververicaplatformapi

import (
	// ...
	"github.com/antihax/optional"
	// ...
)
```

Affected files:

- `api_event_resource.go`
- `api_job_resource.go`
- `api_savepoint_resource.go`
Type Changes:
- `model_pods.go` needs to be updated with the proper Kubernetes types
- `model_volume_and_mount.go` needs to be updated with the proper Kubernetes types
There is also a bug where the codegen cannot handle an empty Swagger type representing the `any` type, so
you must manually change `model_any.go` to:

```go
package appmanagerapi

type Any interface{}
```

You'll also have to change any usages of this type in structs to be embedded, instead of referenced by pointer, namely in:

- `model_json_patch_generic.go`
The images are built in two steps:

- The `build.Dockerfile` image is a full development environment for running tests, linting, and building the source with the correct tooling. It can also be used for development if you like; just override the entrypoint.
- The build image is then passed as a build arg to the main `Dockerfile`, which builds the manager binary and copies it into an image for distribution.
Other OSS that influenced this project: