
Conversation

crstrn13 (Contributor) commented Oct 22, 2025

Introduces a complete Tekton pipeline for provisioning and managing OCP clusters on AWS using OSIA. It includes provision-aws-ocp and delete-aws-ocp tasks that handle the cluster lifecycle, along with a full deploy-ocp pipeline that provisions a cluster, installs Kuadrant/RHCL, runs tests, and performs cleanup (a sketch of the overall shape follows).
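For orientation, a minimal sketch of that pipeline shape. Only provision-aws-ocp, delete-aws-ocp, and deploy-ocp are named in this PR; the workspace name, intermediate tasks, and ordering are assumptions for illustration:

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: deploy-ocp
spec:
  workspaces:
    - name: shared-workspace        # assumed: carries the osia install output
  tasks:
    - name: provision
      taskRef:
        name: provision-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
    - name: install-kuadrant        # assumed task: installs Kuadrant/RHCL
      runAfter: [provision]
      taskRef:
        name: install-kuadrant
    - name: run-tests               # assumed task: runs the test suite
      runAfter: [install-kuadrant]
      taskRef:
        name: run-tests
    - name: cleanup
      runAfter: [run-tests]
      taskRef:
        name: delete-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
```

Note that with cleanup modeled as an ordinary task it only runs when the tasks before it succeed; this becomes relevant in the discussion below.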

zkraus commented Oct 27, 2025

@crstrn13 Could you please provide an appropriate commit message and, especially, a PR description?



```sh
# Run osia clean
osia clean \
```
Contributor

Did you test this clean command? I think it will not work, as it does not have any information on what to delete; that information is usually stored in the git directory.

crstrn13 (Contributor Author) commented Nov 24, 2025

It is working, as it has the templates on the shared-workspace from the installation stage.

Contributor

Ah, so it stores the data on the osia-settings-volume, but how, if it is readOnly? 🤔
I will have to check how this works, because I always assumed osia looks for a directory named after the cluster to gather information on what VMs, networks, and DNS records to delete.

azgabur (Contributor) commented Nov 25, 2025

I had a talk with @mdujava and, from what I understand, you need the output of osia install (the directory named after the cluster) available to know what resources to delete. ... I now see that the task is in the same pipeline as the deploy, so it would have access to it. But if cleanup is set to false, there will be no nice way to delete the cluster. So I would either remove the clean option and change delete-aws-ocp to always run, even if the previous steps fail, so the cleanup is always performed (see the sketch below); or add git support, meaning cloning the cluster-management repo and using it to persist the output as we currently do with kua-* clusters, and add a standalone cleanup pipeline.
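For reference, making delete-aws-ocp always run maps onto Tekton's finally tasks, which execute after all other pipeline tasks finish, regardless of their outcome. A minimal sketch reusing this PR's task names (the workspace wiring and everything else is assumed):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: deploy-ocp
spec:
  workspaces:
    - name: shared-workspace
  tasks:
    - name: provision
      taskRef:
        name: provision-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
    # ... install and test tasks ...
  finally:                          # runs even when the tasks above fail
    - name: cleanup
      taskRef:
        name: delete-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
```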

Collaborator

An alternative solution might be to store the required files in an AWS S3 bucket (a hypothetical shape is sketched below), but that's probably more complicated than mimicking what we do with kua-* clusters.

For the record, if the cluster is still up and running, you can log into it and craft the files required for cluster removal. But if the cluster is not accessible for whatever reason, you need the files produced by osia install to delete the cluster, or you need to go to AWS and remove the cluster resources manually.
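If the S3 route were taken, one hypothetical shape would be an extra step in provision-aws-ocp that uploads the osia install output; the bucket, parameter, and image names below are placeholders, not from this PR:

```yaml
steps:
  - name: persist-cluster-files
    image: amazon/aws-cli           # assumed image providing the aws CLI
    script: |
      # Sync the osia install output (the directory named after the cluster)
      # to a bucket so a standalone cleanup pipeline could fetch it later.
      aws s3 sync \
        "$(workspaces.shared-workspace.path)/$(params.cluster-name)" \
        "s3://example-cluster-state/$(params.cluster-name)"
```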

Collaborator

To be clear, creating the files manually is meant as an emergency measure only. We definitely need to store the files somewhere and use them for cluster removal. Whether it is git or S3 or elsewhere does not really matter that much. Git is supported now, so we can use it; I have no issues with that.

Contributor Author

The cluster files are stored in the shared-workspace for the pipeline; more specifically, here.

Contributor Author

@azgabur The osia-default-settings volume is for the settings.yaml file.

Contributor

The problem with the shared-workspace is that it will get cleaned up at the end of the pipeline run. So if the cleanup task won't run, the cluster files will be lost.
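This matches the usual Tekton workspace lifecycle: if the shared workspace is backed by a volumeClaimTemplate (an assumption about this setup), the PVC is owned by the PipelineRun and is garbage-collected together with it, taking the cluster files along. A sketch:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: deploy-ocp-run-
spec:
  pipelineRef:
    name: deploy-ocp
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:          # PVC lives only as long as this PipelineRun
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```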

crstrn13 (Contributor Author) commented Dec 1, 2025

That's true. I will configure the git support somehow. Until now I have been cleaning up the resources manually, but that's time-consuming and error-prone.

Signed-off-by: Alexander Cristurean <[email protected]>
gitguardian bot commented Dec 1, 2025

✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we highly recommend revoking them. While these secrets were previously flagged, we no longer have a reference to the specific commits where they were detected. Once a secret has been leaked into a git repository, you should consider it compromised, even if it was deleted immediately.

crstrn13 force-pushed the aws_ocp_pipeline branch 2 times, most recently from 7fc9fd4 to 8edc028 on December 2, 2025 at 16:03
Signed-off-by: Alexander Cristurean <[email protected]>