OCP deploy on AWS through osia #137
Conversation
@crstrn13 Could you please provide an appropriate commit message and, especially, a PR description?
7896124 to fff22cc
tasks/infra/delete-ocp-aws.yaml (Outdated)

    # Run osia clean
    osia clean \
Did you test this clean command? I think it will not work, as it does not have any information on what to delete, which is usually stored in the git directory.
It is working, as it has the templates on the shared-workspace from the installation stage.
Ah, so it stores the data on the osia-settings-volume, but how, if it is readOnly 🤔
I will have to check how this works, because I always assumed osia looks for a directory named after the cluster to gather information on what VMs, networks, and DNS records to delete.
I had a talk with @mdujava, and from what I understand, you need to have the output of osia install (the directory with the cluster name) available to know what resources to delete. ... I now see that the task is in the same pipeline as deploy, so it would have access to it. But if cleanup is set to false, there will be no nice way to delete the cluster. So I would either remove the clean option and change delete-aws-ocp to always run, even if the previous steps fail, so that cleanup is always performed. Or add git support, meaning cloning the cluster-management repo and using it to persist the output as we currently do with kua-* clusters, and add a standalone cleanup pipeline.
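The "always run even if the previous steps fail" option maps naturally onto Tekton's finally tasks, which execute after all pipeline tasks complete regardless of their outcome. A minimal sketch, assuming task and workspace names along the lines of this PR (they are illustrative, not copied from it):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: deploy-ocp
spec:
  workspaces:
    - name: shared-workspace        # holds the osia install output
  tasks:
    - name: provision-aws-ocp
      taskRef:
        name: provision-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
    # ... install Kuadrant/RHCL, run tests ...
  finally:
    # finally tasks run even when a preceding task fails,
    # so the cluster is always torn down
    - name: delete-aws-ocp
      taskRef:
        name: delete-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
```

Note this only helps while the pipeline run (and its workspace) still exists; it does not replace persisting the cluster files externally.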
An alternative solution might be to store the required files in an AWS S3 bucket - but that's probably more complicated than mimicking what we do with kua-* clusters.
For the record, if the cluster is still up and running, you can log into it and craft the files required for cluster removal. But if the cluster is not accessible for whatever reason, you need the files produced by osia install to delete the cluster - or you need to go to AWS and remove the cluster resources manually.
To be clear, creating the files manually is meant as an emergency measure only. We definitely need to store the files somewhere and use them for cluster removal. Whether it is git, S3, or elsewhere does not really matter that much. Git is supported now, so we can use it; I have no issues with that.
The cluster files are stored in the shared-workspace for the pipeline. More specifically here
@azgabur osia-default-settings volume is for the settings.yaml file.
The problem with shared-workspace is that it gets cleaned up at the end of the pipeline run. So if the cleanup task does not run, the cluster files will be lost.
That's true. I will configure git to work somehow. Up until now I was cleaning the resources manually, but that's time-consuming and error-prone.
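The git-based approach suggested above could pair with a standalone cleanup pipeline that first restores the osia install output from the cluster-management repo, then runs the delete task against it. A hedged sketch using the Tekton catalog git-clone task (the repo URL and task names are assumptions for illustration):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: cleanup-ocp
spec:
  params:
    - name: cluster-name
      type: string
  workspaces:
    - name: shared-workspace
  tasks:
    - name: fetch-cluster-files
      taskRef:
        name: git-clone              # Tekton catalog task
      params:
        - name: url
          value: https://example.com/cluster-management.git  # assumed repo
      workspaces:
        - name: output               # git-clone writes the checkout here
          workspace: shared-workspace
    - name: delete-aws-ocp
      runAfter: [fetch-cluster-files]
      taskRef:
        name: delete-aws-ocp
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
```

With the install output committed after provisioning, this pipeline can be triggered independently of the deploy run, covering the case where the original workspace is already gone.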
Signed-off-by: Alexander Cristurean <[email protected]>
fff22cc to d4b3501
Signed-off-by: Alexander Cristurean <[email protected]>
3e05a67 to 371b73f
Signed-off-by: Alexander Cristurean <[email protected]>
935ad4a to 85d4250
✅ There are no secrets present in this pull request anymore. If these secrets were true positives and are still valid, we highly recommend you revoke them. 🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
7fc9fd4 to 8edc028
Signed-off-by: Alexander Cristurean <[email protected]>
8edc028 to e2dc730
Introduces a complete Tekton pipeline for provisioning and managing OCP clusters on AWS using osia. Includes provision-aws-ocp and delete-aws-ocp tasks that handle the cluster lifecycle, along with a full deploy-ocp pipeline that provisions clusters, installs Kuadrant/RHCL, runs tests, and performs cleanup.