From 0a7bdd255049afac67a7bd5655489379dc13ab39 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Thu, 30 Oct 2025 12:50:06 +0600 Subject: [PATCH 01/13] elasticsearch ops-request Signed-off-by: Bonusree --- docs/guides/elasticsearch/restart/index.md | 267 ++++++++++++++++++ .../elasticsearch/update-version/index.md | 59 ++++ 2 files changed, 326 insertions(+) create mode 100644 docs/guides/elasticsearch/restart/index.md create mode 100644 docs/guides/elasticsearch/update-version/index.md diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md new file mode 100644 index 000000000..d4f60094a --- /dev/null +++ b/docs/guides/elasticsearch/restart/index.md @@ -0,0 +1,267 @@ +--- +title: Elasticsearch Restart +menu: + docs_{{ .version }}: + identifier: es-restart-elasticsearch + name: Restart + parent: es-elasticsearch-guides + weight: 15 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- +> New to KubeDB? Please start [here](/docs/README.md). + +# Restart Elasticsearch + +KubeDB supports restarting a Elasticsearch database using a `ElasticsearchOpsRequest`. Restarting can be +useful if some pods are stuck in a certain state or not functioning correctly. + +This guide will demonstrate how to restart a Elasticsearch cluster using an OpsRequest. + +--- + +## Before You Begin + +- You need a running Kubernetes cluster and a properly configured `kubectl` command-line tool. If you don’t have a cluster, you can create one using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install the KubeDB CLI on your workstation and the KubeDB operator in your cluster by following the [installation steps](/docs/setup/README.md). + +- For better isolation, this tutorial uses a separate namespace called `demo`: + +```bash +kubectl create ns demo +namespace/demo created +``` + +> Note: YAML files used in this tutorial are stored in [docs/examples/Elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/Elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Deploy Elasticsearch + +In this section, we are going to deploy a Elasticsearch database using KubeDB. + +```yaml +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: es + namespace: demo +spec: + version: "8.0.40" + replicas: 3 + storageType: Durable + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + deletionPolicy: WipeOut +``` + +Let's create the `Elasticsearch` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/restart/yamls/es.yaml +Elasticsearch.kubedb.com/Elasticsearch created +``` +let's wait until all pods are in the `Running` state, + +```shell +kubectl get pods -n demo +NAME READY STATUS RESTARTS AGE +es-0 2/2 Running 0 6m28s +es-1 2/2 Running 0 6m28s +es-2 2/2 Running 0 6m28s +``` + + + +# Apply Restart opsRequest + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: restart + namespace: demo +spec: + type: Restart + databaseRef: + name: es + timeout: 3m + apply: Always +``` + +Here, + +- `spec.type` specifies the type of operation (Restart in this case). + +- `spec.databaseRef` references the Elasticsearch database. The OpsRequest must be created in the same namespace as the database. + +- `spec.timeout` the maximum time the operator will wait for the operation to finish before marking it as failed. 
+ +- `spec.apply` determines whether to always apply the operation (Always) or if the database phase is ready (IfReady). + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/restart/yamls/restart.yaml +ElasticsearchOpsRequest.ops.kubedb.com/restart created +``` + +In a Elasticsearch cluster, all pods act as primary nodes. When you apply a restart OpsRequest, the KubeDB operator will restart the pods sequentially, one by one, to maintain cluster availability. + +Let's watch the rolling restart process with: +```shell +NAME READY STATUS RESTARTS AGE +es-0 2/2 Terminating 0 56m +es-1 2/2 Running 0 55m +es-2 2/2 Running 0 54m +``` + +```shell +NAME READY STATUS RESTARTS AGE +es-0 2/2 Running 0 112s +es-1 2/2 Terminating 0 55m +es-2 2/2 Running 0 56m + +``` +```shell +NAME READY STATUS RESTARTS AGE +es-0 2/2 Running 0 112s +es-1 2/2 Running 0 42s +es-2 2/2 Terminating 0 56m + +``` + +```shell +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +restart Restart Successful 64m + +$ kubectl get Elasticsearchopsrequest -n demo restart -oyaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + annotations: + kubectl.kubernetes.io/last-applied-configuration: | + {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"ElasticsearchOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"es"},"timeout":"3m","type":"Restart"}} + creationTimestamp: "2025-10-17T05:45:40Z" + generation: 1 + name: restart + namespace: demo + resourceVersion: "22350" + uid: c6ef7130-9a31-4f64-ae49-1b4e332f0817 +spec: + apply: Always + databaseRef: + name: es + timeout: 3m + type: Restart +status: + conditions: + - lastTransitionTime: "2025-10-17T05:45:41Z" + message: 'Controller has started to Progress the ElasticsearchOpsRequest: demo/restart' + observedGeneration: 1 + reason: Running + status: "True" + type: Running + - lastTransitionTime: "2025-10-17T05:45:49Z" + message: evict pod; ConditionStatus:True; PodName:es-0 + observedGeneration: 1 + status: "True" + type: EvictPod--es-0 + - lastTransitionTime: "2025-10-17T05:45:49Z" + message: get pod; ConditionStatus:True; PodName:es-0 + observedGeneration: 1 + status: "True" + type: GetPod--es-0 + - lastTransitionTime: "2025-10-17T05:46:59Z" + message: evict pod; ConditionStatus:True; PodName:es-1 + observedGeneration: 1 + status: "True" + type: EvictPod--es-1 + - lastTransitionTime: "2025-10-17T05:46:59Z" + message: get pod; ConditionStatus:True; PodName:es-1 + observedGeneration: 1 + status: "True" + type: GetPod--es-1 + - lastTransitionTime: "2025-10-17T05:48:09Z" + message: evict pod; ConditionStatus:True; PodName:es-2 + observedGeneration: 1 + status: "True" + type: EvictPod--es-2 + - lastTransitionTime: "2025-10-17T05:48:09Z" + message: get pod; ConditionStatus:True; PodName:es-2 + observedGeneration: 1 + status: "True" + type: GetPod--es-2 + - lastTransitionTime: "2025-10-17T05:49:19Z" + message: 'Successfully started Elasticsearch pods for ElasticsearchOpsRequest: + demo/restart ' + observedGeneration: 1 + reason: RestartPodsSucceeded + status: "True" + type: Restart + - lastTransitionTime: "2025-10-17T05:49:19Z" + message: Controller has successfully restart the Elasticsearch replicas + observedGeneration: 1 + reason: Successful + status: "True" + type: Successful + observedGeneration: 1 + phase: Successful + +``` +**Verify 
Data Persistence** + +After the restart, reconnect to the database and verify that the previously created database still exists: + +```bash +$ kubectl exec -it -n demo es-0 -- mysql -u root --password='kP!VVJ2e~DUtcD*D' +Defaulted container "Elasticsearch" out of: Elasticsearch, px-coordinator, px-init (init) +mysql: [Warning] Using a password on the command line interface can be insecure. +Welcome to the MySQL monitor. Commands end with ; or \g. +Your MySQL connection id is 112 +Server version: 8.0.40-31.1 Percona XtraDB Cluster (GPL), Release rel31, Revision 4b32153, WSREP version 26.1.4.3 + +Copyright (c) 2009-2024 Percona LLC and/or its affiliates +Copyright (c) 2000, 2024, Oracle and/or its affiliates. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +mysql> show databases; ++--------------------+ +| Database | ++--------------------+ +| information_schema | +| kubedb_system | +| mysql | +| performance_schema | +| shastriya | +| sys | ++--------------------+ +6 rows in set (0.02 sec) + +mysql> exit +Bye +``` +## Cleaning up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete Elasticsearchopsrequest -n demo restart +kubectl delete Elasticsearch -n demo Elasticsearch +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/Elasticsearch/index.md). +- Detail concepts of [ElasticsearchopsRequest object](/docs/guides/elasticsearch/concepts/opsrequest/index.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).. diff --git a/docs/guides/elasticsearch/update-version/index.md b/docs/guides/elasticsearch/update-version/index.md new file mode 100644 index 000000000..52dd38b2d --- /dev/null +++ b/docs/guides/elasticsearch/update-version/index.md @@ -0,0 +1,59 @@ +--- +title: Updating Elasticsearch Overview +menu: + docs_{{ .version }}: + identifier: guides-Elasticsearch-updating-overview + name: Overview + parent: guides-Elasticsearch-updating + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Updating Elasticsearch version + +This guide will give you an overview of how KubeDB ops manager updates the version of `Elasticsearch` database. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/Elasticsearch/concepts/Elasticsearch.md) + - [ElasticsearchOpsRequest](/docs/guides/Elasticsearch/concepts/opsrequest.md) + +## How update Process Works + +The following diagram shows how KubeDB KubeDB ops manager used to update the version of `Elasticsearch`. Open the image in a new tab to see the enlarged version. + +[//]: # (
)

[//]: # (  Elasticsearch update Flow)

[//]: # (  Fig: updating Process of Elasticsearch)

[//]: # (
) + +The updating process consists of the following steps: + +1. At first, a user creates a `Elasticsearch` cr. + +2. `KubeDB-Provisioner` operator watches for the `Elasticsearch` cr. + +3. When it finds one, it creates a `PetSet` and related necessary stuff like secret, service, etc. + +4. Then, in order to update the version of the `Elasticsearch` database the user creates a `ElasticsearchOpsRequest` cr with the desired version. + +5. `KubeDB-ops-manager` operator watches for `ElasticsearchOpsRequest`. + +6. When it finds one, it Pauses the `Elasticsearch` object so that the `KubeDB-Provisioner` operator doesn't perform any operation on the `Elasticsearch` during the updating process. + +7. By looking at the target version from `ElasticsearchOpsRequest` cr, In case of major update `KubeDB-ops-manager` does some pre-update steps as we need old bin and lib files to update from current to target Elasticsearch version. +8. Then By looking at the target version from `ElasticsearchOpsRequest` cr, `KubeDB-ops-manager` operator updates the images of the `PetSet` for updating versions. + + +9. After successful upgradation of the `PetSet` and its `Pod` images, the `KubeDB-ops-manager` updates the image of the `Elasticsearch` object to reflect the updated cluster state. + +10. After successful upgradation of `Elasticsearch` object, the `KubeDB` ops manager resumes the `Elasticsearch` object so that the `KubeDB-provisioner` can resume its usual operations. + +In the next doc, we are going to show a step by step guide on updating of a Elasticsearch database using update operation. \ No newline at end of file From 0e8be988e8436d0745a98cb93f080675863e13c8 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Thu, 6 Nov 2025 14:17:49 +0600 Subject: [PATCH 02/13] updateversion Signed-off-by: Bonusree --- .../update-version/elasticsearch.yaml | 19 ++ .../update-version/update-version.yaml | 11 + .../elasticsearch/update-version/_index.md | 0 .../update-version/elasticsearch.md | 322 ++++++++++++++++++ .../update-version/{index.md => overview.md} | 0 5 files changed, 352 insertions(+) create mode 100644 docs/examples/elasticsearch/update-version/elasticsearch.yaml create mode 100644 docs/examples/elasticsearch/update-version/update-version.yaml create mode 100644 docs/guides/elasticsearch/update-version/_index.md create mode 100644 docs/guides/elasticsearch/update-version/elasticsearch.md rename docs/guides/elasticsearch/update-version/{index.md => overview.md} (100%) diff --git a/docs/examples/elasticsearch/update-version/elasticsearch.yaml b/docs/examples/elasticsearch/update-version/elasticsearch.yaml new file mode 100644 index 000000000..5deb703ed --- /dev/null +++ b/docs/examples/elasticsearch/update-version/elasticsearch.yaml @@ -0,0 +1,19 @@ +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: es-demo + namespace: demo +spec: + deletionPolicy: Delete + enableSSL: true + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: local-path + storageType: Durable + version: xpack-9.1.3 + #ghcr.io/kubedb/kubedb-provisioner:v0.59.0 diff --git a/docs/examples/elasticsearch/update-version/update-version.yaml b/docs/examples/elasticsearch/update-version/update-version.yaml new file mode 100644 index 000000000..29230f23d --- /dev/null +++ b/docs/examples/elasticsearch/update-version/update-version.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: es-demo-update + namespace: demo +spec: + 
type: UpdateVersion + databaseRef: + name: es-demo + updateVersion: + targetVersion: xpack-9.1.4 \ No newline at end of file diff --git a/docs/guides/elasticsearch/update-version/_index.md b/docs/guides/elasticsearch/update-version/_index.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/guides/elasticsearch/update-version/elasticsearch.md b/docs/guides/elasticsearch/update-version/elasticsearch.md new file mode 100644 index 000000000..67c4c1467 --- /dev/null +++ b/docs/guides/elasticsearch/update-version/elasticsearch.md @@ -0,0 +1,322 @@ +--- +title: Update Version of Elasticsearch +menu: + docs_{{ .version }}: + identifier: es-update-version-Elasticsearch + name: Elasticsearch + parent: es-update-version + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Update version of Elasticsearch + +This guide will show you how to use `KubeDB` Ops-manager operator to update the version of `Elasticsearch` Combined or Topology cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + - [Updating Overview](/docs/guides/elasticsearch/update-version/elasticsearch.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/elasticsearch](/docs/examples/elasticsearch) directory of [kubedb/docs](https://github.com/kube/docs) repository. + +## Prepare Elasticsearch + +Now, we are going to deploy a `Elasticsearch` replicaset database with version `xpack-8.11.1`. + +### Deploy Elasticsearch + +In this section, we are going to deploy a Elasticsearch topology cluster. Then, in the next section we will update the version using `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: es-demo + namespace: demo +spec: + deletionPolicy: Delete + enableSSL: true + replicas: 3 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: local-path + storageType: Durable + version: xpack-9.1.3 + +``` + +Let's create the `Elasticsearch` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/update-version/Elasticsearch.yaml +Elasticsearch.kubedb.com/es-demo created +``` + +Now, wait until `es-demo` created has status `Ready`. i.e, + +```bash +$ kubectl get es -n demo +NAME VERSION STATUS AGE +es-demo xpack-9.1.3 Ready 9m10s + +``` + +We are now ready to apply the `ElasticsearchOpsRequest` CR to update. + +### update Elasticsearch Version + +Here, we are going to update `Elasticsearch` from `xpack-9.1.3` to `xpack-9.1.4`. 
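
Before creating the ops request, it is worth confirming that the target version actually exists in the KubeDB catalog installed in your cluster. A minimal check (the listed names and columns depend on the catalog release you have installed):

```bash
# List the Elasticsearch versions registered in the KubeDB catalog
$ kubectl get elasticsearchversions

# Inspect the intended target version in detail
$ kubectl get elasticsearchversions xpack-9.1.4 -o yaml
```

If the target version does not appear in this list, the `UpdateVersion` request is expected to fail validation, so fix the catalog installation first.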
+ +#### Create ElasticsearchOpsRequest: + +In order to update the version, we have to create a `ElasticsearchOpsRequest` CR with your desired version that is supported by `KubeDB`. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: es-demo-update + namespace: demo +spec: + type: UpdateVersion + databaseRef: + name: es-demo + updateVersion: + targetVersion: xpack-9.1.4 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing operation on `es-demo` Elasticsearch. +- `spec.type` specifies that we are going to perform `UpdateVersion` on our database. +- `spec.updateVersion.targetVersion` specifies the expected version of the database `xpack-8.16.4`. + +> **Note:** If you want to update combined Elasticsearch, you just refer to the `Elasticsearch` combined object name in `spec.databaseRef.name`. To create a combined Elasticsearch, you can refer to the [Elasticsearch Combined](/docs/guides/elasticsearch/clustering/combined-cluster/index.md) guide. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/update-version/update-version.yaml +Elasticsearchopsrequest.ops.kubedb.com/Elasticsearch-update-version created +``` + +#### Verify Elasticsearch version updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the image of `Elasticsearch` object and related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +Elasticsearch-update-version UpdateVersion Successful 2m6s +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to update the database version. 
+ +```bash +$ kubectl describe Elasticsearchopsrequest -n demo es-demo-update +Name: es-demo-update +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-06T05:19:15Z + Generation: 1 + Resource Version: 609353 + UID: 722d8557-a6c6-4412-87d4-61faee8a3be2 +Spec: + Apply: IfReady + Database Ref: + Name: es-demo + Type: UpdateVersion + Update Version: + Target Version: xpack-9.1.4 +Status: + Conditions: + Last Transition Time: 2025-11-06T05:19:15Z + Message: Elasticsearch ops request is updating database version + Observed Generation: 1 + Reason: UpdateVersion + Status: True + Type: UpdateVersion + Last Transition Time: 2025-11-06T05:19:18Z + Message: Successfully updated PetSets + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2025-11-06T05:19:23Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-0 + Last Transition Time: 2025-11-06T05:19:23Z + Message: create es client; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-0 + Last Transition Time: 2025-11-06T05:19:23Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-demo-0 + Last Transition Time: 2025-11-06T05:19:23Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-0 + Last Transition Time: 2025-11-06T05:21:03Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-06T05:19:58Z + Message: re enable shard allocation; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReEnableShardAllocation + Last Transition Time: 2025-11-06T05:20:03Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-1 + Last Transition Time: 2025-11-06T05:20:03Z + Message: create es client; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-1 + Last Transition Time: 2025-11-06T05:20:03Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-demo-1 + Last Transition Time: 2025-11-06T05:20:03Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-1 + Last Transition Time: 2025-11-06T05:20:33Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-2 + Last Transition Time: 2025-11-06T05:20:33Z + Message: create es client; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-2 + Last Transition Time: 2025-11-06T05:20:33Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-demo-2 + Last Transition Time: 2025-11-06T05:20:33Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-2 + Last Transition Time: 2025-11-06T05:21:08Z + Message: Successfully updated all nodes + Observed Generation: 1 + 
Reason: RestartPods + Status: True + Type: RestartPods + Last Transition Time: 2025-11-06T05:21:08Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 29m KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-demo + Warning pod exists; ConditionStatus:True; PodName:es-demo-0 29m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:True; PodName:es-demo-0 29m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-demo-0 29m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-demo-0 + Warning evict pod; ConditionStatus:True; PodName:es-demo-0 29m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:False 29m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 29m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 29m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-demo-1 29m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-1 + Warning create es client; ConditionStatus:True; PodName:es-demo-1 29m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-demo-1 29m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-demo-1 + Warning evict pod; ConditionStatus:True; PodName:es-demo-1 29m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-1 + Warning create es client; ConditionStatus:False 29m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 28m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 28m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-demo-2 28m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-2 + Warning create es client; ConditionStatus:True; PodName:es-demo-2 28m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-2 + Warning disable shard allocation; ConditionStatus:True; PodName:es-demo-2 28m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-demo-2 + Warning evict pod; ConditionStatus:True; PodName:es-demo-2 28m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-2 + Warning create es client; ConditionStatus:False 28m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 28m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 28m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Normal RestartPods 28m KubeDB Ops-manager Operator Successfully updated all nodes + Normal 
ResumeDatabase 28m KubeDB Ops-manager Operator Resuming Elasticsearch + Normal ResumeDatabase 28m KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-demo + Normal ResumeDatabase 28m KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-demo + Normal Successful 28m KubeDB Ops-manager Operator Successfully Updated Database + +``` + +Now, we are going to verify whether the `Elasticsearch` and the related `PetSets` and their `Pods` have the new version image. Let's check, + +```bash +$ kubectl get es -n demo es-demo -o=jsonpath='{.spec.version}{"\n"}' +xpack-9.1.4 + +$ kubectl get petset -n demo es-demo -o=jsonpath='{.spec.template.spec.containers[0].image}{"\n"}' +ghcr.io/appscode-images/elastic:9.1.4@sha256:e0b89e3ace47308fa5fa842823bc622add3733e47c1067cd1e6afed2cfd317ca + +$ kubectl get pods -n demo es-demo-0 -o=jsonpath='{.spec.containers[0].image}{"\n"}' +ghcr.io/appscode-images/elastic:9.1.4 + +``` + +You can see from above, our `Elasticsearch` has been updated with the new version. So, the updateVersion process is successfully completed. + +> **NOTE:** If you want to update Opensearch, you can follow the same steps as above but using `ElasticsearchOpsRequest` CRD. You can visit [OpenSearch ](/docs/guides/elasticsearch/quickstart/overview/opensearch) guide for more details. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete Elasticsearchopsrequest -n demo es-demo-update +kubectl delete es -n demo es-demo +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch/index.md). +- Detail concepts of [ElasticsearchOpsRequest object](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md). +- Detailed concept of [Elasticesearch Version](/docs/guides/elasticsearch/concepts/catalog/index.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/elasticsearch/update-version/index.md b/docs/guides/elasticsearch/update-version/overview.md similarity index 100% rename from docs/guides/elasticsearch/update-version/index.md rename to docs/guides/elasticsearch/update-version/overview.md From 20ce15bb8569f6c4117a037ac7462cd30b5de298 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Fri, 7 Nov 2025 10:54:34 +0600 Subject: [PATCH 03/13] rstart, scalling Signed-off-by: Bonusree --- docs/examples/elasticsearch/restart.yaml | 11 + docs/guides/elasticsearch/restart/index.md | 218 ++-- docs/guides/elasticsearch/scaling/_index.md | 10 + .../scaling/horizontal/combined.md | 969 ++++++++++++++++++ .../elasticsearch/scaling/horizontal/index.md | 0 .../scaling/horizontal/overview.md | 54 + .../elasticsearch/scaling/vertical/index.md | 0 .../elasticsearch/update-version/_index.md | 10 + .../update-version/elasticsearch.md | 6 +- .../elasticsearch/update-version/overview.md | 2 +- 10 files changed, 1180 insertions(+), 100 deletions(-) create mode 100644 docs/examples/elasticsearch/restart.yaml create mode 100644 docs/guides/elasticsearch/scaling/_index.md create mode 100644 docs/guides/elasticsearch/scaling/horizontal/combined.md create mode 100644 docs/guides/elasticsearch/scaling/horizontal/index.md create mode 100644 docs/guides/elasticsearch/scaling/horizontal/overview.md create mode 100644 docs/guides/elasticsearch/scaling/vertical/index.md diff --git a/docs/examples/elasticsearch/restart.yaml b/docs/examples/elasticsearch/restart.yaml new file mode 100644 index 000000000..5cbbf6a26 --- /dev/null +++ b/docs/examples/elasticsearch/restart.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: restart + namespace: demo +spec: + type: Restart + databaseRef: + name: es-demo + timeout: 10m + apply: Always \ No newline at end of file diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md index d4f60094a..6491095d4 100644 --- a/docs/guides/elasticsearch/restart/index.md +++ b/docs/guides/elasticsearch/restart/index.md @@ -7,8 +7,8 @@ menu: parent: es-elasticsearch-guides weight: 15 menu_name: docs_{{ .version }} -section_menu_id: guides --- + > New to KubeDB? Please start [here](/docs/README.md). # Restart Elasticsearch @@ -43,35 +43,37 @@ In this section, we are going to deploy a Elasticsearch database using KubeDB. 
apiVersion: kubedb.com/v1 kind: Elasticsearch metadata: - name: es + name: es-demo namespace: demo spec: - version: "8.0.40" + deletionPolicy: Delete + enableSSL: true replicas: 3 - storageType: Durable storage: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - deletionPolicy: WipeOut + storageClassName: local-path + storageType: Durable + version: xpack-9.1.3 ``` Let's create the `Elasticsearch` CR we have shown above, ```bash -$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/restart/yamls/es.yaml +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/update-version/elasticsearch.yaml Elasticsearch.kubedb.com/Elasticsearch created ``` let's wait until all pods are in the `Running` state, ```shell kubectl get pods -n demo -NAME READY STATUS RESTARTS AGE -es-0 2/2 Running 0 6m28s -es-1 2/2 Running 0 6m28s -es-2 2/2 Running 0 6m28s +NAME READY STATUS RESTARTS AGE +es-demo-0 1/1 Running 0 6m28s +es-demo-1 1/1 Running 0 6m28s +es-demo-2 1/1 Running 0 6m28s ``` @@ -87,14 +89,14 @@ metadata: spec: type: Restart databaseRef: - name: es - timeout: 3m + name: es-demo + timeout: 10m apply: Always ``` Here, -- `spec.type` specifies the type of operation (Restart in this case). +- `spec.type` specifies the type of operation (Restart in this case). `Restart` is used to perform a smart restart of the Elasticsearch cluster. - `spec.databaseRef` references the Elasticsearch database. The OpsRequest must be created in the same namespace as the database. @@ -105,7 +107,7 @@ Here, Let's create the `ElasticsearchOpsRequest` CR we have shown above, ```bash -$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/restart/yamls/restart.yaml +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/restart.yaml ElasticsearchOpsRequest.ops.kubedb.com/restart created ``` @@ -113,24 +115,24 @@ In a Elasticsearch cluster, all pods act as primary nodes. 
When you apply a rest Let's watch the rolling restart process with: ```shell -NAME READY STATUS RESTARTS AGE -es-0 2/2 Terminating 0 56m -es-1 2/2 Running 0 55m -es-2 2/2 Running 0 54m +NAME READY STATUS RESTARTS AGE +es-demo-0 1/1 Terminating 0 56m +es-demo-1 1/1 Running 0 55m +es-demo-2 1/1 Running 0 54m ``` ```shell -NAME READY STATUS RESTARTS AGE -es-0 2/2 Running 0 112s -es-1 2/2 Terminating 0 55m -es-2 2/2 Running 0 56m +NAME READY STATUS RESTARTS AGE +es-demo-0 1/1 Running 0 112s +es-demo-1 1/1 Terminating 0 55m +es-demo-2 1/1 Running 0 56m ``` ```shell -NAME READY STATUS RESTARTS AGE -es-0 2/2 Running 0 112s -es-1 2/2 Running 0 42s -es-2 2/2 Terminating 0 56m +NAME READY STATUS RESTARTS AGE +es-demo-0 1/1 Running 0 112s +es-demo-1 1/1 Running 0 42s +es-demo-2 1/1 Terminating 0 56m ``` @@ -145,66 +147,85 @@ kind: ElasticsearchOpsRequest metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"ElasticsearchOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"es"},"timeout":"3m","type":"Restart"}} - creationTimestamp: "2025-10-17T05:45:40Z" + {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"ElasticsearchOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"es-demo"},"timeout":"10m","type":"Restart"}} + creationTimestamp: "2025-11-06T08:21:19Z" generation: 1 name: restart namespace: demo - resourceVersion: "22350" - uid: c6ef7130-9a31-4f64-ae49-1b4e332f0817 + resourceVersion: "613519" + uid: e5eb346e-7ca8-4f1d-9ca5-0869f49a8134 spec: apply: Always databaseRef: - name: es - timeout: 3m + name: es-demo + timeout: 10m type: Restart status: conditions: - - lastTransitionTime: "2025-10-17T05:45:41Z" - message: 'Controller has started to Progress the ElasticsearchOpsRequest: demo/restart' + - lastTransitionTime: "2025-11-06T08:21:20Z" + message: Elasticsearch ops request is restarting nodes observedGeneration: 1 - reason: Running + reason: Restart status: "True" - type: Running - - lastTransitionTime: "2025-10-17T05:45:49Z" - message: evict pod; ConditionStatus:True; PodName:es-0 + type: Restart + - lastTransitionTime: "2025-11-06T08:21:28Z" + message: pod exists; ConditionStatus:True; PodName:es-demo-0 observedGeneration: 1 status: "True" - type: EvictPod--es-0 - - lastTransitionTime: "2025-10-17T05:45:49Z" - message: get pod; ConditionStatus:True; PodName:es-0 + type: PodExists--es-demo-0 + - lastTransitionTime: "2025-11-06T08:21:28Z" + message: create es client; ConditionStatus:True; PodName:es-demo-0 observedGeneration: 1 status: "True" - type: GetPod--es-0 - - lastTransitionTime: "2025-10-17T05:46:59Z" - message: evict pod; ConditionStatus:True; PodName:es-1 + type: CreateEsClient--es-demo-0 + - lastTransitionTime: "2025-11-06T08:21:28Z" + message: evict pod; ConditionStatus:True; PodName:es-demo-0 observedGeneration: 1 status: "True" - type: EvictPod--es-1 - - lastTransitionTime: "2025-10-17T05:46:59Z" - message: get pod; ConditionStatus:True; PodName:es-1 + type: EvictPod--es-demo-0 + - lastTransitionTime: "2025-11-06T08:22:53Z" + message: create es client; ConditionStatus:True observedGeneration: 1 status: "True" - type: GetPod--es-1 - - lastTransitionTime: "2025-10-17T05:48:09Z" - message: evict pod; ConditionStatus:True; PodName:es-2 + type: CreateEsClient + - lastTransitionTime: "2025-11-06T08:21:58Z" + message: pod exists; ConditionStatus:True; PodName:es-demo-1 
observedGeneration: 1 status: "True" - type: EvictPod--es-2 - - lastTransitionTime: "2025-10-17T05:48:09Z" - message: get pod; ConditionStatus:True; PodName:es-2 + type: PodExists--es-demo-1 + - lastTransitionTime: "2025-11-06T08:21:58Z" + message: create es client; ConditionStatus:True; PodName:es-demo-1 observedGeneration: 1 status: "True" - type: GetPod--es-2 - - lastTransitionTime: "2025-10-17T05:49:19Z" - message: 'Successfully started Elasticsearch pods for ElasticsearchOpsRequest: - demo/restart ' + type: CreateEsClient--es-demo-1 + - lastTransitionTime: "2025-11-06T08:21:58Z" + message: evict pod; ConditionStatus:True; PodName:es-demo-1 observedGeneration: 1 - reason: RestartPodsSucceeded status: "True" - type: Restart - - lastTransitionTime: "2025-10-17T05:49:19Z" - message: Controller has successfully restart the Elasticsearch replicas + type: EvictPod--es-demo-1 + - lastTransitionTime: "2025-11-06T08:22:28Z" + message: pod exists; ConditionStatus:True; PodName:es-demo-2 + observedGeneration: 1 + status: "True" + type: PodExists--es-demo-2 + - lastTransitionTime: "2025-11-06T08:22:28Z" + message: create es client; ConditionStatus:True; PodName:es-demo-2 + observedGeneration: 1 + status: "True" + type: CreateEsClient--es-demo-2 + - lastTransitionTime: "2025-11-06T08:22:28Z" + message: evict pod; ConditionStatus:True; PodName:es-demo-2 + observedGeneration: 1 + status: "True" + type: EvictPod--es-demo-2 + - lastTransitionTime: "2025-11-06T08:22:58Z" + message: Successfully restarted all nodes + observedGeneration: 1 + reason: RestartNodes + status: "True" + type: RestartNodes + - lastTransitionTime: "2025-11-06T08:22:58Z" + message: Successfully completed the modification process. observedGeneration: 1 reason: Successful status: "True" @@ -216,52 +237,57 @@ status: **Verify Data Persistence** After the restart, reconnect to the database and verify that the previously created database still exists: +Connect to the Cluster: ```bash -$ kubectl exec -it -n demo es-0 -- mysql -u root --password='kP!VVJ2e~DUtcD*D' -Defaulted container "Elasticsearch" out of: Elasticsearch, px-coordinator, px-init (init) -mysql: [Warning] Using a password on the command line interface can be insecure. -Welcome to the MySQL monitor. Commands end with ; or \g. -Your MySQL connection id is 112 -Server version: 8.0.40-31.1 Percona XtraDB Cluster (GPL), Release rel31, Revision 4b32153, WSREP version 26.1.4.3 - -Copyright (c) 2009-2024 Percona LLC and/or its affiliates -Copyright (c) 2000, 2024, Oracle and/or its affiliates. - -Oracle is a registered trademark of Oracle Corporation and/or its -affiliates. Other names may be trademarks of their respective -owners. - -Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 
- -mysql> show databases; -+--------------------+ -| Database | -+--------------------+ -| information_schema | -| kubedb_system | -| mysql | -| performance_schema | -| shastriya | -| sys | -+--------------------+ -6 rows in set (0.02 sec) - -mysql> exit -Bye +# Port-forward the service to local machine +$ kubectl port-forward -n demo svc/es-standalone 9200 +Forwarding from 127.0.0.1:9200 -> 9200 +Forwarding from [::1]:9200 -> 9200 ``` + +```bash +# Get admin username & password from k8s secret +$ kubectl get secret -n demo es-standalone-admin-cred -o jsonpath='{.data.username}' | base64 -d +elastic +$ kubectl get secret -n demo es-standalone-admin-cred -o jsonpath='{.data.password}' | base64 -d +d9QpQKiTcLNZx_gA + +# Check cluster health +$ curl -XGET -k -u "elastic:d9QpQKiTcLNZx_gA" "https://localhost:9200/_cluster/health?pretty" +{ + "cluster_name" : "es-demo", + "status" : "green", + "timed_out" : false, + "number_of_nodes" : 3, + "number_of_data_nodes" : 3, + "active_primary_shards" : 4, + "active_shards" : 8, + "relocating_shards" : 0, + "initializing_shards" : 0, + "unassigned_shards" : 0, + "unassigned_primary_shards" : 0, + "delayed_unassigned_shards" : 0, + "number_of_pending_tasks" : 0, + "number_of_in_flight_fetch" : 0, + "task_max_waiting_in_queue_millis" : 0, + "active_shards_percent_as_number" : 100.0 +} + +``` + ## Cleaning up To clean up the Kubernetes resources created by this tutorial, run: ```bash kubectl delete Elasticsearchopsrequest -n demo restart -kubectl delete Elasticsearch -n demo Elasticsearch +kubectl delete Elasticsearch -n demo es-demo kubectl delete ns demo ``` ## Next Steps -- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/Elasticsearch/index.md). -- Detail concepts of [ElasticsearchopsRequest object](/docs/guides/elasticsearch/concepts/opsrequest/index.md). -- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).. +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch/index.md). +- Detail concepts of [ElasticsearchopsRequest object](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md) diff --git a/docs/guides/elasticsearch/scaling/_index.md b/docs/guides/elasticsearch/scaling/_index.md new file mode 100644 index 000000000..120519674 --- /dev/null +++ b/docs/guides/elasticsearch/scaling/_index.md @@ -0,0 +1,10 @@ +--- +title: Elasticsearch Scaling +menu: + docs_{{ .version }}: + identifier: es-scaling-elasticsearch + name: Scaling + parent: es-elasticsearch-guides + weight: 15 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/elasticsearch/scaling/horizontal/combined.md b/docs/guides/elasticsearch/scaling/horizontal/combined.md new file mode 100644 index 000000000..5763aabca --- /dev/null +++ b/docs/guides/elasticsearch/scaling/horizontal/combined.md @@ -0,0 +1,969 @@ +--- +title: Horizontal Scaling Combined Elasticsearch +menu: + docs_{{ .version }}: + identifier: es-horizontal-scaling-combined + name: Combined Cluster + parent: es-horizontal-scaling + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Horizontal Scale Elasticsearch Combined Cluster + +This guide will show you how to use `KubeDB` Ops-manager operator to scale the Elasticsearch combined cluster. 
+ +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [Combined](/docs/guides/elasticsearch/clustering/combined-cluster/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + - [Horizontal Scaling Overview](/docs/guides/elasticsearch/scaling/horizontal/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/Elasticsearch](/docs/examples/elasticsearch) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Apply Horizontal Scaling on Combined Cluster + +Here, we are going to deploy a `Elasticsearch` combined cluster using a supported version by `KubeDB` operator. Then we are going to apply horizontal scaling on it. + +### Prepare Elasticsearch Combined cluster + +Now, we are going to deploy a `Elasticsearch` combined cluster with version `3.9.0`. + +### Deploy Elasticsearch combined cluster + +In this section, we are going to deploy a Elasticsearch combined cluster. Then, in the next section we will scale the cluster using `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: Elasticsearch-dev + namespace: demo +spec: + replicas: 2 + version: 3.9.0 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Elasticsearch` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/scaling/Elasticsearch-combined.yaml +Elasticsearch.kubedb.com/Elasticsearch-dev created +``` + +Now, wait until `Elasticsearch-dev` has status `Ready`. i.e, + +```bash +$ kubectl get es -n demo -w +NAME TYPE VERSION STATUS AGE +Elasticsearch-dev kubedb.com/v1 3.9.0 Provisioning 0s +Elasticsearch-dev kubedb.com/v1 3.9.0 Provisioning 24s +. +. +Elasticsearch-dev kubedb.com/v1 3.9.0 Ready 92s +``` + +Let's check the number of replicas has from Elasticsearch object, number of pods the petset have, + +```bash +$ kubectl get Elasticsearch -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +2 + +$ kubectl get petset -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +2 +``` + +We can see from both command that the cluster has 2 replicas. + +Also, we can verify the replicas of the combined from an internal Elasticsearch command by exec into a replica. 
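
Since this is an Elasticsearch cluster, the most direct check is the `_cat/nodes` API, which prints one row for every node that has joined the cluster. The sketch below is an assumption-laden example: the service name (`elasticsearch-dev`) and secret name (`elasticsearch-dev-elastic-cred`) follow common KubeDB naming, so replace them with the objects actually present in your namespace, and use `http` instead of `https` if TLS is not enabled on the cluster:

```bash
# Forward the client service to your workstation (adjust the service name to your cluster)
$ kubectl port-forward -n demo svc/elasticsearch-dev 9200

# In another terminal, read the admin credentials
# (the secret name may differ; run `kubectl get secrets -n demo` to confirm)
$ ES_USER=$(kubectl get secret -n demo elasticsearch-dev-elastic-cred -o jsonpath='{.data.username}' | base64 -d)
$ ES_PASS=$(kubectl get secret -n demo elasticsearch-dev-elastic-cred -o jsonpath='{.data.password}' | base64 -d)

# Each replica that has joined the cluster shows up as one row
$ curl -sk -u "$ES_USER:$ES_PASS" "https://localhost:9200/_cat/nodes?v"
```

With two replicas you should see two rows here; after the scaling operation later in this guide, the row count should match the new replica count.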
+ +Now let's exec to a instance and run a Elasticsearch internal command to check the number of replicas, + +```bash +$ kubectl exec -it -n demo Elasticsearch-dev-0 -- Elasticsearch-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +Elasticsearch-dev-0.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +Elasticsearch-dev-1.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + 
LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +We can see from the above output that the Elasticsearch has 2 nodes. + +We are now ready to apply the `ElasticsearchOpsRequest` CR to scale this cluster. + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the combined cluster to meet the desired number of replicas after scaling. + +#### Create ElasticsearchOpsRequest + +In order to scale up the replicas of the combined cluster, we have to create a `ElasticsearchOpsRequest` CR with our desired replicas. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-hscale-up-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: Elasticsearch-dev + horizontalScaling: + node: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `Elasticsearch-dev` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on Elasticsearch. +- `spec.horizontalScaling.node` specifies the desired replicas after scaling. 
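
The `node` field applies to combined clusters, where every replica carries all node roles. If your cluster runs in topology mode, the same `HorizontalScaling` request scales each node role separately through `spec.horizontalScaling.topology`. The following is only a hypothetical sketch; the exact role field names depend on the ElasticsearchOpsRequest API installed in your cluster, so confirm them with `kubectl explain elasticsearchopsrequest.spec.horizontalScaling --recursive` before using it:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
metadata:
  name: esops-hscale-topology
  namespace: demo
spec:
  type: HorizontalScaling
  databaseRef:
    name: es-topology          # a topology-mode Elasticsearch object (assumed name)
  horizontalScaling:
    topology:                  # role names below are assumptions; verify with kubectl explain
      master: 3
      data: 4
      ingest: 2
```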
+ +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/scaling/horizontal-scaling/Elasticsearch-hscale-up-combined.yaml +Elasticsearchopsrequest.ops.kubedb.com/esops-hscale-up-combined created +``` + +#### Verify Combined cluster replicas scaled up successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `Elasticsearch` object and related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ watch kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +esops-hscale-up-combined HorizontalScaling Successful 106s +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe Elasticsearchopsrequests -n demo esops-hscale-up-combined +Name: esops-hscale-up-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T10:19:56Z + Generation: 1 + Resource Version: 353093 + UID: f91de2da-82c4-4175-aab4-de0f3e1ce498 +Spec: + Apply: IfReady + Database Ref: + Name: Elasticsearch-dev + Horizontal Scaling: + Node: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2024-08-02T10:19:57Z + Message: Elasticsearch ops-request has started to horizontally scaling the nodes + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2024-08-02T10:20:05Z + Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--Elasticsearch-dev-0 + Last Transition Time: 2024-08-02T10:20:05Z + Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--Elasticsearch-dev-0 + Last Transition Time: 2024-08-02T10:20:15Z + Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--Elasticsearch-dev-0 + Last Transition Time: 2024-08-02T10:20:20Z + Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--Elasticsearch-dev-1 + Last Transition Time: 2024-08-02T10:20:20Z + Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--Elasticsearch-dev-1 + Last Transition Time: 2024-08-02T10:21:00Z + Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--Elasticsearch-dev-1 + Last Transition Time: 2024-08-02T10:21:05Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T10:22:15Z + Message: Successfully Scaled Up Server Node + Observed Generation: 1 + Reason: ScaleUpCombined + Status: True + Type: ScaleUpCombined + Last Transition Time: 2024-08-02T10:21:10Z + Message: patch pet setElasticsearch-dev; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPetSetElasticsearch-dev + Last Transition Time: 2024-08-02T10:22:10Z + 
Message: node in cluster; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: NodeInCluster + Last Transition Time: 2024-08-02T10:22:15Z + Message: Successfully completed horizontally scale Elasticsearch cluster + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 4m34s KubeDB Ops-manager Operator Start processing for ElasticsearchOpsRequest: demo/esops-hscale-up-combined + Normal Starting 4m34s KubeDB Ops-manager Operator Pausing Elasticsearch databse: demo/Elasticsearch-dev + Normal Successful 4m34s KubeDB Ops-manager Operator Successfully paused Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-up-combined + Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 4m26s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 4m26s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 4m21s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 + Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 4m16s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 4m11s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 4m11s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 4m6s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 + Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 3m31s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Normal RestartNodes 3m26s KubeDB Ops-manager Operator Successfully restarted all nodes + Warning patch pet setElasticsearch-dev; ConditionStatus:True 3m21s KubeDB Ops-manager Operator patch pet setElasticsearch-dev; ConditionStatus:True + Warning node in cluster; ConditionStatus:False 2m46s KubeDB Ops-manager Operator node in cluster; ConditionStatus:False + Warning node in cluster; ConditionStatus:True 2m21s KubeDB Ops-manager Operator node in cluster; ConditionStatus:True + Normal ScaleUpCombined 2m16s KubeDB Ops-manager Operator Successfully Scaled Up Server Node + Normal Starting 2m16s KubeDB Ops-manager Operator Resuming Elasticsearch database: demo/Elasticsearch-dev + Normal Successful 2m16s KubeDB Ops-manager Operator Successfully resumed Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-up-combined +``` + +Now, we are going to verify the number of replicas this cluster has from the Elasticsearch object, number of pods the petset have, + +```bash +$ kubectl get Elasticsearch -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +3 + +$ kubectl get petset -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +3 +``` + +Now let's connect to a Elasticsearch instance and run a Elasticsearch internal command to check the number of replicas, +```bash +$ kubectl exec 
-it -n demo Elasticsearch-dev-0 -- Elasticsearch-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +Elasticsearch-dev-0.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +Elasticsearch-dev-1.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 
[usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +Elasticsearch-dev-2.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 2 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + 
CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +From all the above outputs we can see that the brokers of the combined Elasticsearch is `3`. That means we have successfully scaled up the replicas of the Elasticsearch combined cluster. + +### Scale Down Replicas + +Here, we are going to scale down the replicas of the Elasticsearch combined cluster to meet the desired number of replicas after scaling. + +#### Create ElasticsearchOpsRequest + +In order to scale down the replicas of the Elasticsearch combined cluster, we have to create a `ElasticsearchOpsRequest` CR with our desired replicas. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-hscale-down-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: Elasticsearch-dev + horizontalScaling: + node: 2 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `Elasticsearch-dev` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on Elasticsearch. +- `spec.horizontalScaling.node` specifies the desired replicas after scaling. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/scaling/horizontal-scaling/Elasticsearch-hscale-down-combined.yaml +Elasticsearchopsrequest.ops.kubedb.com/esops-hscale-down-combined created +``` + +#### Verify Combined cluster replicas scaled down successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `Elasticsearch` object and related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ watch kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +esops-hscale-down-combined HorizontalScaling Successful 2m32s +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. 
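You can also check just the final phase with a `jsonpath` query, a minimal sketch using the same OpsRequest name as above:

```bash
# Print only the phase of the scale-down OpsRequest; expected output: Successful
kubectl get elasticsearchopsrequest -n demo esops-hscale-down-combined -o jsonpath='{.status.phase}'
```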
+ +```bash +$ kubectl describe Elasticsearchopsrequests -n demo esops-hscale-down-combined +Name: esops-hscale-down-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2024-08-02T10:46:39Z + Generation: 1 + Resource Version: 354924 + UID: f1a0b85d-1a86-463c-a3e4-72947badd108 +Spec: + Apply: IfReady + Database Ref: + Name: Elasticsearch-dev + Horizontal Scaling: + Node: 2 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2024-08-02T10:46:39Z + Message: Elasticsearch ops-request has started to horizontally scaling the nodes + Observed Generation: 1 + Reason: HorizontalScaling + Status: True + Type: HorizontalScaling + Last Transition Time: 2024-08-02T10:47:07Z + Message: Successfully Scaled Down Server Node + Observed Generation: 1 + Reason: ScaleDownCombined + Status: True + Type: ScaleDownCombined + Last Transition Time: 2024-08-02T10:46:57Z + Message: reassign partitions; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReassignPartitions + Last Transition Time: 2024-08-02T10:46:57Z + Message: is pet set patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetSetPatched + Last Transition Time: 2024-08-02T10:46:57Z + Message: get pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPod + Last Transition Time: 2024-08-02T10:46:58Z + Message: delete pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeletePvc + Last Transition Time: 2024-08-02T10:47:02Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-08-02T10:47:13Z + Message: successfully reconciled the Elasticsearch with modified node + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-08-02T10:47:18Z + Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Observed Generation: 1 + Status: True + Type: GetPod--Elasticsearch-dev-0 + Last Transition Time: 2024-08-02T10:47:18Z + Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Observed Generation: 1 + Status: True + Type: EvictPod--Elasticsearch-dev-0 + Last Transition Time: 2024-08-02T10:47:28Z + Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--Elasticsearch-dev-0 + Last Transition Time: 2024-08-02T10:47:33Z + Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Observed Generation: 1 + Status: True + Type: GetPod--Elasticsearch-dev-1 + Last Transition Time: 2024-08-02T10:47:33Z + Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Observed Generation: 1 + Status: True + Type: EvictPod--Elasticsearch-dev-1 + Last Transition Time: 2024-08-02T10:48:53Z + Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Observed Generation: 1 + Status: True + Type: CheckPodRunning--Elasticsearch-dev-1 + Last Transition Time: 2024-08-02T10:48:58Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2024-08-02T10:48:58Z + Message: Successfully completed horizontally scale Elasticsearch cluster + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ 
---- ---- ------- + Normal Starting 2m39s KubeDB Ops-manager Operator Start processing for ElasticsearchOpsRequest: demo/esops-hscale-down-combined + Normal Starting 2m39s KubeDB Ops-manager Operator Pausing Elasticsearch databse: demo/Elasticsearch-dev + Normal Successful 2m39s KubeDB Ops-manager Operator Successfully paused Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-down-combined + Warning reassign partitions; ConditionStatus:True 2m21s KubeDB Ops-manager Operator reassign partitions; ConditionStatus:True + Warning is pet set patched; ConditionStatus:True 2m21s KubeDB Ops-manager Operator is pet set patched; ConditionStatus:True + Warning get pod; ConditionStatus:True 2m21s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 2m20s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 2m20s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Normal ScaleDownCombined 2m11s KubeDB Ops-manager Operator Successfully Scaled Down Server Node + Normal UpdatePetSets 2m5s KubeDB Ops-manager Operator successfully reconciled the Elasticsearch with modified node + Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 2m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 2m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 115s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 + Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 110s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 105s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 105s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 100s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 + Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 25s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Normal RestartNodes 20s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal Starting 20s KubeDB Ops-manager Operator Resuming Elasticsearch database: demo/Elasticsearch-dev + Normal Successful 20s KubeDB Ops-manager Operator Successfully resumed Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-down-combined +``` + +Now, we are going to verify the number of replicas this cluster has from the Elasticsearch object, number of pods the petset have, + +```bash +$ kubectl get Elasticsearch -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +2 + +$ kubectl get petset -n demo 
Elasticsearch-dev -o json | jq '.spec.replicas' +2 +``` + +Now let's connect to a Elasticsearch instance and run a Elasticsearch internal command to check the number of replicas, + +```bash +$ kubectl exec -it -n demo Elasticsearch-dev-0 -- Elasticsearch-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties +Elasticsearch-dev-0.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +Elasticsearch-dev-1.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( + Produce(0): 0 to 9 [usable: 9], + Fetch(1): 0 to 15 [usable: 15], + ListOffsets(2): 0 to 8 [usable: 8], + Metadata(3): 0 to 12 [usable: 12], + LeaderAndIsr(4): UNSUPPORTED, + StopReplica(5): UNSUPPORTED, + UpdateMetadata(6): UNSUPPORTED, + ControlledShutdown(7): UNSUPPORTED, + OffsetCommit(8): 0 to 8 [usable: 8], + OffsetFetch(9): 0 to 8 [usable: 8], + FindCoordinator(10): 0 to 4 [usable: 4], + JoinGroup(11): 
0 to 9 [usable: 9], + Heartbeat(12): 0 to 4 [usable: 4], + LeaveGroup(13): 0 to 5 [usable: 5], + SyncGroup(14): 0 to 5 [usable: 5], + DescribeGroups(15): 0 to 5 [usable: 5], + ListGroups(16): 0 to 4 [usable: 4], + SaslHandshake(17): 0 to 1 [usable: 1], + ApiVersions(18): 0 to 3 [usable: 3], + CreateTopics(19): 0 to 7 [usable: 7], + DeleteTopics(20): 0 to 6 [usable: 6], + DeleteRecords(21): 0 to 2 [usable: 2], + InitProducerId(22): 0 to 4 [usable: 4], + OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], + AddPartitionsToTxn(24): 0 to 4 [usable: 4], + AddOffsetsToTxn(25): 0 to 3 [usable: 3], + EndTxn(26): 0 to 3 [usable: 3], + WriteTxnMarkers(27): 0 to 1 [usable: 1], + TxnOffsetCommit(28): 0 to 3 [usable: 3], + DescribeAcls(29): 0 to 3 [usable: 3], + CreateAcls(30): 0 to 3 [usable: 3], + DeleteAcls(31): 0 to 3 [usable: 3], + DescribeConfigs(32): 0 to 4 [usable: 4], + AlterConfigs(33): 0 to 2 [usable: 2], + AlterReplicaLogDirs(34): 0 to 2 [usable: 2], + DescribeLogDirs(35): 0 to 4 [usable: 4], + SaslAuthenticate(36): 0 to 2 [usable: 2], + CreatePartitions(37): 0 to 3 [usable: 3], + CreateDelegationToken(38): 0 to 3 [usable: 3], + RenewDelegationToken(39): 0 to 2 [usable: 2], + ExpireDelegationToken(40): 0 to 2 [usable: 2], + DescribeDelegationToken(41): 0 to 3 [usable: 3], + DeleteGroups(42): 0 to 2 [usable: 2], + ElectLeaders(43): 0 to 2 [usable: 2], + IncrementalAlterConfigs(44): 0 to 1 [usable: 1], + AlterPartitionReassignments(45): 0 [usable: 0], + ListPartitionReassignments(46): 0 [usable: 0], + OffsetDelete(47): 0 [usable: 0], + DescribeClientQuotas(48): 0 to 1 [usable: 1], + AlterClientQuotas(49): 0 to 1 [usable: 1], + DescribeUserScramCredentials(50): 0 [usable: 0], + AlterUserScramCredentials(51): 0 [usable: 0], + DescribeQuorum(55): 0 to 1 [usable: 1], + AlterPartition(56): UNSUPPORTED, + UpdateFeatures(57): 0 to 1 [usable: 1], + Envelope(58): UNSUPPORTED, + DescribeCluster(60): 0 [usable: 0], + DescribeProducers(61): 0 [usable: 0], + UnregisterBroker(64): 0 [usable: 0], + DescribeTransactions(65): 0 [usable: 0], + ListTransactions(66): 0 [usable: 0], + AllocateProducerIds(67): UNSUPPORTED, + ConsumerGroupHeartbeat(68): UNSUPPORTED +) +``` + +From all the above outputs we can see that the replicas of the combined cluster is `2`. That means we have successfully scaled down the replicas of the Elasticsearch combined cluster. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete es -n demo Elasticsearch-dev +kubectl delete Elasticsearchopsrequest -n demo esops-hscale-up-combined esops-hscale-down-combined +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/Elasticsearch.md). +- Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/_index.md). +- Monitor your Elasticsearch with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/elasticsearch/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Elasticsearch with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/elasticsearch/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/elasticsearch/scaling/horizontal/index.md b/docs/guides/elasticsearch/scaling/horizontal/index.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/guides/elasticsearch/scaling/horizontal/overview.md b/docs/guides/elasticsearch/scaling/horizontal/overview.md new file mode 100644 index 000000000..6864dc39f --- /dev/null +++ b/docs/guides/elasticsearch/scaling/horizontal/overview.md @@ -0,0 +1,54 @@ +--- +title: Elasticsearch Horizontal Scaling Overview +menu: + docs_{{ .version }}: + identifier: kf-horizontal-scaling-overview + name: Overview + parent: kf-horizontal-scaling + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Elasticsearch Horizontal Scaling + +This guide will give an overview on how KubeDB Ops-manager operator scales up or down `Elasticsearch` cluster replicas of various component such as Combined, Broker, Controller. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + +## How Horizontal Scaling Process Works + +The following diagram shows how KubeDB Ops-manager operator scales up or down `Elasticsearch` database components. Open the image in a new tab to see the enlarged version. + +
+  Horizontal scaling process of Elasticsearch +
Fig: Horizontal scaling process of Elasticsearch
+
+ +The Horizontal scaling process consists of the following steps: + +1. At first, a user creates a `Elasticsearch` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `Elasticsearch` CR. + +3. When the operator finds a `Elasticsearch` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to scale the various components of the `Elasticsearch` cluster, the user creates a `ElasticsearchOpsRequest` CR with desired information. + +5. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CR. + +6. When it finds a `ElasticsearchOpsRequest` CR, it halts the `Elasticsearch` object which is referred from the `ElasticsearchOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Elasticsearch` object during the horizontal scaling process. + +7. Then the `KubeDB` Ops-manager operator will scale the related PetSet Pods to reach the expected number of replicas defined in the `ElasticsearchOpsRequest` CR. + +8. After the successfully scaling the replicas of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the number of replicas in the `Elasticsearch` object to reflect the updated state. + +9. After the successful scaling of the `Elasticsearch` replicas, the `KubeDB` Ops-manager operator resumes the `Elasticsearch` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step by step guide on horizontal scaling of Elasticsearch cluster using `ElasticsearchOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/elasticsearch/scaling/vertical/index.md b/docs/guides/elasticsearch/scaling/vertical/index.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/guides/elasticsearch/update-version/_index.md b/docs/guides/elasticsearch/update-version/_index.md index e69de29bb..21778a847 100644 --- a/docs/guides/elasticsearch/update-version/_index.md +++ b/docs/guides/elasticsearch/update-version/_index.md @@ -0,0 +1,10 @@ +--- +title: Elasticsearch Update Version +menu: + docs_{{ .version }}: + identifier: es-updateversion-elasticsearch + name: Update Version + parent: es-elasticsearch-guides + weight: 15 +menu_name: docs_{{ .version }} +--- diff --git a/docs/guides/elasticsearch/update-version/elasticsearch.md b/docs/guides/elasticsearch/update-version/elasticsearch.md index 67c4c1467..e954d62e8 100644 --- a/docs/guides/elasticsearch/update-version/elasticsearch.md +++ b/docs/guides/elasticsearch/update-version/elasticsearch.md @@ -1,10 +1,10 @@ --- -title: Update Version of Elasticsearch +title: Update Version Elasticsearch menu: docs_{{ .version }}: - identifier: es-update-version-Elasticsearch + identifier: es-updateversion-Elasticsearch name: Elasticsearch - parent: es-update-version + parent: es-updateversion-elasticsearch weight: 30 menu_name: docs_{{ .version }} section_menu_id: guides diff --git a/docs/guides/elasticsearch/update-version/overview.md b/docs/guides/elasticsearch/update-version/overview.md index 52dd38b2d..ef9832c9d 100644 --- a/docs/guides/elasticsearch/update-version/overview.md +++ b/docs/guides/elasticsearch/update-version/overview.md @@ -4,7 +4,7 @@ menu: docs_{{ .version }}: identifier: guides-Elasticsearch-updating-overview name: Overview - parent: guides-Elasticsearch-updating + parent: es-updateversion-elasticsearch weight: 10 menu_name: docs_{{ .version }} section_menu_id: guides From 4c581d99812879152793d772a7a3e036b500f612 Mon Sep 17 
00:00:00 2001 From: Bonusree Date: Tue, 11 Nov 2025 12:14:19 +0600 Subject: [PATCH 04/13] restart Signed-off-by: Bonusree --- docs/examples/elasticsearch/restart.yaml | 4 +- docs/guides/elasticsearch/restart/index.md | 177 +++++++++++---------- 2 files changed, 98 insertions(+), 83 deletions(-) diff --git a/docs/examples/elasticsearch/restart.yaml b/docs/examples/elasticsearch/restart.yaml index 5cbbf6a26..154d6b234 100644 --- a/docs/examples/elasticsearch/restart.yaml +++ b/docs/examples/elasticsearch/restart.yaml @@ -6,6 +6,6 @@ metadata: spec: type: Restart databaseRef: - name: es-demo - timeout: 10m + name: es-quickstart + timeout: 3m apply: Always \ No newline at end of file diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md index 6491095d4..b616e9c97 100644 --- a/docs/guides/elasticsearch/restart/index.md +++ b/docs/guides/elasticsearch/restart/index.md @@ -7,16 +7,16 @@ menu: parent: es-elasticsearch-guides weight: 15 menu_name: docs_{{ .version }} +section_menu_id: guides --- - > New to KubeDB? Please start [here](/docs/README.md). # Restart Elasticsearch -KubeDB supports restarting a Elasticsearch database using a `ElasticsearchOpsRequest`. Restarting can be +KubeDB supports restarting an Elasticsearch database using a `ElasticsearchOpsRequest`. Restarting can be useful if some pods are stuck in a certain state or not functioning correctly. -This guide will demonstrate how to restart a Elasticsearch cluster using an OpsRequest. +This guide will demonstrate how to restart an Elasticsearch cluster using an OpsRequest. --- @@ -43,37 +43,37 @@ In this section, we are going to deploy a Elasticsearch database using KubeDB. apiVersion: kubedb.com/v1 kind: Elasticsearch metadata: - name: es-demo + name: es namespace: demo spec: - deletionPolicy: Delete + version: xpack-8.2.3 enableSSL: true replicas: 3 + storageType: Durable storage: + storageClassName: "local-path" accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - storageClassName: local-path - storageType: Durable - version: xpack-9.1.3 + deletionPolicy: WipeOut ``` Let's create the `Elasticsearch` CR we have shown above, ```bash -$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/update-version/elasticsearch.yaml +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch-v1.yaml Elasticsearch.kubedb.com/Elasticsearch created ``` let's wait until all pods are in the `Running` state, ```shell kubectl get pods -n demo -NAME READY STATUS RESTARTS AGE -es-demo-0 1/1 Running 0 6m28s -es-demo-1 1/1 Running 0 6m28s -es-demo-2 1/1 Running 0 6m28s +NAME READY STATUS RESTARTS AGE +es-0 2/2 Running 0 6m28s +es-1 2/2 Running 0 6m28s +es-2 2/2 Running 0 6m28s ``` @@ -89,14 +89,14 @@ metadata: spec: type: Restart databaseRef: - name: es-demo - timeout: 10m + name: es + timeout: 3m apply: Always ``` Here, -- `spec.type` specifies the type of operation (Restart in this case). `Restart` is used to perform a smart restart of the Elasticsearch cluster. +- `spec.type` specifies the type of operation (Restart in this case). - `spec.databaseRef` references the Elasticsearch database. The OpsRequest must be created in the same namespace as the database. 
@@ -107,7 +107,7 @@ Here, Let's create the `ElasticsearchOpsRequest` CR we have shown above, ```bash -$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/restart.yaml +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/restart/yamls/restart.yaml ElasticsearchOpsRequest.ops.kubedb.com/restart created ``` @@ -115,24 +115,24 @@ In a Elasticsearch cluster, all pods act as primary nodes. When you apply a rest Let's watch the rolling restart process with: ```shell -NAME READY STATUS RESTARTS AGE -es-demo-0 1/1 Terminating 0 56m -es-demo-1 1/1 Running 0 55m -es-demo-2 1/1 Running 0 54m +NAME READY STATUS RESTARTS AGE +es-0 2/2 Terminating 0 56m +es-1 2/2 Running 0 55m +es-2 2/2 Running 0 54m ``` ```shell -NAME READY STATUS RESTARTS AGE -es-demo-0 1/1 Running 0 112s -es-demo-1 1/1 Terminating 0 55m -es-demo-2 1/1 Running 0 56m +NAME READY STATUS RESTARTS AGE +es-0 2/2 Running 0 112s +es-1 2/2 Terminating 0 55m +es-2 2/2 Running 0 56m ``` ```shell -NAME READY STATUS RESTARTS AGE -es-demo-0 1/1 Running 0 112s -es-demo-1 1/1 Running 0 42s -es-demo-2 1/1 Terminating 0 56m +NAME READY STATUS RESTARTS AGE +es-0 2/2 Running 0 112s +es-1 2/2 Running 0 42s +es-2 2/2 Terminating 0 56m ``` @@ -147,84 +147,84 @@ kind: ElasticsearchOpsRequest metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | - {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"ElasticsearchOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"es-demo"},"timeout":"10m","type":"Restart"}} - creationTimestamp: "2025-11-06T08:21:19Z" + {"apiVersion":"ops.kubedb.com/v1alpha1","kind":"ElasticsearchOpsRequest","metadata":{"annotations":{},"name":"restart","namespace":"demo"},"spec":{"apply":"Always","databaseRef":{"name":"es-quickstart"},"timeout":"3m","type":"Restart"}} + creationTimestamp: "2025-11-11T05:02:36Z" generation: 1 name: restart namespace: demo - resourceVersion: "613519" - uid: e5eb346e-7ca8-4f1d-9ca5-0869f49a8134 + resourceVersion: "749630" + uid: 52fe9376-cef4-4171-9ca7-8a0d1be902fb spec: apply: Always databaseRef: - name: es-demo - timeout: 10m + name: es-quickstart + timeout: 3m type: Restart status: conditions: - - lastTransitionTime: "2025-11-06T08:21:20Z" + - lastTransitionTime: "2025-11-11T05:02:36Z" message: Elasticsearch ops request is restarting nodes observedGeneration: 1 reason: Restart status: "True" type: Restart - - lastTransitionTime: "2025-11-06T08:21:28Z" - message: pod exists; ConditionStatus:True; PodName:es-demo-0 + - lastTransitionTime: "2025-11-11T05:02:44Z" + message: pod exists; ConditionStatus:True; PodName:es-quickstart-0 observedGeneration: 1 status: "True" - type: PodExists--es-demo-0 - - lastTransitionTime: "2025-11-06T08:21:28Z" - message: create es client; ConditionStatus:True; PodName:es-demo-0 + type: PodExists--es-quickstart-0 + - lastTransitionTime: "2025-11-11T05:02:44Z" + message: create es client; ConditionStatus:True; PodName:es-quickstart-0 observedGeneration: 1 status: "True" - type: CreateEsClient--es-demo-0 - - lastTransitionTime: "2025-11-06T08:21:28Z" - message: evict pod; ConditionStatus:True; PodName:es-demo-0 + type: CreateEsClient--es-quickstart-0 + - lastTransitionTime: "2025-11-11T05:02:44Z" + message: evict pod; ConditionStatus:True; PodName:es-quickstart-0 observedGeneration: 1 status: "True" - type: EvictPod--es-demo-0 - - lastTransitionTime: "2025-11-06T08:22:53Z" + type: 
EvictPod--es-quickstart-0 + - lastTransitionTime: "2025-11-11T05:03:55Z" message: create es client; ConditionStatus:True observedGeneration: 1 status: "True" type: CreateEsClient - - lastTransitionTime: "2025-11-06T08:21:58Z" - message: pod exists; ConditionStatus:True; PodName:es-demo-1 + - lastTransitionTime: "2025-11-11T05:03:09Z" + message: pod exists; ConditionStatus:True; PodName:es-quickstart-1 observedGeneration: 1 status: "True" - type: PodExists--es-demo-1 - - lastTransitionTime: "2025-11-06T08:21:58Z" - message: create es client; ConditionStatus:True; PodName:es-demo-1 + type: PodExists--es-quickstart-1 + - lastTransitionTime: "2025-11-11T05:03:09Z" + message: create es client; ConditionStatus:True; PodName:es-quickstart-1 observedGeneration: 1 status: "True" - type: CreateEsClient--es-demo-1 - - lastTransitionTime: "2025-11-06T08:21:58Z" - message: evict pod; ConditionStatus:True; PodName:es-demo-1 + type: CreateEsClient--es-quickstart-1 + - lastTransitionTime: "2025-11-11T05:03:09Z" + message: evict pod; ConditionStatus:True; PodName:es-quickstart-1 observedGeneration: 1 status: "True" - type: EvictPod--es-demo-1 - - lastTransitionTime: "2025-11-06T08:22:28Z" - message: pod exists; ConditionStatus:True; PodName:es-demo-2 + type: EvictPod--es-quickstart-1 + - lastTransitionTime: "2025-11-11T05:03:34Z" + message: pod exists; ConditionStatus:True; PodName:es-quickstart-2 observedGeneration: 1 status: "True" - type: PodExists--es-demo-2 - - lastTransitionTime: "2025-11-06T08:22:28Z" - message: create es client; ConditionStatus:True; PodName:es-demo-2 + type: PodExists--es-quickstart-2 + - lastTransitionTime: "2025-11-11T05:03:34Z" + message: create es client; ConditionStatus:True; PodName:es-quickstart-2 observedGeneration: 1 status: "True" - type: CreateEsClient--es-demo-2 - - lastTransitionTime: "2025-11-06T08:22:28Z" - message: evict pod; ConditionStatus:True; PodName:es-demo-2 + type: CreateEsClient--es-quickstart-2 + - lastTransitionTime: "2025-11-11T05:03:34Z" + message: evict pod; ConditionStatus:True; PodName:es-quickstart-2 observedGeneration: 1 status: "True" - type: EvictPod--es-demo-2 - - lastTransitionTime: "2025-11-06T08:22:58Z" + type: EvictPod--es-quickstart-2 + - lastTransitionTime: "2025-11-11T05:03:59Z" message: Successfully restarted all nodes observedGeneration: 1 reason: RestartNodes status: "True" type: RestartNodes - - lastTransitionTime: "2025-11-06T08:22:58Z" + - lastTransitionTime: "2025-11-11T05:03:59Z" message: Successfully completed the modification process. observedGeneration: 1 reason: Successful @@ -237,52 +237,67 @@ status: **Verify Data Persistence** After the restart, reconnect to the database and verify that the previously created database still exists: -Connect to the Cluster: +Let's port-forward the port `9200` to local machine: ```bash -# Port-forward the service to local machine -$ kubectl port-forward -n demo svc/es-standalone 9200 +$ kubectl port-forward -n demo svc/es-quickstart 9200 Forwarding from 127.0.0.1:9200 -> 9200 Forwarding from [::1]:9200 -> 9200 ``` +Now, our Elasticsearch cluster is accessible at `localhost:9200`. 
+ +**Connection information:** + +- Address: `localhost:9200` +- Username: + ```bash -# Get admin username & password from k8s secret -$ kubectl get secret -n demo es-standalone-admin-cred -o jsonpath='{.data.username}' | base64 -d +$ kubectl get secret -n demo es-quickstart-elastic-cred -o jsonpath='{.data.username}' | base64 -d elastic -$ kubectl get secret -n demo es-standalone-admin-cred -o jsonpath='{.data.password}' | base64 -d -d9QpQKiTcLNZx_gA +``` + +- Password: + +```bash +$ kubectl get secret -n demo es-quickstart-elastic-cred -o jsonpath='{.data.password}' | base64 -d +vIHoIfHn=!Z8F4gP +``` + +Now let's check the health of our Elasticsearch database. + +```bash +$ curl -XGET -k -u 'elastic:vIHoIfHn=!Z8F4gP' "https://localhost:9200/_cluster/health?pretty" -# Check cluster health -$ curl -XGET -k -u "elastic:d9QpQKiTcLNZx_gA" "https://localhost:9200/_cluster/health?pretty" { - "cluster_name" : "es-demo", + "cluster_name" : "es-quickstart", "status" : "green", "timed_out" : false, "number_of_nodes" : 3, "number_of_data_nodes" : 3, - "active_primary_shards" : 4, - "active_shards" : 8, + "active_primary_shards" : 3, + "active_shards" : 6, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 0, - "unassigned_primary_shards" : 0, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 100.0 } - ``` +From the health information above, we can see that our Elasticsearch cluster's status is `green` which means the cluster is healthy. + + ## Cleaning up To clean up the Kubernetes resources created by this tutorial, run: ```bash kubectl delete Elasticsearchopsrequest -n demo restart -kubectl delete Elasticsearch -n demo es-demo +kubectl delete Elasticsearch -n demo es kubectl delete ns demo ``` From 49ed6c654099845918d3f3c08fba32f1de018780 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Thu, 13 Nov 2025 17:10:41 +0600 Subject: [PATCH 05/13] horizontal scale Signed-off-by: Bonusree --- .../Elasticsearch-hscale-down-combined.yaml | 11 + .../Elasticsearch-hscale-up-combined.yaml | 11 + docs/guides/elasticsearch/restart/index.md | 2 +- .../scaling/horizontal/combined.md | 837 ++++-------------- .../scaling/horizontal/topology/_index.md | 0 .../scaling/horizontal/topology/hotwarm.md | 0 .../scaling/horizontal/topology/simple.md | 0 7 files changed, 199 insertions(+), 662 deletions(-) create mode 100644 docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-combined.yaml create mode 100644 docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-combined.yaml create mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology/_index.md create mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology/hotwarm.md create mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology/simple.md diff --git a/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-combined.yaml b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-combined.yaml new file mode 100644 index 000000000..b977a3e58 --- /dev/null +++ b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-combined.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-hscale-down-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es + horizontalScaling: + node: 2 \ No newline at end of file diff --git 
a/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-combined.yaml b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-combined.yaml new file mode 100644 index 000000000..66b5d466d --- /dev/null +++ b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-combined.yaml @@ -0,0 +1,11 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: es-combined + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es + horizontalScaling: + node: 3 \ No newline at end of file diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md index b616e9c97..2d9b219c4 100644 --- a/docs/guides/elasticsearch/restart/index.md +++ b/docs/guides/elasticsearch/restart/index.md @@ -64,7 +64,7 @@ Let's create the `Elasticsearch` CR we have shown above, ```bash $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/guides/elasticsearch/quickstart/overview/elasticsearch/yamls/elasticsearch-v1.yaml -Elasticsearch.kubedb.com/Elasticsearch created +Elasticsearch.kubedb.com/es created ``` let's wait until all pods are in the `Running` state, diff --git a/docs/guides/elasticsearch/scaling/horizontal/combined.md b/docs/guides/elasticsearch/scaling/horizontal/combined.md index 5763aabca..2c2c43770 100644 --- a/docs/guides/elasticsearch/scaling/horizontal/combined.md +++ b/docs/guides/elasticsearch/scaling/horizontal/combined.md @@ -53,189 +53,85 @@ In this section, we are going to deploy a Elasticsearch combined cluster. Then, apiVersion: kubedb.com/v1 kind: Elasticsearch metadata: - name: Elasticsearch-dev + name: es namespace: demo spec: + version: xpack-9.1.4 + enableSSL: true replicas: 2 - version: 3.9.0 + storageType: Durable storage: + storageClassName: "local-path" accessModes: - ReadWriteOnce resources: requests: storage: 1Gi - storageClassName: standard - storageType: Durable deletionPolicy: WipeOut ``` Let's create the `Elasticsearch` CR we have shown above, ```bash -$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/scaling/Elasticsearch-combined.yaml -Elasticsearch.kubedb.com/Elasticsearch-dev created +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/overview/quickstart/elasticsearch/yamls/elasticsearch-v1.yaml +Elasticsearch.kubedb.com/es created ``` -Now, wait until `Elasticsearch-dev` has status `Ready`. i.e, +Now, wait until `es` has status `Ready`. i.e, ```bash -$ kubectl get es -n demo -w -NAME TYPE VERSION STATUS AGE -Elasticsearch-dev kubedb.com/v1 3.9.0 Provisioning 0s -Elasticsearch-dev kubedb.com/v1 3.9.0 Provisioning 24s -. -. -Elasticsearch-dev kubedb.com/v1 3.9.0 Ready 92s +$ kubectl get es -n demo +NAME VERSION STATUS AGE +es xpack-9.1.4 Ready 3m53s ``` Let's check the number of replicas has from Elasticsearch object, number of pods the petset have, ```bash -$ kubectl get Elasticsearch -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +$ kubectl get elasticsearch -n demo es -o json | jq '.spec.replicas' 2 - -$ kubectl get petset -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +$ kubectl get petsets -n demo es -o json | jq '.spec.replicas' 2 + ``` We can see from both command that the cluster has 2 replicas. Also, we can verify the replicas of the combined from an internal Elasticsearch command by exec into a replica. 
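You can also confirm the node count from the Elasticsearch REST API itself. The sketch below assumes the `es-auth` secret holds the `elastic` superuser credentials for the `es` cluster and that TLS is enabled as in the CR above:

```bash
# In one terminal: forward the HTTP port of the `es` service to your workstation
kubectl port-forward -n demo svc/es 9200

# In another terminal: read the elastic user's password and list the cluster nodes
PASSWORD=$(kubectl get secret -n demo es-auth -o jsonpath='{.data.password}' | base64 -d)
curl -sk -u "elastic:${PASSWORD}" "https://localhost:9200/_cat/nodes?v"
```

Each row returned by `_cat/nodes` corresponds to one Elasticsearch node, so the number of rows should match the replica count shown above.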
-Now let's exec to a instance and run a Elasticsearch internal command to check the number of replicas, +Now lets check the number of replicas, ```bash -$ kubectl exec -it -n demo Elasticsearch-dev-0 -- Elasticsearch-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties -Elasticsearch-dev-0.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 [usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) -Elasticsearch-dev-1.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 
0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 [usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) +$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es' +NAME READY STATUS RESTARTS AGE +pod/es-0 1/1 Running 0 5m +pod/es-1 1/1 Running 0 4m54s + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/es ClusterIP 10.43.72.228 9200/TCP 5m5s +service/es-master ClusterIP None 9300/TCP 5m5s +service/es-pods ClusterIP None 9200/TCP 5m5s + +NAME TYPE VERSION AGE +appbinding.appcatalog.appscode.com/es kubedb.com/elasticsearch 9.1.4 5m2s + +NAME TYPE DATA AGE +secret/es-apm-system-cred kubernetes.io/basic-auth 2 5m4s +secret/es-auth kubernetes.io/basic-auth 2 5m8s +secret/es-beats-system-cred kubernetes.io/basic-auth 2 5m4s +secret/es-ca-cert kubernetes.io/tls 2 5m9s +secret/es-client-cert kubernetes.io/tls 3 5m8s +secret/es-config Opaque 1 5m8s +secret/es-http-cert kubernetes.io/tls 3 5m8s +secret/es-kibana-system-cred kubernetes.io/basic-auth 2 5m4s +secret/es-logstash-system-cred kubernetes.io/basic-auth 2 5m4s +secret/es-remote-monitoring-user-cred kubernetes.io/basic-auth 2 5m4s +secret/es-transport-cert kubernetes.io/tls 3 5m8s + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE +persistentvolumeclaim/data-es-0 Bound pvc-7c8cc17d-7427-4411-9262-f213e826540b 1Gi RWO local-path 5m5s +persistentvolumeclaim/data-es-1 Bound pvc-f2cf7ac9-b0c2-4c44-93dc-476cc06c25b4 1Gi RWO local-path 4m59s + ``` We can see from the above output that the Elasticsearch 
has 2 nodes. @@ -259,21 +155,21 @@ metadata: spec: type: HorizontalScaling databaseRef: - name: Elasticsearch-dev + name: es horizontalScaling: node: 3 ``` Here, -- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `Elasticsearch-dev` cluster. +- `spec.databaseRef.name` specifies that we are performing horizontal scaling operation on `es` cluster. - `spec.type` specifies that we are performing `HorizontalScaling` on Elasticsearch. - `spec.horizontalScaling.node` specifies the desired replicas after scaling. Let's create the `ElasticsearchOpsRequest` CR we have shown above, ```bash -$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/scaling/horizontal-scaling/Elasticsearch-hscale-up-combined.yaml +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/scaling/horizontal/Elasticsearch-hscale-up-combined.yaml Elasticsearchopsrequest.ops.kubedb.com/esops-hscale-up-combined created ``` @@ -284,9 +180,9 @@ If everything goes well, `KubeDB` Ops-manager operator will update the replicas Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, ```bash -$ watch kubectl get Elasticsearchopsrequest -n demo -NAME TYPE STATUS AGE -esops-hscale-up-combined HorizontalScaling Successful 106s +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +esops-hscale-up-combined HorizontalScaling Successful 2m42s ``` We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. @@ -300,79 +196,49 @@ Annotations: API Version: ops.kubedb.com/v1alpha1 Kind: ElasticsearchOpsRequest Metadata: - Creation Timestamp: 2024-08-02T10:19:56Z + Creation Timestamp: 2025-11-13T10:25:18Z Generation: 1 - Resource Version: 353093 - UID: f91de2da-82c4-4175-aab4-de0f3e1ce498 + Resource Version: 810747 + UID: 29134aef-1379-4e4f-91c8-23b1cf74c784 Spec: Apply: IfReady Database Ref: - Name: Elasticsearch-dev + Name: es Horizontal Scaling: Node: 3 Type: HorizontalScaling Status: Conditions: - Last Transition Time: 2024-08-02T10:19:57Z - Message: Elasticsearch ops-request has started to horizontally scaling the nodes - Observed Generation: 1 - Reason: HorizontalScaling - Status: True - Type: HorizontalScaling - Last Transition Time: 2024-08-02T10:20:05Z - Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Observed Generation: 1 - Status: True - Type: GetPod--Elasticsearch-dev-0 - Last Transition Time: 2024-08-02T10:20:05Z - Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Observed Generation: 1 - Status: True - Type: EvictPod--Elasticsearch-dev-0 - Last Transition Time: 2024-08-02T10:20:15Z - Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Observed Generation: 1 - Status: True - Type: CheckPodRunning--Elasticsearch-dev-0 - Last Transition Time: 2024-08-02T10:20:20Z - Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Observed Generation: 1 - Status: True - Type: GetPod--Elasticsearch-dev-1 - Last Transition Time: 2024-08-02T10:20:20Z - Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Last Transition Time: 2025-11-13T10:25:58Z + Message: Elasticsearch ops request is horizontally scaling the nodes. 
Observed Generation: 1 + Reason: HorizontalScale Status: True - Type: EvictPod--Elasticsearch-dev-1 - Last Transition Time: 2024-08-02T10:21:00Z - Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Type: HorizontalScale + Last Transition Time: 2025-11-13T10:26:06Z + Message: patch pet set; ConditionStatus:True Observed Generation: 1 Status: True - Type: CheckPodRunning--Elasticsearch-dev-1 - Last Transition Time: 2024-08-02T10:21:05Z - Message: Successfully restarted all nodes + Type: PatchPetSet + Last Transition Time: 2025-11-13T10:26:26Z + Message: is node in cluster; ConditionStatus:True Observed Generation: 1 - Reason: RestartNodes Status: True - Type: RestartNodes - Last Transition Time: 2024-08-02T10:22:15Z - Message: Successfully Scaled Up Server Node + Type: IsNodeInCluster + Last Transition Time: 2025-11-13T10:26:31Z + Message: ScaleUp es nodes Observed Generation: 1 - Reason: ScaleUpCombined + Reason: HorizontalScaleCombinedNode Status: True - Type: ScaleUpCombined - Last Transition Time: 2024-08-02T10:21:10Z - Message: patch pet setElasticsearch-dev; ConditionStatus:True + Type: HorizontalScaleCombinedNode + Last Transition Time: 2025-11-13T10:26:36Z + Message: successfully updated Elasticsearch CR Observed Generation: 1 + Reason: UpdateDatabase Status: True - Type: PatchPetSetElasticsearch-dev - Last Transition Time: 2024-08-02T10:22:10Z - Message: node in cluster; ConditionStatus:True - Observed Generation: 1 - Status: True - Type: NodeInCluster - Last Transition Time: 2024-08-02T10:22:15Z - Message: Successfully completed horizontally scale Elasticsearch cluster + Type: UpdateDatabase + Last Transition Time: 2025-11-13T10:26:36Z + Message: Successfully Horizontally Scaled. Observed Generation: 1 Reason: Successful Status: True @@ -380,237 +246,32 @@ Status: Observed Generation: 1 Phase: Successful Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal Starting 4m34s KubeDB Ops-manager Operator Start processing for ElasticsearchOpsRequest: demo/esops-hscale-up-combined - Normal Starting 4m34s KubeDB Ops-manager Operator Pausing Elasticsearch databse: demo/Elasticsearch-dev - Normal Successful 4m34s KubeDB Ops-manager Operator Successfully paused Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-up-combined - Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 4m26s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 4m26s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 4m21s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 - Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 4m16s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 4m11s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 4m11s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 4m6s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; 
PodName:Elasticsearch-dev-1 - Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 3m31s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Normal RestartNodes 3m26s KubeDB Ops-manager Operator Successfully restarted all nodes - Warning patch pet setElasticsearch-dev; ConditionStatus:True 3m21s KubeDB Ops-manager Operator patch pet setElasticsearch-dev; ConditionStatus:True - Warning node in cluster; ConditionStatus:False 2m46s KubeDB Ops-manager Operator node in cluster; ConditionStatus:False - Warning node in cluster; ConditionStatus:True 2m21s KubeDB Ops-manager Operator node in cluster; ConditionStatus:True - Normal ScaleUpCombined 2m16s KubeDB Ops-manager Operator Successfully Scaled Up Server Node - Normal Starting 2m16s KubeDB Ops-manager Operator Resuming Elasticsearch database: demo/Elasticsearch-dev - Normal Successful 2m16s KubeDB Ops-manager Operator Successfully resumed Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-up-combined + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m54s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es + Warning patch pet set; ConditionStatus:True 2m46s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning is node in cluster; ConditionStatus:False 2m41s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:False + Warning is node in cluster; ConditionStatus:True 2m26s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:True + Normal HorizontalScaleCombinedNode 2m21s KubeDB Ops-manager Operator ScaleUp es nodes + Normal UpdateDatabase 2m16s KubeDB Ops-manager Operator successfully updated Elasticsearch CR + Normal ResumeDatabase 2m16s KubeDB Ops-manager Operator Resuming Elasticsearch demo/es + Normal ResumeDatabase 2m16s KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es + Normal Successful 2m16s KubeDB Ops-manager Operator Successfully Horizontally Scaled Database +bonusree@bonusree-HP-ProBook-450-G4 ~> + ``` Now, we are going to verify the number of replicas this cluster has from the Elasticsearch object, number of pods the petset have, ```bash -$ kubectl get Elasticsearch -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +$ kubectl get Elasticsearch -n demo es -o json | jq '.spec.replicas' 3 -$ kubectl get petset -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +$ kubectl get petset -n demo es -o json | jq '.spec.replicas' 3 ``` -Now let's connect to a Elasticsearch instance and run a Elasticsearch internal command to check the number of replicas, -```bash -$ kubectl exec -it -n demo Elasticsearch-dev-0 -- Elasticsearch-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties -Elasticsearch-dev-0.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 
4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 [usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) -Elasticsearch-dev-1.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - 
CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 [usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) -Elasticsearch-dev-2.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 2 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 
[usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) -``` + From all the above outputs we can see that the brokers of the combined Elasticsearch is `3`. That means we have successfully scaled up the replicas of the Elasticsearch combined cluster. @@ -631,21 +292,21 @@ metadata: spec: type: HorizontalScaling databaseRef: - name: Elasticsearch-dev + name: es horizontalScaling: node: 2 ``` Here, -- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `Elasticsearch-dev` cluster. +- `spec.databaseRef.name` specifies that we are performing horizontal scaling down operation on `es` cluster. - `spec.type` specifies that we are performing `HorizontalScaling` on Elasticsearch. - `spec.horizontalScaling.node` specifies the desired replicas after scaling. Let's create the `ElasticsearchOpsRequest` CR we have shown above, ```bash -$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/scaling/horizontal-scaling/Elasticsearch-hscale-down-combined.yaml +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/scaling/horizontal/Elasticsearch-hscale-down-combined.yaml Elasticsearchopsrequest.ops.kubedb.com/esops-hscale-down-combined created ``` @@ -656,15 +317,15 @@ If everything goes well, `KubeDB` Ops-manager operator will update the replicas Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, ```bash -$ watch kubectl get Elasticsearchopsrequest -n demo -NAME TYPE STATUS AGE -esops-hscale-down-combined HorizontalScaling Successful 2m32s +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +esops-hscale-down-combined HorizontalScaling Successful 76s ``` We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. ```bash -$ kubectl describe Elasticsearchopsrequests -n demo esops-hscale-down-combined +$ kubectl describe Elasticsearchopsrequests -n demo esops-hscale-down-combined Name: esops-hscale-down-combined Namespace: demo Labels: @@ -672,100 +333,94 @@ Annotations: API Version: ops.kubedb.com/v1alpha1 Kind: ElasticsearchOpsRequest Metadata: - Creation Timestamp: 2024-08-02T10:46:39Z + Creation Timestamp: 2025-11-13T10:46:22Z Generation: 1 - Resource Version: 354924 - UID: f1a0b85d-1a86-463c-a3e4-72947badd108 + Resource Version: 811301 + UID: 558530d7-5d02-4757-b459-476129b411d6 Spec: Apply: IfReady Database Ref: - Name: Elasticsearch-dev + Name: es Horizontal Scaling: Node: 2 Type: HorizontalScaling Status: Conditions: - Last Transition Time: 2024-08-02T10:46:39Z - Message: Elasticsearch ops-request has started to horizontally scaling the nodes + Last Transition Time: 2025-11-13T10:46:22Z + Message: Elasticsearch ops request is horizontally scaling the nodes. 
Observed Generation: 1 - Reason: HorizontalScaling + Reason: HorizontalScale Status: True - Type: HorizontalScaling - Last Transition Time: 2024-08-02T10:47:07Z - Message: Successfully Scaled Down Server Node + Type: HorizontalScale + Last Transition Time: 2025-11-13T10:46:30Z + Message: create es client; ConditionStatus:True Observed Generation: 1 - Reason: ScaleDownCombined Status: True - Type: ScaleDownCombined - Last Transition Time: 2024-08-02T10:46:57Z - Message: reassign partitions; ConditionStatus:True + Type: CreateEsClient + Last Transition Time: 2025-11-13T10:46:30Z + Message: get voting config exclusion; ConditionStatus:True Observed Generation: 1 Status: True - Type: ReassignPartitions - Last Transition Time: 2024-08-02T10:46:57Z - Message: is pet set patched; ConditionStatus:True + Type: GetVotingConfigExclusion + Last Transition Time: 2025-11-13T10:46:31Z + Message: exclude node allocation; ConditionStatus:True Observed Generation: 1 Status: True - Type: IsPetSetPatched - Last Transition Time: 2024-08-02T10:46:57Z - Message: get pod; ConditionStatus:True - Observed Generation: 1 - Status: True - Type: GetPod - Last Transition Time: 2024-08-02T10:46:58Z - Message: delete pvc; ConditionStatus:True + Type: ExcludeNodeAllocation + Last Transition Time: 2025-11-13T10:46:31Z + Message: get used data nodes; ConditionStatus:True Observed Generation: 1 Status: True - Type: DeletePvc - Last Transition Time: 2024-08-02T10:47:02Z - Message: get pvc; ConditionStatus:True + Type: GetUsedDataNodes + Last Transition Time: 2025-11-13T10:46:31Z + Message: move data; ConditionStatus:True Observed Generation: 1 Status: True - Type: GetPvc - Last Transition Time: 2024-08-02T10:47:13Z - Message: successfully reconciled the Elasticsearch with modified node + Type: MoveData + Last Transition Time: 2025-11-13T10:46:31Z + Message: patch pet set; ConditionStatus:True Observed Generation: 1 - Reason: UpdatePetSets Status: True - Type: UpdatePetSets - Last Transition Time: 2024-08-02T10:47:18Z - Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Type: PatchPetSet + Last Transition Time: 2025-11-13T10:46:35Z + Message: get pod; ConditionStatus:True Observed Generation: 1 Status: True - Type: GetPod--Elasticsearch-dev-0 - Last Transition Time: 2024-08-02T10:47:18Z - Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Type: GetPod + Last Transition Time: 2025-11-13T10:46:35Z + Message: delete pvc; ConditionStatus:True Observed Generation: 1 Status: True - Type: EvictPod--Elasticsearch-dev-0 - Last Transition Time: 2024-08-02T10:47:28Z - Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 + Type: DeletePvc + Last Transition Time: 2025-11-13T10:46:40Z + Message: get pvc; ConditionStatus:True Observed Generation: 1 Status: True - Type: CheckPodRunning--Elasticsearch-dev-0 - Last Transition Time: 2024-08-02T10:47:33Z - Message: get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Type: GetPvc + Last Transition Time: 2025-11-13T10:46:45Z + Message: delete voting config exclusion; ConditionStatus:True Observed Generation: 1 Status: True - Type: GetPod--Elasticsearch-dev-1 - Last Transition Time: 2024-08-02T10:47:33Z - Message: evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Type: DeleteVotingConfigExclusion + Last Transition Time: 2025-11-13T10:46:45Z + Message: delete node allocation exclusion; ConditionStatus:True Observed Generation: 1 Status: True - Type: EvictPod--Elasticsearch-dev-1 - Last Transition Time: 
2024-08-02T10:48:53Z - Message: check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 + Type: DeleteNodeAllocationExclusion + Last Transition Time: 2025-11-13T10:46:45Z + Message: ScaleDown es nodes Observed Generation: 1 + Reason: HorizontalScaleCombinedNode Status: True - Type: CheckPodRunning--Elasticsearch-dev-1 - Last Transition Time: 2024-08-02T10:48:58Z - Message: Successfully restarted all nodes + Type: HorizontalScaleCombinedNode + Last Transition Time: 2025-11-13T10:46:51Z + Message: successfully updated Elasticsearch CR Observed Generation: 1 - Reason: RestartNodes + Reason: UpdateDatabase Status: True - Type: RestartNodes - Last Transition Time: 2024-08-02T10:48:58Z - Message: Successfully completed horizontally scale Elasticsearch cluster + Type: UpdateDatabase + Last Transition Time: 2025-11-13T10:46:51Z + Message: Successfully Horizontally Scaled. Observed Generation: 1 Reason: Successful Status: True @@ -773,179 +428,41 @@ Status: Observed Generation: 1 Phase: Successful Events: - Type Reason Age From Message - ---- ------ ---- ---- ------- - Normal Starting 2m39s KubeDB Ops-manager Operator Start processing for ElasticsearchOpsRequest: demo/esops-hscale-down-combined - Normal Starting 2m39s KubeDB Ops-manager Operator Pausing Elasticsearch databse: demo/Elasticsearch-dev - Normal Successful 2m39s KubeDB Ops-manager Operator Successfully paused Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-down-combined - Warning reassign partitions; ConditionStatus:True 2m21s KubeDB Ops-manager Operator reassign partitions; ConditionStatus:True - Warning is pet set patched; ConditionStatus:True 2m21s KubeDB Ops-manager Operator is pet set patched; ConditionStatus:True - Warning get pod; ConditionStatus:True 2m21s KubeDB Ops-manager Operator get pod; ConditionStatus:True - Warning delete pvc; ConditionStatus:True 2m20s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True - Warning get pvc; ConditionStatus:False 2m20s KubeDB Ops-manager Operator get pvc; ConditionStatus:False - Warning get pod; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pod; ConditionStatus:True - Warning delete pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True - Warning get pvc; ConditionStatus:True 2m16s KubeDB Ops-manager Operator get pvc; ConditionStatus:True - Normal ScaleDownCombined 2m11s KubeDB Ops-manager Operator Successfully Scaled Down Server Node - Normal UpdatePetSets 2m5s KubeDB Ops-manager Operator successfully reconciled the Elasticsearch with modified node - Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 2m KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 2m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 115s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-0 - Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 110s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-0 - Warning get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 105s KubeDB Ops-manager Operator get pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Warning evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 105s KubeDB Ops-manager 
Operator evict pod; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Warning check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 100s KubeDB Ops-manager Operator check pod running; ConditionStatus:False; PodName:Elasticsearch-dev-1 - Warning check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 25s KubeDB Ops-manager Operator check pod running; ConditionStatus:True; PodName:Elasticsearch-dev-1 - Normal RestartNodes 20s KubeDB Ops-manager Operator Successfully restarted all nodes - Normal Starting 20s KubeDB Ops-manager Operator Resuming Elasticsearch database: demo/Elasticsearch-dev - Normal Successful 20s KubeDB Ops-manager Operator Successfully resumed Elasticsearch database: demo/Elasticsearch-dev for ElasticsearchOpsRequest: esops-hscale-down-combined + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 112s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es + Warning create es client; ConditionStatus:True 104s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning get voting config exclusion; ConditionStatus:True 104s KubeDB Ops-manager Operator get voting config exclusion; ConditionStatus:True + Warning exclude node allocation; ConditionStatus:True 103s KubeDB Ops-manager Operator exclude node allocation; ConditionStatus:True + Warning get used data nodes; ConditionStatus:True 103s KubeDB Ops-manager Operator get used data nodes; ConditionStatus:True + Warning move data; ConditionStatus:True 103s KubeDB Ops-manager Operator move data; ConditionStatus:True + Warning patch pet set; ConditionStatus:True 103s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning get pod; ConditionStatus:True 99s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 99s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 99s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 94s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 94s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 94s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning create es client; ConditionStatus:True 89s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning delete voting config exclusion; ConditionStatus:True 89s KubeDB Ops-manager Operator delete voting config exclusion; ConditionStatus:True + Warning delete node allocation exclusion; ConditionStatus:True 89s KubeDB Ops-manager Operator delete node allocation exclusion; ConditionStatus:True + Normal HorizontalScaleCombinedNode 89s KubeDB Ops-manager Operator ScaleDown es nodes + Normal UpdateDatabase 83s KubeDB Ops-manager Operator successfully updated Elasticsearch CR + Normal ResumeDatabase 83s KubeDB Ops-manager Operator Resuming Elasticsearch demo/es + Normal ResumeDatabase 83s KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es + Normal Successful 83s KubeDB Ops-manager Operator Successfully Horizontally Scaled Database ``` Now, we are going to verify the number of replicas this cluster has from the Elasticsearch object, number of pods the petset have, ```bash -$ kubectl get Elasticsearch -n demo Elasticsearch-dev -o json | jq '.spec.replicas' +$ kubectl get Elasticsearch -n demo es -o json | jq '.spec.replicas' 2 -$ kubectl get petset -n demo Elasticsearch-dev 
-o json | jq '.spec.replicas' +$ kubectl get petset -n demo es -o json | jq '.spec.replicas' 2 ``` -Now let's connect to a Elasticsearch instance and run a Elasticsearch internal command to check the number of replicas, - -```bash -$ kubectl exec -it -n demo Elasticsearch-dev-0 -- Elasticsearch-broker-api-versions.sh --bootstrap-server localhost:9092 --command-config config/clientauth.properties -Elasticsearch-dev-0.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 0 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 [usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) -Elasticsearch-dev-1.Elasticsearch-dev-pods.demo.svc.cluster.local:9092 (id: 1 rack: null) -> ( - Produce(0): 0 to 9 [usable: 9], - Fetch(1): 0 to 15 [usable: 15], - ListOffsets(2): 0 to 8 [usable: 8], - Metadata(3): 0 to 12 [usable: 12], - LeaderAndIsr(4): UNSUPPORTED, - StopReplica(5): UNSUPPORTED, - UpdateMetadata(6): UNSUPPORTED, - ControlledShutdown(7): UNSUPPORTED, - OffsetCommit(8): 0 to 8 [usable: 8], - OffsetFetch(9): 0 to 8 [usable: 8], - 
FindCoordinator(10): 0 to 4 [usable: 4], - JoinGroup(11): 0 to 9 [usable: 9], - Heartbeat(12): 0 to 4 [usable: 4], - LeaveGroup(13): 0 to 5 [usable: 5], - SyncGroup(14): 0 to 5 [usable: 5], - DescribeGroups(15): 0 to 5 [usable: 5], - ListGroups(16): 0 to 4 [usable: 4], - SaslHandshake(17): 0 to 1 [usable: 1], - ApiVersions(18): 0 to 3 [usable: 3], - CreateTopics(19): 0 to 7 [usable: 7], - DeleteTopics(20): 0 to 6 [usable: 6], - DeleteRecords(21): 0 to 2 [usable: 2], - InitProducerId(22): 0 to 4 [usable: 4], - OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], - AddPartitionsToTxn(24): 0 to 4 [usable: 4], - AddOffsetsToTxn(25): 0 to 3 [usable: 3], - EndTxn(26): 0 to 3 [usable: 3], - WriteTxnMarkers(27): 0 to 1 [usable: 1], - TxnOffsetCommit(28): 0 to 3 [usable: 3], - DescribeAcls(29): 0 to 3 [usable: 3], - CreateAcls(30): 0 to 3 [usable: 3], - DeleteAcls(31): 0 to 3 [usable: 3], - DescribeConfigs(32): 0 to 4 [usable: 4], - AlterConfigs(33): 0 to 2 [usable: 2], - AlterReplicaLogDirs(34): 0 to 2 [usable: 2], - DescribeLogDirs(35): 0 to 4 [usable: 4], - SaslAuthenticate(36): 0 to 2 [usable: 2], - CreatePartitions(37): 0 to 3 [usable: 3], - CreateDelegationToken(38): 0 to 3 [usable: 3], - RenewDelegationToken(39): 0 to 2 [usable: 2], - ExpireDelegationToken(40): 0 to 2 [usable: 2], - DescribeDelegationToken(41): 0 to 3 [usable: 3], - DeleteGroups(42): 0 to 2 [usable: 2], - ElectLeaders(43): 0 to 2 [usable: 2], - IncrementalAlterConfigs(44): 0 to 1 [usable: 1], - AlterPartitionReassignments(45): 0 [usable: 0], - ListPartitionReassignments(46): 0 [usable: 0], - OffsetDelete(47): 0 [usable: 0], - DescribeClientQuotas(48): 0 to 1 [usable: 1], - AlterClientQuotas(49): 0 to 1 [usable: 1], - DescribeUserScramCredentials(50): 0 [usable: 0], - AlterUserScramCredentials(51): 0 [usable: 0], - DescribeQuorum(55): 0 to 1 [usable: 1], - AlterPartition(56): UNSUPPORTED, - UpdateFeatures(57): 0 to 1 [usable: 1], - Envelope(58): UNSUPPORTED, - DescribeCluster(60): 0 [usable: 0], - DescribeProducers(61): 0 [usable: 0], - UnregisterBroker(64): 0 [usable: 0], - DescribeTransactions(65): 0 [usable: 0], - ListTransactions(66): 0 [usable: 0], - AllocateProducerIds(67): UNSUPPORTED, - ConsumerGroupHeartbeat(68): UNSUPPORTED -) -``` From all the above outputs we can see that the replicas of the combined cluster is `2`. That means we have successfully scaled down the replicas of the Elasticsearch combined cluster. @@ -954,16 +471,14 @@ From all the above outputs we can see that the replicas of the combined cluster To clean up the Kubernetes resources created by this tutorial, run: ```bash -kubectl delete es -n demo Elasticsearch-dev +kubectl delete es -n demo es kubectl delete Elasticsearchopsrequest -n demo esops-hscale-up-combined esops-hscale-down-combined kubectl delete ns demo ``` ## Next Steps -- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/Elasticsearch.md). +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch/index.md). - Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/_index.md). - Monitor your Elasticsearch with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/elasticsearch/monitoring/using-prometheus-operator.md). - -[//]: # (- Monitor your Elasticsearch with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/elasticsearch/monitoring/using-builtin-prometheus.md).) - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology/_index.md b/docs/guides/elasticsearch/scaling/horizontal/topology/_index.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology/hotwarm.md b/docs/guides/elasticsearch/scaling/horizontal/topology/hotwarm.md new file mode 100644 index 000000000..e69de29bb diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology/simple.md b/docs/guides/elasticsearch/scaling/horizontal/topology/simple.md new file mode 100644 index 000000000..e69de29bb From 77c60f0edca6221a42ed6029a05ec5da3ac54eca Mon Sep 17 00:00:00 2001 From: Bonusree Date: Tue, 18 Nov 2025 15:55:51 +0600 Subject: [PATCH 06/13] topology Signed-off-by: Bonusree --- .../Elasticsearch-hscale-down-Topology.yaml | 14 + .../Elasticsearch-hscale-up-Topology.yaml | 14 + .../scalling/horizontal/topology.yaml | 37 ++ .../scaling/horizontal/overview.md | 11 +- .../scaling/horizontal/topology.md | 625 ++++++++++++++++++ .../scaling/horizontal/topology/_index.md | 0 .../scaling/horizontal/topology/hotwarm.md | 0 .../scaling/horizontal/topology/simple.md | 0 .../elasticsearch/update-version/overview.md | 6 +- 9 files changed, 700 insertions(+), 7 deletions(-) create mode 100644 docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-Topology.yaml create mode 100644 docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-Topology.yaml create mode 100644 docs/examples/elasticsearch/scalling/horizontal/topology.yaml create mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology.md delete mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology/_index.md delete mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology/hotwarm.md delete mode 100644 docs/guides/elasticsearch/scaling/horizontal/topology/simple.md diff --git a/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-Topology.yaml b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-Topology.yaml new file mode 100644 index 000000000..62dd2a113 --- /dev/null +++ b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-down-Topology.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es-hscale-topology + horizontalScaling: + topology: + master: 2 + ingest: 2 + data: 2 \ No newline at end of file diff --git a/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-Topology.yaml b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-Topology.yaml new file mode 100644 index 000000000..fae53ea35 --- /dev/null +++ b/docs/examples/elasticsearch/scalling/horizontal/Elasticsearch-hscale-up-Topology.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-hscale-up-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es-hscale-topology + horizontalScaling: + topology: + master: 3 + ingest: 3 + data: 3 \ No newline at end of file diff --git a/docs/examples/elasticsearch/scalling/horizontal/topology.yaml b/docs/examples/elasticsearch/scalling/horizontal/topology.yaml new file mode 100644 index 000000000..a11ee8649 --- /dev/null +++ b/docs/examples/elasticsearch/scalling/horizontal/topology.yaml @@ -0,0 +1,37 @@ +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: 
es-cluster + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + storageType: Durable + topology: + master: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi \ No newline at end of file diff --git a/docs/guides/elasticsearch/scaling/horizontal/overview.md b/docs/guides/elasticsearch/scaling/horizontal/overview.md index 6864dc39f..b4d5bc64c 100644 --- a/docs/guides/elasticsearch/scaling/horizontal/overview.md +++ b/docs/guides/elasticsearch/scaling/horizontal/overview.md @@ -26,10 +26,13 @@ This guide will give an overview on how KubeDB Ops-manager operator scales up or The following diagram shows how KubeDB Ops-manager operator scales up or down `Elasticsearch` database components. Open the image in a new tab to see the enlarged version. -
-  Horizontal scaling process of Elasticsearch -
Fig: Horizontal scaling process of Elasticsearch
-
+[//]: # (
) + +[//]: # (  Horizontal scaling process of Elasticsearch) + +[//]: # (
Fig: Horizontal scaling process of Elasticsearch
) + +[//]: # (
) The Horizontal scaling process consists of the following steps: diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology.md b/docs/guides/elasticsearch/scaling/horizontal/topology.md new file mode 100644 index 000000000..10f79a48e --- /dev/null +++ b/docs/guides/elasticsearch/scaling/horizontal/topology.md @@ -0,0 +1,625 @@ +--- +title: Horizontal Scaling Topology Elasticsearch +menu: + docs_{{ .version }}: + identifier: es-horizontal-scaling-Topology + name: Topology Cluster + parent: es-horizontal-scaling + weight: 20 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Horizontal Scale Elasticsearch Topology Cluster + +This guide will show you how to use `KubeDB` Ops-manager operator to scale the Elasticsearch Topology cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [Topology](/docs/guides/elasticsearch/clustering/Topology-cluster/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + - [Horizontal Scaling Overview](/docs/guides/elasticsearch/scaling/horizontal/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/Elasticsearch](/docs/examples/elasticsearch) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Apply Horizontal Scaling on Topology Cluster + +Here, we are going to deploy a `Elasticsearch` Topology cluster using a supported version by `KubeDB` operator. Then we are going to apply horizontal scaling on it. + +### Prepare Elasticsearch Topology cluster + +Now, we are going to deploy a `Elasticsearch` Topology cluster with version `xpack-8.11.1`. + +### Deploy Elasticsearch Topology cluster + +In this section, we are going to deploy a Elasticsearch Topology cluster. Then, in the next section we will scale the cluster using `ElasticsearchOpsRequest` CRD. 
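+Before creating the cluster, it can help to confirm that the requested version is available in your KubeDB catalog. This is an optional sanity check; the exact columns of the output depend on your KubeDB release.
+
+```bash
+# List the Elasticsearch versions supported by the installed KubeDB catalog.
+$ kubectl get elasticsearchversions | grep xpack-8.11.1
+```
+
+If `xpack-8.11.1` appears in the list, the `Elasticsearch` CR below can reference it.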
Below is the YAML of the `Elasticsearch` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: es-hscale-topology + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + storageType: Durable + topology: + master: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `Elasticsearch` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/clustering/topology.yaml +Elasticsearch.kubedb.com/es-hscale-topology created +``` + +Now, wait until `es-hscale-topology` has status `Ready`. i.e, + +```bash +$ kubectl get es -n demo +NAME VERSION STATUS AGE +es-hscale-topology xpack-8.11.1 Ready 3m53s +``` + +Let's check the number of replicas has from Elasticsearch object, number of pods the petset have, + +```bash +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.master.replicas' +3 +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.ingest.replicas' +3 +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.data.replicas' +3 +``` + +We can see from both command that the cluster has 3 replicas. + +Also, we can verify the replicas of the Topology from an internal Elasticsearch command by exec into a replica. + +Now lets check the number of replicas, + +```bash +$ kubectl get all,secret,pvc -n demo -l 'app.kubernetes.io/instance=es-hscale-topology' +NAME READY STATUS RESTARTS AGE +pod/es-hscale-topology-data-0 1/1 Running 0 27m +pod/es-hscale-topology-data-1 1/1 Running 0 25m +pod/es-hscale-topology-data-2 1/1 Running 0 24m +pod/es-hscale-topology-ingest-0 1/1 Running 0 27m +pod/es-hscale-topology-ingest-1 1/1 Running 0 25m +pod/es-hscale-topology-ingest-2 1/1 Running 0 24m +pod/es-hscale-topology-master-0 1/1 Running 0 27m +pod/es-hscale-topology-master-1 1/1 Running 0 25m +pod/es-hscale-topology-master-2 1/1 Running 0 24m + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/es-hscale-topology ClusterIP 10.43.33.118 9200/TCP 27m +service/es-hscale-topology-master ClusterIP None 9300/TCP 27m +service/es-hscale-topology-pods ClusterIP None 9200/TCP 27m + +NAME TYPE VERSION AGE +appbinding.appcatalog.appscode.com/es-hscale-topology kubedb.com/elasticsearch 8.11.1 27m + +NAME TYPE DATA AGE +secret/es-hscale-topology-apm-system-cred kubernetes.io/basic-auth 2 27m +secret/es-hscale-topology-auth kubernetes.io/basic-auth 2 27m +secret/es-hscale-topology-beats-system-cred kubernetes.io/basic-auth 2 27m +secret/es-hscale-topology-ca-cert kubernetes.io/tls 2 27m +secret/es-hscale-topology-client-cert kubernetes.io/tls 3 27m +secret/es-hscale-topology-config Opaque 1 27m +secret/es-hscale-topology-http-cert kubernetes.io/tls 3 27m +secret/es-hscale-topology-kibana-system-cred kubernetes.io/basic-auth 2 27m +secret/es-hscale-topology-logstash-system-cred kubernetes.io/basic-auth 2 27m +secret/es-hscale-topology-remote-monitoring-user-cred kubernetes.io/basic-auth 2 27m +secret/es-hscale-topology-transport-cert kubernetes.io/tls 3 27m + +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS 
VOLUMEATTRIBUTESCLASS   AGE
+persistentvolumeclaim/data-es-hscale-topology-data-0     Bound   pvc-ce9ce1ec-a2db-43c8-9d40-d158f53f25fe   1Gi   RWO   standard   27m
+persistentvolumeclaim/data-es-hscale-topology-data-1     Bound   pvc-babfc22c-1e29-44e3-a094-8fa48876db68   1Gi   RWO   standard   25m
+persistentvolumeclaim/data-es-hscale-topology-data-2     Bound   pvc-c0e64663-1cc4-420c-85b9-4f643c76f006   1Gi   RWO   standard   24m
+persistentvolumeclaim/data-es-hscale-topology-ingest-0   Bound   pvc-3de6c8f6-17aa-43d8-8c10-8cbd2dc543aa   1Gi   RWO   standard   27m
+persistentvolumeclaim/data-es-hscale-topology-ingest-1   Bound   pvc-d990c570-c687-4192-ad2e-bad127b7b5db   1Gi   RWO   standard   25m
+persistentvolumeclaim/data-es-hscale-topology-ingest-2   Bound   pvc-4540c342-811a-4b82-970e-0e6d29e80e9b   1Gi   RWO   standard   24m
+persistentvolumeclaim/data-es-hscale-topology-master-0   Bound   pvc-902a0ebb-b6fb-4106-8220-f137972a84be   1Gi   RWO   standard   27m
+persistentvolumeclaim/data-es-hscale-topology-master-1   Bound   pvc-f97215e6-1a91-4e77-8bfb-78d907828e51   1Gi   RWO   standard   25m
+persistentvolumeclaim/data-es-hscale-topology-master-2   Bound   pvc-a9160094-c08e-4d40-b4ea-ec5681f8be30   1Gi   RWO   standard   24m
+
+```
+
+We can see from the above output that the Elasticsearch topology cluster has 9 pods in total: 3 master, 3 data, and 3 ingest nodes.
+
+We are now ready to apply the `ElasticsearchOpsRequest` CR to scale this cluster.
+
+
+### Scale Down Replicas
+
+Here, we are going to scale down the replicas of the Elasticsearch Topology cluster to meet the desired number of replicas after scaling.
+
+#### Create ElasticsearchOpsRequest
+
+In order to scale down the replicas of the Elasticsearch Topology cluster, we have to create an `ElasticsearchOpsRequest` CR with our desired replicas. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ElasticsearchOpsRequest
+metadata:
+  name: esops-hscale-down-topology
+  namespace: demo
+spec:
+  type: HorizontalScaling
+  databaseRef:
+    name: es-hscale-topology
+  horizontalScaling:
+    topology:
+      master: 2
+      ingest: 2
+      data: 2
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing a horizontal scale-down operation on the `es-hscale-topology` cluster.
+- `spec.type` specifies that we are performing `HorizontalScaling` on Elasticsearch.
+- `spec.horizontalScaling.topology` - specifies the desired number of replicas for each node type of the Elasticsearch running in cluster topology mode (i.e. `Elasticsearch.spec.topology` is `not empty`).
+  - `topology.master` - specifies the desired number of master node replicas.
+  - `topology.data` - specifies the desired number of data node replicas.
+  - `topology.ingest` - specifies the desired number of ingest node replicas.
+
+> Note: Keep at least 2 master-eligible replicas (3 or more is recommended for production) so the cluster can retain a voting quorum, and make sure the remaining data nodes have enough capacity to hold the shards that will be relocated from the removed nodes.
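+Before applying the scale-down request, it is a good idea to confirm that the cluster is healthy, because removing data nodes triggers shard relocation. The commands below are a minimal sketch, not part of the scaling procedure itself; they assume the admin credentials stored in the `es-hscale-topology-auth` secret and the `es-hscale-topology` service shown earlier, and use `curl -k` because the cluster was deployed with a self-signed certificate (`enableSSL: true`).
+
+```bash
+# Read the admin username and password created by KubeDB.
+USERNAME=$(kubectl get secret -n demo es-hscale-topology-auth -o jsonpath='{.data.username}' | base64 -d)
+PASSWORD=$(kubectl get secret -n demo es-hscale-topology-auth -o jsonpath='{.data.password}' | base64 -d)
+
+# Forward the HTTP port of the cluster to localhost (run this in a separate terminal).
+kubectl port-forward -n demo svc/es-hscale-topology 9200:9200
+
+# The cluster should report status green and 9 nodes before scaling down.
+curl -sk -u "$USERNAME:$PASSWORD" "https://localhost:9200/_cluster/health?pretty"
+```
+
+The same check can be repeated after the ops request completes to confirm that the node count has dropped from 9 to 6.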
+ + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/scaling/horizontal/Elasticsearch-hscale-down-Topology.yaml +Elasticsearchopsrequest.ops.kubedb.com/esops-hscale-down-topology created +``` + +#### Verify Topology cluster replicas scaled down successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of `Elasticsearch` object and related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +esops-hscale-down-Topology HorizontalScaling Successful 76s +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe Elasticsearchopsrequests -n demo esops-hscale-down-topology +Name: esops-hscale-down-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-17T12:01:29Z + Generation: 1 + Resource Version: 11617 + UID: 4b4f9728-b31e-4336-a95c-cf34d97d8b4a +Spec: + Apply: IfReady + Database Ref: + Name: es-hscale-topology + Horizontal Scaling: + Topology: + Data: 2 + Ingest: 2 + Master: 2 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2025-11-17T12:01:29Z + Message: Elasticsearch ops request is horizontally scaling the nodes. + Observed Generation: 1 + Reason: HorizontalScale + Status: True + Type: HorizontalScale + Last Transition Time: 2025-11-17T12:01:37Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-17T12:01:37Z + Message: patch pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPetSet + Last Transition Time: 2025-11-17T12:01:42Z + Message: get pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPod + Last Transition Time: 2025-11-17T12:01:42Z + Message: delete pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeletePvc + Last Transition Time: 2025-11-17T12:02:27Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2025-11-17T12:01:52Z + Message: ScaleDown es-hscale-topology-ingest nodes + Observed Generation: 1 + Reason: HorizontalScaleIngestNode + Status: True + Type: HorizontalScaleIngestNode + Last Transition Time: 2025-11-17T12:01:57Z + Message: exclude node allocation; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ExcludeNodeAllocation + Last Transition Time: 2025-11-17T12:01:57Z + Message: get used data nodes; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetUsedDataNodes + Last Transition Time: 2025-11-17T12:01:57Z + Message: move data; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: MoveData + Last Transition Time: 2025-11-17T12:02:12Z + Message: delete node allocation exclusion; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeleteNodeAllocationExclusion + Last Transition Time: 2025-11-17T12:02:12Z + Message: ScaleDown es-hscale-topology-data nodes + Observed Generation: 1 + Reason: 
HorizontalScaleDataNode + Status: True + Type: HorizontalScaleDataNode + Last Transition Time: 2025-11-17T12:02:18Z + Message: get voting config exclusion; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetVotingConfigExclusion + Last Transition Time: 2025-11-17T12:02:32Z + Message: delete voting config exclusion; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeleteVotingConfigExclusion + Last Transition Time: 2025-11-17T12:02:32Z + Message: ScaleDown es-hscale-topology-master nodes + Observed Generation: 1 + Reason: HorizontalScaleMasterNode + Status: True + Type: HorizontalScaleMasterNode + Last Transition Time: 2025-11-17T12:02:37Z + Message: successfully updated Elasticsearch CR + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2025-11-17T12:02:38Z + Message: Successfully Horizontally Scaled. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 101s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-hscale-topology + Warning create es client; ConditionStatus:True 93s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning patch pet set; ConditionStatus:True 93s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning get pod; ConditionStatus:True 88s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 88s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 88s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 83s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 83s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 83s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning create es client; ConditionStatus:True 78s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Normal HorizontalScaleIngestNode 78s KubeDB Ops-manager Operator ScaleDown es-hscale-topology-ingest nodes + Warning create es client; ConditionStatus:True 73s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning exclude node allocation; ConditionStatus:True 73s KubeDB Ops-manager Operator exclude node allocation; ConditionStatus:True + Warning get used data nodes; ConditionStatus:True 73s KubeDB Ops-manager Operator get used data nodes; ConditionStatus:True + Warning move data; ConditionStatus:True 73s KubeDB Ops-manager Operator move data; ConditionStatus:True + Warning patch pet set; ConditionStatus:True 73s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning get pod; ConditionStatus:True 68s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 68s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 68s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 63s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 63s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 63s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning create es 
client; ConditionStatus:True 58s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning delete node allocation exclusion; ConditionStatus:True 58s KubeDB Ops-manager Operator delete node allocation exclusion; ConditionStatus:True + Normal HorizontalScaleDataNode 58s KubeDB Ops-manager Operator ScaleDown es-hscale-topology-data nodes + Warning create es client; ConditionStatus:True 53s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning get voting config exclusion; ConditionStatus:True 52s KubeDB Ops-manager Operator get voting config exclusion; ConditionStatus:True + Warning patch pet set; ConditionStatus:True 52s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning get pod; ConditionStatus:True 48s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 48s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:False 48s KubeDB Ops-manager Operator get pvc; ConditionStatus:False + Warning get pod; ConditionStatus:True 43s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning delete pvc; ConditionStatus:True 43s KubeDB Ops-manager Operator delete pvc; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning create es client; ConditionStatus:True 38s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning delete voting config exclusion; ConditionStatus:True 38s KubeDB Ops-manager Operator delete voting config exclusion; ConditionStatus:True + Normal HorizontalScaleMasterNode 38s KubeDB Ops-manager Operator ScaleDown es-hscale-topology-master nodes + Normal UpdateDatabase 33s KubeDB Ops-manager Operator successfully updated Elasticsearch CR + Normal ResumeDatabase 33s KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-hscale-topology + Normal ResumeDatabase 33s KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-hscale-topology + Normal Successful 33s KubeDB Ops-manager Operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify the number of replicas this cluster has from the Elasticsearch object, number of pods the petset have, + +```bash +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.master.replicas' +2 +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.data.replicas' +2 +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.ingest.replicas' +2 +``` +**Only ingest nodes after scaling down:** +```bash +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-ingest-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es-hscale-topology + horizontalScaling: + topology: + ingest: 2 +``` +From all the above outputs we can see that the replicas of the Topology cluster is `2`. That means we have successfully scaled down the replicas of the Elasticsearch Topology cluster. + + + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the Topology cluster to meet the desired number of replicas after scaling. + +#### Create ElasticsearchOpsRequest + +In order to scale up the replicas of the Topology cluster, we have to create a `ElasticsearchOpsRequest` CR with our desired replicas. 
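+Scaling up is simpler than scaling down: as the status conditions below show, the operator only patches the PetSets and waits for the new pods to join the cluster, so no shard relocation or voting-config exclusion is needed. If you want to follow the progress while the request runs, the optional sketch below uses the instance label shown earlier in this guide.
+
+```bash
+# Watch the ops request phase in one terminal.
+watch kubectl get elasticsearchopsrequest -n demo
+
+# Watch the pods of each node role being created in another terminal.
+watch kubectl get pods -n demo -l app.kubernetes.io/instance=es-hscale-topology
+```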
+ +From all the above outputs we can see that the replicas of each node type of the Topology cluster are now `2`. That means we have successfully scaled down the replicas of the Elasticsearch Topology cluster. + +If you want to scale down only one node type, list just that type under `spec.horizontalScaling.topology`. For example, to scale down only the ingest nodes: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-ingest-hscale-down-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es-hscale-topology + horizontalScaling: + topology: + ingest: 2 +``` + +## Scale Up Replicas + +Here, we are going to scale up the replicas of the Topology cluster to meet the desired number of replicas after scaling. + +#### Create ElasticsearchOpsRequest + +In order to scale up the replicas of the Topology cluster, we have to create an `ElasticsearchOpsRequest` CR with our desired replicas. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-hscale-up-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es-hscale-topology + horizontalScaling: + topology: + master: 3 + ingest: 3 + data: 3 +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the horizontal scaling operation on the `es-hscale-topology` cluster. +- `spec.type` specifies that we are performing `HorizontalScaling` on Elasticsearch. +- `spec.horizontalScaling.topology` specifies the desired number of replicas for each node type of the Elasticsearch running in cluster topology mode (i.e. `Elasticsearch.spec.topology` is not empty). + - `topology.master` specifies the desired number of master node replicas. + - `topology.data` specifies the desired number of data node replicas. + - `topology.ingest` specifies the desired number of ingest node replicas. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/scaling/horizontal/Elasticsearch-hscale-up-Topology.yaml +Elasticsearchopsrequest.ops.kubedb.com/esops-hscale-up-topology created +``` + +#### Verify Topology cluster replicas scaled up successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the replicas of the `Elasticsearch` object and the related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +esops-hscale-up-topology HorizontalScaling Successful 13m +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. + +```bash +$ kubectl describe Elasticsearchopsrequests -n demo esops-hscale-up-topology +Name: esops-hscale-up-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-17T12:12:44Z + Generation: 1 + Resource Version: 12241 + UID: 5342e779-62bc-4fe1-b91c-21b30c30cd39 +Spec: + Apply: IfReady + Database Ref: + Name: es-hscale-topology + Horizontal Scaling: + Topology: + Data: 3 + Ingest: 3 + Master: 3 + Type: HorizontalScaling +Status: + Conditions: + Last Transition Time: 2025-11-17T12:12:44Z + Message: Elasticsearch ops request is horizontally scaling the nodes.
+ Observed Generation: 1 + Reason: HorizontalScale + Status: True + Type: HorizontalScale + Last Transition Time: 2025-11-17T12:12:52Z + Message: patch pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPetSet + Last Transition Time: 2025-11-17T12:13:58Z + Message: is node in cluster; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsNodeInCluster + Last Transition Time: 2025-11-17T12:13:12Z + Message: ScaleUp es-hscale-topology-ingest nodes + Observed Generation: 1 + Reason: HorizontalScaleIngestNode + Status: True + Type: HorizontalScaleIngestNode + Last Transition Time: 2025-11-17T12:13:37Z + Message: ScaleUp es-hscale-topology-data nodes + Observed Generation: 1 + Reason: HorizontalScaleDataNode + Status: True + Type: HorizontalScaleDataNode + Last Transition Time: 2025-11-17T12:14:02Z + Message: ScaleUp es-hscale-topology-master nodes + Observed Generation: 1 + Reason: HorizontalScaleMasterNode + Status: True + Type: HorizontalScaleMasterNode + Last Transition Time: 2025-11-17T12:14:07Z + Message: successfully updated Elasticsearch CR + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2025-11-17T12:14:08Z + Message: Successfully Horizontally Scaled. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 6m15s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-hscale-topology + Warning patch pet set; ConditionStatus:True 6m7s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning is node in cluster; ConditionStatus:False 6m2s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:False + Warning is node in cluster; ConditionStatus:True 5m52s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:True + Normal HorizontalScaleIngestNode 5m47s KubeDB Ops-manager Operator ScaleUp es-hscale-topology-ingest nodes + Warning patch pet set; ConditionStatus:True 5m42s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning is node in cluster; ConditionStatus:False 5m37s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:False + Warning is node in cluster; ConditionStatus:True 5m27s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:True + Normal HorizontalScaleDataNode 5m22s KubeDB Ops-manager Operator ScaleUp es-hscale-topology-data nodes + Warning patch pet set; ConditionStatus:True 5m17s KubeDB Ops-manager Operator patch pet set; ConditionStatus:True + Warning is node in cluster; ConditionStatus:False 5m12s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:False + Warning is node in cluster; ConditionStatus:True 5m1s KubeDB Ops-manager Operator is node in cluster; ConditionStatus:True + Normal HorizontalScaleMasterNode 4m57s KubeDB Ops-manager Operator ScaleUp es-hscale-topology-master nodes + Normal UpdateDatabase 4m52s KubeDB Ops-manager Operator successfully updated Elasticsearch CR + Normal ResumeDatabase 4m52s KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-hscale-topology + Normal ResumeDatabase 4m52s KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-hscale-topology + Normal Successful 4m51s KubeDB Ops-manager Operator Successfully Horizontally Scaled Database +``` + +Now, we are going to verify the number of replicas this cluster has from the Elasticsearch object, number of pods 
the PetSets have, + +```bash +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.master.replicas' +3 +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.data.replicas' +3 +$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.ingest.replicas' +3 +``` + +From all the above outputs we can see that the replicas of each node type of the Topology cluster are now `3`. That means we have successfully scaled up the replicas of the Elasticsearch Topology cluster. + +Similarly, to scale up only the ingest nodes, list just that node type under `spec.horizontalScaling.topology`: + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-ingest-hscale-up-topology + namespace: demo +spec: + type: HorizontalScaling + databaseRef: + name: es-hscale-topology + horizontalScaling: + topology: + ingest: 3 +``` + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete es -n demo es-hscale-topology +kubectl delete Elasticsearchopsrequest -n demo esops-hscale-down-topology esops-hscale-up-topology esops-ingest-hscale-up-topology esops-ingest-hscale-down-topology +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch/index.md). +- Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/_index.md). +- Monitor your Elasticsearch with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/elasticsearch/monitoring/using-prometheus-operator.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology/_index.md b/docs/guides/elasticsearch/scaling/horizontal/topology/_index.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology/hotwarm.md b/docs/guides/elasticsearch/scaling/horizontal/topology/hotwarm.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology/simple.md b/docs/guides/elasticsearch/scaling/horizontal/topology/simple.md deleted file mode 100644 index e69de29bb..000000000 diff --git a/docs/guides/elasticsearch/update-version/overview.md b/docs/guides/elasticsearch/update-version/overview.md index ef9832c9d..2ccccf84c 100644 --- a/docs/guides/elasticsearch/update-version/overview.md +++ b/docs/guides/elasticsearch/update-version/overview.md @@ -19,8 +19,8 @@ This guide will give you an overview of how KubeDB ops manager updates the versi ## Before You Begin - You should be familiar with the following `KubeDB` concepts: - - [Elasticsearch](/docs/guides/Elasticsearch/concepts/Elasticsearch.md) - - [ElasticsearchOpsRequest](/docs/guides/Elasticsearch/concepts/opsrequest.md) + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) ## How update Process Works @@ -49,8 +49,8 @@ The updating process consists of the following steps: 6. When it finds one, it Pauses the `Elasticsearch` object so that the `KubeDB-Provisioner` operator doesn't perform any operation on the `Elasticsearch` during the updating process. 7.
By looking at the target version from `ElasticsearchOpsRequest` cr, In case of major update `KubeDB-ops-manager` does some pre-update steps as we need old bin and lib files to update from current to target Elasticsearch version. -8. Then By looking at the target version from `ElasticsearchOpsRequest` cr, `KubeDB-ops-manager` operator updates the images of the `PetSet` for updating versions. +8. Then By looking at the target version from `ElasticsearchOpsRequest` cr, `KubeDB-ops-manager` operator updates the images of the `PetSet` for updating versions. 9. After successful upgradation of the `PetSet` and its `Pod` images, the `KubeDB-ops-manager` updates the image of the `Elasticsearch` object to reflect the updated cluster state. From f134bf8d80139ddaa5fa34af3259c50f0f886384 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Wed, 19 Nov 2025 18:45:56 +0600 Subject: [PATCH 07/13] vertical scaling Signed-off-by: Bonusree --- ...asticsearch-vertical-scaling-combined.yaml | 28 + ...asticsearch-vertical-scaling-topology.yaml | 18 + .../guides/elasticsearch/autoscaler/_index.md | 2 +- docs/guides/elasticsearch/backup/_index.md | 2 +- docs/guides/elasticsearch/cli/cli.md | 2 +- .../guides/elasticsearch/clustering/_index.md | 2 +- docs/guides/elasticsearch/concepts/_index.md | 2 +- .../elasticsearch/configuration/_index.md | 2 +- .../elasticsearch/custom-rbac/_index.md | 2 +- .../elasticsearch-dashboard/_index.md | 2 +- .../guides/elasticsearch/monitoring/_index.md | 2 +- .../elasticsearch/plugins-backup/_index.md | 2 +- docs/guides/elasticsearch/plugins/_index.md | 2 +- .../elasticsearch/private-registry/_index.md | 2 +- docs/guides/elasticsearch/restart/index.md | 2 +- .../guides/elasticsearch/rotateauth/_index.md | 2 +- docs/guides/elasticsearch/scaling/_index.md | 10 +- .../horizontal/{index.md => _index.md} | 0 .../scaling/vertical/{index.md => _index.md} | 0 .../scaling/vertical/combined.md | 313 ++++++++ .../scaling/vertical/overview.md | 54 ++ .../scaling/vertical/topology.md | 694 ++++++++++++++++++ .../elasticsearch/update-version/_index.md | 2 +- 23 files changed, 1127 insertions(+), 20 deletions(-) create mode 100644 docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-combined.yaml create mode 100644 docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-topology.yaml rename docs/guides/elasticsearch/scaling/horizontal/{index.md => _index.md} (100%) rename docs/guides/elasticsearch/scaling/vertical/{index.md => _index.md} (100%) create mode 100644 docs/guides/elasticsearch/scaling/vertical/combined.md create mode 100644 docs/guides/elasticsearch/scaling/vertical/overview.md create mode 100644 docs/guides/elasticsearch/scaling/vertical/topology.md diff --git a/docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-combined.yaml b/docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-combined.yaml new file mode 100644 index 000000000..24cf3201b --- /dev/null +++ b/docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-combined.yaml @@ -0,0 +1,28 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: vscale-topology + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: es-cluster + verticalScaling: + master: + resources: + limits: + cpu: 750m + memory: 800Mi + data: + resources: + requests: + cpu: 760m + memory: 900Mi + ingest: + resources: + limits: + cpu: 900m + memory: 1.2Gi + requests: + cpu: 800m + memory: 1Gi \ No newline 
at end of file diff --git a/docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-topology.yaml b/docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-topology.yaml new file mode 100644 index 000000000..7b0132522 --- /dev/null +++ b/docs/examples/elasticsearch/scalling/vertical/Elasticsearch-vertical-scaling-topology.yaml @@ -0,0 +1,18 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: vscale-combined + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: es-combined + verticalScaling: + node: + resources: + limits: + cpu: 1500m + memory: 2Gi + requests: + cpu: 600m + memory: 2Gi diff --git a/docs/guides/elasticsearch/autoscaler/_index.md b/docs/guides/elasticsearch/autoscaler/_index.md index 0e47d380f..72148b618 100644 --- a/docs/guides/elasticsearch/autoscaler/_index.md +++ b/docs/guides/elasticsearch/autoscaler/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-auto-scaling name: Autoscaling parent: es-elasticsearch-guides - weight: 44 + weight: 145 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/backup/_index.md b/docs/guides/elasticsearch/backup/_index.md index 5e46daaaf..c4815c370 100644 --- a/docs/guides/elasticsearch/backup/_index.md +++ b/docs/guides/elasticsearch/backup/_index.md @@ -5,6 +5,6 @@ menu: identifier: guides-es-backup name: Backup & Restore parent: es-elasticsearch-guides - weight: 40 + weight: 85 menu_name: docs_{{ .version }} --- \ No newline at end of file diff --git a/docs/guides/elasticsearch/cli/cli.md b/docs/guides/elasticsearch/cli/cli.md index 540a6b911..6bc76310c 100644 --- a/docs/guides/elasticsearch/cli/cli.md +++ b/docs/guides/elasticsearch/cli/cli.md @@ -5,7 +5,7 @@ menu: identifier: es-cli-cli name: Quickstart parent: es-cli-elasticsearch - weight: 100 + weight: 155 menu_name: docs_{{ .version }} section_menu_id: guides --- diff --git a/docs/guides/elasticsearch/clustering/_index.md b/docs/guides/elasticsearch/clustering/_index.md index 8e6d3df87..b5326e9ba 100755 --- a/docs/guides/elasticsearch/clustering/_index.md +++ b/docs/guides/elasticsearch/clustering/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-clustering-elasticsearch name: Clustering parent: es-elasticsearch-guides - weight: 25 + weight: 35 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/concepts/_index.md b/docs/guides/elasticsearch/concepts/_index.md index ee9c9f11d..c3766b5a3 100755 --- a/docs/guides/elasticsearch/concepts/_index.md +++ b/docs/guides/elasticsearch/concepts/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-concepts-elasticsearch name: Concepts parent: es-elasticsearch-guides - weight: 20 + weight: 25 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/configuration/_index.md b/docs/guides/elasticsearch/configuration/_index.md index b66e9c446..cc4320e12 100755 --- a/docs/guides/elasticsearch/configuration/_index.md +++ b/docs/guides/elasticsearch/configuration/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-configuration name: Custom Configuration parent: es-elasticsearch-guides - weight: 30 + weight: 45 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/custom-rbac/_index.md b/docs/guides/elasticsearch/custom-rbac/_index.md index 0a63f6c38..6a27ce631 100755 --- a/docs/guides/elasticsearch/custom-rbac/_index.md +++ b/docs/guides/elasticsearch/custom-rbac/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-custom-rbac name: Custom RBAC parent: es-elasticsearch-guides - weight: 31 + weight: 
55 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/elasticsearch-dashboard/_index.md b/docs/guides/elasticsearch/elasticsearch-dashboard/_index.md index defd392ff..4941abfbc 100644 --- a/docs/guides/elasticsearch/elasticsearch-dashboard/_index.md +++ b/docs/guides/elasticsearch/elasticsearch-dashboard/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-dashboard name: Elasticsearch Dashboard parent: es-elasticsearch-guides - weight: 32 + weight: 65 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/monitoring/_index.md b/docs/guides/elasticsearch/monitoring/_index.md index c4206a801..5f11444f5 100755 --- a/docs/guides/elasticsearch/monitoring/_index.md +++ b/docs/guides/elasticsearch/monitoring/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-monitoring-elasticsearch name: Monitoring parent: es-elasticsearch-guides - weight: 50 + weight: 135 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/plugins-backup/_index.md b/docs/guides/elasticsearch/plugins-backup/_index.md index 2b97dfb7c..d7ed6a2b1 100644 --- a/docs/guides/elasticsearch/plugins-backup/_index.md +++ b/docs/guides/elasticsearch/plugins-backup/_index.md @@ -5,6 +5,6 @@ menu: identifier: guides-es-plugins-backup name: Snapshot & Restore (Repository Plugins) parent: es-elasticsearch-guides - weight: 41 + weight: 71 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/plugins/_index.md b/docs/guides/elasticsearch/plugins/_index.md index efa9f0d86..0dda46150 100755 --- a/docs/guides/elasticsearch/plugins/_index.md +++ b/docs/guides/elasticsearch/plugins/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-plugin-elasticsearch name: Extensions & Plugins parent: es-elasticsearch-guides - weight: 60 + weight: 70 menu_name: docs_{{ .version }} --- \ No newline at end of file diff --git a/docs/guides/elasticsearch/private-registry/_index.md b/docs/guides/elasticsearch/private-registry/_index.md index d072bcd97..b6431d51c 100755 --- a/docs/guides/elasticsearch/private-registry/_index.md +++ b/docs/guides/elasticsearch/private-registry/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-private-registry-elasticsearch name: Private Registry parent: es-elasticsearch-guides - weight: 35 + weight: 75 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md index 2d9b219c4..2026cc2e0 100644 --- a/docs/guides/elasticsearch/restart/index.md +++ b/docs/guides/elasticsearch/restart/index.md @@ -5,7 +5,7 @@ menu: identifier: es-restart-elasticsearch name: Restart parent: es-elasticsearch-guides - weight: 15 + weight: 115 menu_name: docs_{{ .version }} section_menu_id: guides --- diff --git a/docs/guides/elasticsearch/rotateauth/_index.md b/docs/guides/elasticsearch/rotateauth/_index.md index 608378e4d..4b985eccb 100644 --- a/docs/guides/elasticsearch/rotateauth/_index.md +++ b/docs/guides/elasticsearch/rotateauth/_index.md @@ -5,7 +5,7 @@ menu: identifier: es-rotateauth-elasticsearch name: Rotate Authentication parent: es-elasticsearch-guides - weight: 45 + weight: 125 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/scaling/_index.md b/docs/guides/elasticsearch/scaling/_index.md index 120519674..f213eeec4 100644 --- a/docs/guides/elasticsearch/scaling/_index.md +++ b/docs/guides/elasticsearch/scaling/_index.md @@ -1,10 +1,10 @@ --- title: Elasticsearch Scaling menu: - docs_{{ .version }}: - identifier: es-scaling-elasticsearch - name: Scaling - parent: 
es-elasticsearch-guides - weight: 15 +docs_{{ .version }}: +identifier: es-scaling-elasticsearch +name: Scaling +parent: es-elasticsearch-guides +weight: 105 menu_name: docs_{{ .version }} --- diff --git a/docs/guides/elasticsearch/scaling/horizontal/index.md b/docs/guides/elasticsearch/scaling/horizontal/_index.md similarity index 100% rename from docs/guides/elasticsearch/scaling/horizontal/index.md rename to docs/guides/elasticsearch/scaling/horizontal/_index.md diff --git a/docs/guides/elasticsearch/scaling/vertical/index.md b/docs/guides/elasticsearch/scaling/vertical/_index.md similarity index 100% rename from docs/guides/elasticsearch/scaling/vertical/index.md rename to docs/guides/elasticsearch/scaling/vertical/_index.md diff --git a/docs/guides/elasticsearch/scaling/vertical/combined.md b/docs/guides/elasticsearch/scaling/vertical/combined.md new file mode 100644 index 000000000..e696f1e12 --- /dev/null +++ b/docs/guides/elasticsearch/scaling/vertical/combined.md @@ -0,0 +1,313 @@ +--- +title: Vertical Scaling Elasticsearch Combined Cluster +menu: + docs_{{ .version }}: + identifier: kf-vertical-scaling-combined + name: Combined Cluster + parent: kf-vertical-scaling + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Vertical Scale Elasticsearch Combined Cluster + +This guide will show you how to use `KubeDB` Ops-manager operator to update the resources of a Elasticsearch combined cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [Combined](/docs/guides/elasticsearch/clustering/combined-cluster/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + - [Vertical Scaling Overview](/docs/guides/elasticsearch/scaling/vertical/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/elasticsearch](/docs/examples/elasticsearch) directory of [kubedb/docs](https://github.com/kubedb/docs) repository. + +## Apply Vertical Scaling on Combined Cluster + +Here, we are going to deploy a `Elasticsearch` combined cluster using a supported version by `KubeDB` operator. Then we are going to apply vertical scaling on it. + +### Prepare Elasticsearch Combined Cluster + +Now, we are going to deploy a `Elasticsearch` combined cluster database with version `xpack-8.11.1`. + +### Deploy Elasticsearch Combined Cluster + +In this section, we are going to deploy a Elasticsearch combined cluster. Then, in the next section we will update the resources of the database using `ElasticsearchOpsRequest` CRD. 
Below is the YAML of the `Elasticsearch` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: es-combined + namespace: demo +spec: + version: xpack-8.11.1 + enableSSL: true + replicas: 1 + storageType: Durable + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + deletionPolicy: WipeOut + +``` + +Let's create the `Elasticsearch` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/clustering/multi-node-es.yaml +Elasticsearch.kubedb.com/es-combined created +``` + +Now, wait until `es-combined` has status `Ready`. i.e, + +```bash +$ kubectl get elasticsearch -n demo -w +NAME VERSION STATUS AGE +es-combined xpack-8.11.1 Ready 3h17m + +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1536Mi" + }, + "requests": { + "cpu": "500m", + "memory": "1536Mi" + } +} + +``` +This is the default resources of the Elasticsearch combined cluster set by the `KubeDB` operator. + +We are now ready to apply the `ElasticsearchOpsRequest` CR to update the resources of this database. + +### Vertical Scaling + +Here, we are going to update the resources of the combined cluster to meet the desired resources after scaling. + +#### Create ElasticsearchOpsRequest + +In order to update the resources of the database, we have to create a `ElasticsearchOpsRequest` CR with our desired resources. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: vscale-combined + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: es-combined + verticalScaling: + node: + resources: + limits: + cpu: 1500m + memory: 2Gi + requests: + cpu: 600m + memory: 2Gi + +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing vertical scaling operation on `es-combined` cluster. +- `spec.type` specifies that we are performing `VerticalScaling` on Elasticsearch. +- `spec.VerticalScaling.node` specifies the desired resources after scaling. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/clustering/topology-es.yaml +``` + +#### Verify Elasticsearch Combined cluster resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of `Elasticsearch` object and related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ kubectl get elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +vscale-combined VerticalScaling Successful 2m38s + +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster. 
+ +```bash +$ kubectl describe Elasticsearchopsrequest -n demo vscale-combined +Name: vscale-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-19T08:55:15Z + Generation: 1 + Resource Version: 66012 + UID: bb814c10-12af-438e-9553-5565120bbdb9 +Spec: + Apply: IfReady + Database Ref: + Name: es-combined + Type: VerticalScaling + Vertical Scaling: + Node: + Resources: + Limits: + Cpu: 1500m + Memory: 2Gi + Requests: + Cpu: 600m + Memory: 2Gi +Status: + Conditions: + Last Transition Time: 2025-11-19T08:55:15Z + Message: Elasticsearch ops request is vertically scaling the nodes + Observed Generation: 1 + Reason: VerticalScale + Status: True + Type: VerticalScale + Last Transition Time: 2025-11-19T08:55:27Z + Message: successfully reconciled the Elasticsearch resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2025-11-19T08:55:32Z + Message: pod exists; ConditionStatus:True; PodName:es-combined-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-combined-0 + Last Transition Time: 2025-11-19T08:55:32Z + Message: create es client; ConditionStatus:True; PodName:es-combined-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-combined-0 + Last Transition Time: 2025-11-19T08:55:32Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-combined-0 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-combined-0 + Last Transition Time: 2025-11-19T08:55:32Z + Message: evict pod; ConditionStatus:True; PodName:es-combined-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-combined-0 + Last Transition Time: 2025-11-19T08:55:57Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-19T08:55:57Z + Message: re enable shard allocation; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReEnableShardAllocation + Last Transition Time: 2025-11-19T08:56:02Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2025-11-19T08:56:07Z + Message: successfully updated Elasticsearch CR + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2025-11-19T08:56:07Z + Message: Successfully completed the modification process. 
+ Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 2m6s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-combined + Normal UpdatePetSets 114s KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + Warning pod exists; ConditionStatus:True; PodName:es-combined-0 109s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-combined-0 + Warning create es client; ConditionStatus:True; PodName:es-combined-0 109s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-combined-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-combined-0 109s KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-combined-0 + Warning evict pod; ConditionStatus:True; PodName:es-combined-0 109s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-combined-0 + Warning create es client; ConditionStatus:False 104s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 84s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 84s KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Normal RestartNodes 79s KubeDB Ops-manager Operator Successfully restarted all nodes + Normal UpdateDatabase 74s KubeDB Ops-manager Operator successfully updated Elasticsearch CR + Normal ResumeDatabase 74s KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-combined + Normal ResumeDatabase 74s KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-combined + Normal Successful 74s KubeDB Ops-manager Operator Successfully Updated Database + +``` + +Now, we are going to verify from the Pod YAML whether the resources of the combined cluster have been updated to the desired state. Let's check, + +```bash +$ kubectl get pod -n demo es-combined-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "1500m", + "memory": "2Gi" + }, + "requests": { + "cpu": "600m", + "memory": "2Gi" + } +} + +``` + +The above output verifies that we have successfully scaled up the resources of the Elasticsearch combined cluster.
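+ +You can also confirm that the new resource values were propagated to the PetSet that backs the cluster, not just to the running Pod. A minimal sketch, assuming the PetSet created for a combined cluster carries the database name `es-combined` and that its main container is named `elasticsearch`: + +```bash +$ kubectl get petset -n demo es-combined -o json | jq '.spec.template.spec.containers[] | select(.name == "elasticsearch") | .resources' +``` + +The output should show the same limits and requests as the Pod spec above.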
+ +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete es -n demo es-combined +kubectl delete Elasticsearchopsrequest -n demo vscale-combined +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch/index.md). +- Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/_index.md). +- Monitor your Elasticsearch database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/elasticsearch/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Elasticsearch database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/elasticsearch/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/elasticsearch/scaling/vertical/overview.md b/docs/guides/elasticsearch/scaling/vertical/overview.md new file mode 100644 index 000000000..7e3712500 --- /dev/null +++ b/docs/guides/elasticsearch/scaling/vertical/overview.md @@ -0,0 +1,54 @@ +--- +title: Elasticsearch Vertical Scaling Overview +menu: + docs_{{ .version }}: + identifier: kf-vertical-scaling-overview + name: Overview + parent: kf-vertical-scaling + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Elasticsearch Vertical Scaling + +This guide will give an overview of how the KubeDB Ops-manager operator updates the resources (for example, CPU and memory) of an `Elasticsearch` database. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + +## How Vertical Scaling Process Works + +The following diagram shows how the KubeDB Ops-manager operator updates the resources of the `Elasticsearch` database. Open the image in a new tab to see the enlarged version. +
+{{/* Fig: Vertical scaling process of Elasticsearch */}}
+ +The vertical scaling process consists of the following steps: + +1. At first, a user creates an `Elasticsearch` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `Elasticsearch` CR. + +3. When the operator finds an `Elasticsearch` CR, it creates the required number of `PetSets` and related resources like secrets, services, etc. + +4. Then, in order to update the resources (for example `CPU`, `Memory` etc.) of the `Elasticsearch` cluster, the user creates an `ElasticsearchOpsRequest` CR with the desired information. + +5. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CR. + +6. When it finds an `ElasticsearchOpsRequest` CR, it halts the `Elasticsearch` object referred to by the `ElasticsearchOpsRequest`, so that the `KubeDB` Provisioner operator doesn't perform any operations on the `Elasticsearch` object during the vertical scaling process. + +7. Then the `KubeDB` Ops-manager operator updates the resources of the PetSet Pods to reach the desired state. + +8. After the resources of the PetSet's replicas have been updated successfully, the `KubeDB` Ops-manager operator updates the `Elasticsearch` object to reflect the updated state. + +9. After the successful update of the `Elasticsearch` resources, the `KubeDB` Ops-manager operator resumes the `Elasticsearch` object so that the `KubeDB` Provisioner operator resumes its usual operations. + +In the next docs, we are going to show a step-by-step guide on updating the resources of an Elasticsearch database using the `ElasticsearchOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/elasticsearch/scaling/vertical/topology.md b/docs/guides/elasticsearch/scaling/vertical/topology.md new file mode 100644 index 000000000..49800b2d5 --- /dev/null +++ b/docs/guides/elasticsearch/scaling/vertical/topology.md @@ -0,0 +1,694 @@ +--- +title: Vertical Scaling Elasticsearch Topology Cluster +menu: + docs_{{ .version }}: + identifier: es-vertical-scaling-topology + name: Topology Cluster + parent: es-vertical-scaling + weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Vertical Scale Elasticsearch Topology Cluster + +This guide will show you how to use the `KubeDB` Ops-manager operator to update the resources of an Elasticsearch topology cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) + - [Topology](/docs/guides/elasticsearch/clustering/topology-cluster/index.md) + - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + - [Vertical Scaling Overview](/docs/guides/elasticsearch/scaling/vertical/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> **Note:** YAML files used in this tutorial are stored in [docs/examples/elasticsearch](/docs/examples/elasticsearch) directory of [kubedb/docs](https://github.com/kubedb/docs) repository.
+ +## Apply Vertical Scaling on Topology Cluster + +Here, we are going to deploy an `Elasticsearch` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply vertical scaling on it. + +### Prepare Elasticsearch Topology Cluster + +Now, we are going to deploy an `Elasticsearch` topology cluster with version `xpack-8.11.1`. + +### Deploy Elasticsearch Topology Cluster + +In this section, we are going to deploy an Elasticsearch topology cluster. Then, in the next section we will update the resources of the database using `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Elasticsearch +metadata: + name: es-cluster + namespace: demo +spec: + enableSSL: true + version: xpack-8.11.1 + storageType: Durable + topology: + master: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + data: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + ingest: + replicas: 3 + storage: + storageClassName: "standard" + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi +``` + +Let's create the `Elasticsearch` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/scaling/Elasticsearch-topology.yaml +Elasticsearch.kubedb.com/es-cluster created +``` + +Now, wait until `es-cluster` has status `Ready`, i.e., + +```bash +$ kubectl get es -n demo -w +NAME VERSION STATUS AGE +es-cluster xpack-8.11.1 Ready 53m + +``` + +Let's check the Pod container resources for the `data`, `ingest` and `master` nodes of the Elasticsearch topology cluster. Run the following commands to get the resources of the `data`, `ingest` and `master` node containers, + +```bash +$ kubectl get pod -n demo es-cluster-data-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1536Mi" + }, + "requests": { + "cpu": "500m", + "memory": "1536Mi" + } +} +$ kubectl get pod -n demo es-cluster-ingest-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1536Mi" + }, + "requests": { + "cpu": "500m", + "memory": "1536Mi" + } +} + +$ kubectl get pod -n demo es-cluster-master-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "1536Mi" + }, + "requests": { + "cpu": "500m", + "memory": "1536Mi" + } +} + +``` +These are the default resources of the Elasticsearch topology cluster set by the `KubeDB` operator. + +We are now ready to apply the `ElasticsearchOpsRequest` CR to update the resources of this database. + +### Vertical Scaling + +Here, we are going to update the resources of the topology cluster to meet the desired resources after scaling. + +#### Create ElasticsearchOpsRequest + +In order to update the resources of the database, we have to create an `ElasticsearchOpsRequest` CR with our desired resources.
Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: vscale-topology + namespace: demo +spec: + type: VerticalScaling + databaseRef: + name: es-cluster + verticalScaling: + master: + resources: + limits: + cpu: 750m + memory: 800Mi + data: + resources: + requests: + cpu: 760m + memory: 900Mi + ingest: + resources: + limits: + cpu: 900m + memory: 1.2Gi + requests: + cpu: 800m + memory: 1Gi +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing the vertical scaling operation on the `es-cluster` cluster. +- `spec.type` specifies that we are performing `VerticalScaling` on Elasticsearch. +- `spec.verticalScaling.master`, `spec.verticalScaling.data` and `spec.verticalScaling.ingest` specify the desired resources for the master, data and ingest nodes after scaling. Limits and requests can be set independently for each node type, as shown above. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/scaling/vertical/Elasticsearch-vertical-scaling-topology.yaml +Elasticsearchopsrequest.ops.kubedb.com/vscale-topology created +``` + +#### Verify Elasticsearch Topology cluster resources updated successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the resources of the `Elasticsearch` object and the related `PetSets` and `Pods`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ kubectl get elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +vscale-topology VerticalScaling Successful 18m + +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to scale the cluster.
+ +```bash +$ kubectl describe Elasticsearchopsrequest -n demo vscale-topology +Name: vscale-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-19T11:55:28Z + Generation: 1 + Resource Version: 71748 + UID: be8b4117-90d3-4122-8705-993ce8621635 +Spec: + Apply: IfReady + Database Ref: + Name: es-cluster + Type: VerticalScaling + Vertical Scaling: + Data: + Resources: + Requests: + Cpu: 760m + Memory: 900Mi + Ingest: + Resources: + Limits: + Cpu: 900m + Memory: 1.2Gi + Requests: + Cpu: 800m + Memory: 1Gi + Master: + Resources: + Limits: + Cpu: 750m + Memory: 800Mi +Status: + Conditions: + Last Transition Time: 2025-11-19T11:55:29Z + Message: Elasticsearch ops request is vertically scaling the nodes + Observed Generation: 1 + Reason: VerticalScale + Status: True + Type: VerticalScale + Last Transition Time: 2025-11-19T11:55:50Z + Message: successfully reconciled the Elasticsearch resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2025-11-19T11:55:55Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-ingest-0 + Last Transition Time: 2025-11-19T11:55:55Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-ingest-0 + Last Transition Time: 2025-11-19T11:55:55Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-ingest-0 + Last Transition Time: 2025-11-19T11:56:50Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-ingest-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-ingest-0 + Last Transition Time: 2025-11-19T12:03:25Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-19T11:56:35Z + Message: re enable shard allocation; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReEnableShardAllocation + Last Transition Time: 2025-11-19T11:56:40Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-ingest-1 + Last Transition Time: 2025-11-19T11:56:40Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-ingest-1 + Last Transition Time: 2025-11-19T11:56:40Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-ingest-1 + Last Transition Time: 2025-11-19T11:57:35Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-ingest-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-ingest-1 + Last Transition Time: 2025-11-19T11:57:25Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-ingest-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-ingest-2 + Last Transition Time: 2025-11-19T11:57:25Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-ingest-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-ingest-2 + Last Transition Time: 2025-11-19T11:57:25Z + Message: disable shard 
allocation; ConditionStatus:True; PodName:es-cluster-ingest-2 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-ingest-2 + Last Transition Time: 2025-11-19T11:57:25Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-ingest-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-ingest-2 + Last Transition Time: 2025-11-19T11:58:10Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-data-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-data-0 + Last Transition Time: 2025-11-19T11:58:10Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-data-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-data-0 + Last Transition Time: 2025-11-19T11:58:10Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-data-0 + Last Transition Time: 2025-11-19T11:59:10Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-data-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-data-0 + Last Transition Time: 2025-11-19T11:58:35Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-data-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-data-1 + Last Transition Time: 2025-11-19T11:58:35Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-data-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-data-1 + Last Transition Time: 2025-11-19T11:58:35Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-1 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-data-1 + Last Transition Time: 2025-11-19T11:58:35Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-data-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-data-1 + Last Transition Time: 2025-11-19T11:59:00Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-data-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-data-2 + Last Transition Time: 2025-11-19T11:59:00Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-data-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-data-2 + Last Transition Time: 2025-11-19T11:59:00Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-2 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-data-2 + Last Transition Time: 2025-11-19T11:59:00Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-data-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-data-2 + Last Transition Time: 2025-11-19T11:59:25Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-master-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-master-0 + Last Transition Time: 2025-11-19T11:59:25Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-master-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-master-0 + Last Transition Time: 2025-11-19T11:59:25Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-master-0 + Last Transition Time: 2025-11-19T12:00:25Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-master-0 + Observed Generation: 1 + Status: 
True + Type: EvictPod--es-cluster-master-0 + Last Transition Time: 2025-11-19T12:00:15Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-master-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-master-1 + Last Transition Time: 2025-11-19T12:00:15Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-master-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-master-1 + Last Transition Time: 2025-11-19T12:00:15Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-1 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-master-1 + Last Transition Time: 2025-11-19T12:00:15Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-master-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-master-1 + Last Transition Time: 2025-11-19T12:01:05Z + Message: pod exists; ConditionStatus:True; PodName:es-cluster-master-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-cluster-master-2 + Last Transition Time: 2025-11-19T12:01:05Z + Message: create es client; ConditionStatus:True; PodName:es-cluster-master-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-cluster-master-2 + Last Transition Time: 2025-11-19T12:01:05Z + Message: disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-2 + Observed Generation: 1 + Status: True + Type: DisableShardAllocation--es-cluster-master-2 + Last Transition Time: 2025-11-19T12:01:05Z + Message: evict pod; ConditionStatus:True; PodName:es-cluster-master-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-cluster-master-2 + Last Transition Time: 2025-11-19T12:02:10Z + Message: Successfully restarted all nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2025-11-19T12:02:15Z + Message: successfully updated Elasticsearch CR + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2025-11-19T12:02:15Z + Message: Successfully completed the modification process. 
+ Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 19m KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-cluster + Normal UpdatePetSets 19m KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 19m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 19m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 19m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-ingest-0 19m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:False 19m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Normal UpdatePetSets 19m KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + Normal UpdatePetSets 19m KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + Normal UpdatePetSets 18m KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + Warning create es client; ConditionStatus:True 18m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 18m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 18m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 18m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 18m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-ingest-1 18m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning evict pod; ConditionStatus:False; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator evict pod; ConditionStatus:False; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:False 18m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:True; 
PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-ingest-0 18m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-ingest-0 + Warning create es client; ConditionStatus:False 18m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 17m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-2 17m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-2 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-2 17m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-ingest-2 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning create es client; ConditionStatus:True 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 17m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning create es client; ConditionStatus:False 17m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning evict pod; ConditionStatus:False; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:False; PodName:es-cluster-ingest-1 + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning 
disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-ingest-1 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-ingest-1 + Warning create es client; ConditionStatus:False 17m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 17m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-0 17m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-0 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 17m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-data-0 17m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-data-0 + Warning create es client; ConditionStatus:False 17m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 17m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 17m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-ingest-2 16m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning create es client; ConditionStatus:True; PodName:es-cluster-ingest-2 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-2 16m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-ingest-2 16m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-ingest-2 + Warning create es client; ConditionStatus:False 16m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 16m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-1 16m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-1 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-1 16m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-1 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-data-1 16m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; 
PodName:es-cluster-data-1 + Warning create es client; ConditionStatus:False 16m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 16m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-2 16m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-2 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-2 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-2 + Warning create es client; ConditionStatus:True 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-2 16m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-2 + Warning re enable shard allocation; ConditionStatus:True 16m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning evict pod; ConditionStatus:True; PodName:es-cluster-data-2 16m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-data-2 + Warning create es client; ConditionStatus:False 16m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 + Warning evict pod; ConditionStatus:False; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator evict pod; ConditionStatus:False; PodName:es-cluster-data-0 + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-0 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-data-0 16m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-data-0 + Warning create es client; ConditionStatus:False 16m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 15m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-0 15m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-0 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True; 
PodName:es-cluster-master-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 15m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-master-0 15m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-master-0 + Warning create es client; ConditionStatus:False 15m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 15m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-1 15m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-1 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-1 15m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-1 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-data-1 15m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-data-1 + Warning create es client; ConditionStatus:False 15m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 15m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-data-2 15m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-data-2 + Warning create es client; ConditionStatus:True; PodName:es-cluster-data-2 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-data-2 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-2 15m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-data-2 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-data-2 15m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-data-2 + Warning create es client; ConditionStatus:False 15m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 15m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-1 15m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-1 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-master-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-1 15m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-1 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-master-1 15m KubeDB Ops-manager Operator evict 
pod; ConditionStatus:True; PodName:es-cluster-master-1 + Warning create es client; ConditionStatus:True 15m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 15m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning create es client; ConditionStatus:False 14m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-master-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 + Warning evict pod; ConditionStatus:False; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator evict pod; ConditionStatus:False; PodName:es-cluster-master-0 + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-0 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-master-0 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-0 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-master-0 14m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-master-0 + Warning create es client; ConditionStatus:False 14m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 14m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 14m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-2 14m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-2 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-2 14m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-master-2 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-2 14m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-2 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-master-2 14m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-master-2 + Warning create es client; ConditionStatus:False 14m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 13m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 13m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-1 13m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-1 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-1 
13m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-master-1 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-1 13m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-1 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-master-1 13m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-master-1 + Warning create es client; ConditionStatus:False 13m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 13m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 13m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Normal RestartNodes 13m KubeDB Ops-manager Operator Successfully restarted all nodes + Normal UpdateDatabase 13m KubeDB Ops-manager Operator successfully updated Elasticsearch CR + Normal ResumeDatabase 13m KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-cluster + Normal ResumeDatabase 13m KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-cluster + Normal Successful 13m KubeDB Ops-manager Operator Successfully Updated Database + Warning create es client; ConditionStatus:True 12m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 12m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-cluster-master-2 12m KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-cluster-master-2 + Warning create es client; ConditionStatus:True; PodName:es-cluster-master-2 12m KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-cluster-master-2 + Warning disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-2 12m KubeDB Ops-manager Operator disable shard allocation; ConditionStatus:True; PodName:es-cluster-master-2 + Warning evict pod; ConditionStatus:True; PodName:es-cluster-master-2 12m KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-cluster-master-2 + Warning create es client; ConditionStatus:False 12m KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 11m KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning re enable shard allocation; ConditionStatus:True 11m KubeDB Ops-manager Operator re enable shard allocation; ConditionStatus:True + Normal RestartNodes 11m KubeDB Ops-manager Operator Successfully restarted all nodes + +``` +Now, we are going to verify from one of the Pod yaml whether the resources of the topology cluster has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo es-cluster-ingest-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "900m", + "memory": "1288490188800m" + }, + "requests": { + "cpu": "800m", + "memory": "1Gi" + } +} +$ kubectl get pod -n demo es-cluster-data-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "memory": "900Mi" + }, + "requests": { + "cpu": "760m", + "memory": "900Mi" + } +} +$ kubectl get pod -n demo es-cluster-master-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "750m", + "memory": "800Mi" + }, + "requests": { + "cpu": "750m", + "memory": "800Mi" + } +} + +``` + +The above output 
verifies that we have successfully scaled up the resources of the Elasticsearch topology cluster. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete es -n demo es-cluster +kubectl delete Elasticsearchopsrequest -n demo vscale-topology +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/Elasticsearch/concepts/Elasticsearch.md). +- Different Elasticsearch topology clustering modes [here](/docs/guides/Elasticsearch/clustering/_index.md). +- Monitor your Elasticsearch database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/Elasticsearch/monitoring/using-prometheus-operator.md). + +[//]: # (- Monitor your Elasticsearch database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/Elasticsearch/monitoring/using-builtin-prometheus.md).) +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). diff --git a/docs/guides/elasticsearch/update-version/_index.md b/docs/guides/elasticsearch/update-version/_index.md index 21778a847..f06ae940b 100644 --- a/docs/guides/elasticsearch/update-version/_index.md +++ b/docs/guides/elasticsearch/update-version/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-updateversion-elasticsearch name: Update Version parent: es-elasticsearch-guides - weight: 15 + weight: 95 menu_name: docs_{{ .version }} --- From bc12184ebbfb38dc21e6f4240813d3e023f06b00 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Thu, 20 Nov 2025 17:37:32 +0600 Subject: [PATCH 08/13] volume expansion topology Signed-off-by: Bonusree --- ...asticsearch-volume-expansion-topology.yaml | 14 + .../elasticsearch/plugins-backup/_index.md | 2 +- docs/guides/elasticsearch/plugins/_index.md | 2 +- docs/guides/elasticsearch/scaling/_index.md | 14 +- .../scaling/horizontal/_index.md | 10 + .../elasticsearch/scaling/vertical/_index.md | 10 + .../elasticsearch/volume-expantion/_index.md | 10 + .../volume-expantion/combined.md | 312 ++++++++ .../volume-expantion/overview.md | 56 ++ .../volume-expantion/topology.md | 742 ++++++++++++++++++ 10 files changed, 1163 insertions(+), 9 deletions(-) create mode 100644 docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-topology.yaml create mode 100644 docs/guides/elasticsearch/volume-expantion/_index.md create mode 100644 docs/guides/elasticsearch/volume-expantion/combined.md create mode 100644 docs/guides/elasticsearch/volume-expantion/overview.md create mode 100644 docs/guides/elasticsearch/volume-expantion/topology.md diff --git a/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-topology.yaml b/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-topology.yaml new file mode 100644 index 000000000..191463551 --- /dev/null +++ b/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-topology.yaml @@ -0,0 +1,14 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: volume-expansion-topology + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: es-cluster + volumeExpansion: + mode: "Online" + master: 5Gi + data: 5Gi + ingest: 4Gi \ No newline at end of file diff --git a/docs/guides/elasticsearch/plugins-backup/_index.md b/docs/guides/elasticsearch/plugins-backup/_index.md index d7ed6a2b1..1dc860c7c 100644 --- a/docs/guides/elasticsearch/plugins-backup/_index.md +++ b/docs/guides/elasticsearch/plugins-backup/_index.md @@ -5,6 +5,6 @@ menu: identifier: 
guides-es-plugins-backup name: Snapshot & Restore (Repository Plugins) parent: es-elasticsearch-guides - weight: 71 + weight: 155 menu_name: docs_{{ .version }} ---
diff --git a/docs/guides/elasticsearch/plugins/_index.md b/docs/guides/elasticsearch/plugins/_index.md index 0dda46150..daf12a425 100755 --- a/docs/guides/elasticsearch/plugins/_index.md +++ b/docs/guides/elasticsearch/plugins/_index.md @@ -5,6 +5,6 @@ menu: identifier: es-plugin-elasticsearch name: Extensions & Plugins parent: es-elasticsearch-guides - weight: 70 + weight: 165 menu_name: docs_{{ .version }} --- \ No newline at end of file
diff --git a/docs/guides/elasticsearch/scaling/_index.md b/docs/guides/elasticsearch/scaling/_index.md index f213eeec4..9a04cb916 100644 --- a/docs/guides/elasticsearch/scaling/_index.md +++ b/docs/guides/elasticsearch/scaling/_index.md @@ -1,10 +1,10 @@ --- -title: Elasticsearch Scaling +title: Elasticsearch Scaling menu: -docs_{{ .version }}: -identifier: es-scaling-elasticsearch -name: Scaling -parent: es-elasticsearch-guides -weight: 105 +  docs_{{ .version }}: +    identifier: es-scaling-elasticsearch +    name: Scaling +    parent: es-elasticsearch-guides +    weight: 105 menu_name: docs_{{ .version }} ---- +--- \ No newline at end of file
diff --git a/docs/guides/elasticsearch/scaling/horizontal/_index.md b/docs/guides/elasticsearch/scaling/horizontal/_index.md index e69de29bb..d4644f51e 100644 --- a/docs/guides/elasticsearch/scaling/horizontal/_index.md +++ b/docs/guides/elasticsearch/scaling/horizontal/_index.md @@ -0,0 +1,10 @@ +--- +title: Elasticsearch Horizontal Scaling +menu: +  docs_{{ .version }}: +    identifier: es-horizontal-scaling-elasticsearch +    name: Horizontal Scaling +    parent: es-scaling-elasticsearch +    weight: 10 +menu_name: docs_{{ .version }} +--- \ No newline at end of file
diff --git a/docs/guides/elasticsearch/scaling/vertical/_index.md b/docs/guides/elasticsearch/scaling/vertical/_index.md index e69de29bb..56102a43f 100644 --- a/docs/guides/elasticsearch/scaling/vertical/_index.md +++ b/docs/guides/elasticsearch/scaling/vertical/_index.md @@ -0,0 +1,10 @@ +--- +title: Elasticsearch Vertical Scaling +menu: +  docs_{{ .version }}: +    identifier: es-vertical-scaling-elasticsearch +    name: Vertical Scaling +    parent: es-scaling-elasticsearch +    weight: 20 +menu_name: docs_{{ .version }} +--- \ No newline at end of file
diff --git a/docs/guides/elasticsearch/volume-expantion/_index.md b/docs/guides/elasticsearch/volume-expantion/_index.md new file mode 100644 index 000000000..c9a842b9e --- /dev/null +++ b/docs/guides/elasticsearch/volume-expantion/_index.md @@ -0,0 +1,10 @@ +--- +title: Elasticsearch Volume Expansion +menu: +  docs_{{ .version }}: +    identifier: es-volume-expansion-elasticsearch +    name: Volume Expansion +    parent: es-elasticsearch-guides +    weight: 110 +menu_name: docs_{{ .version }} +--- \ No newline at end of file
diff --git a/docs/guides/elasticsearch/volume-expantion/combined.md b/docs/guides/elasticsearch/volume-expantion/combined.md new file mode 100644 index 000000000..cf762607e --- /dev/null +++ b/docs/guides/elasticsearch/volume-expantion/combined.md @@ -0,0 +1,312 @@ +--- +title: Kafka Combined Volume Expansion +menu: +  docs_{{ .version }}: +    identifier: kf-volume-expansion-combined +    name: Combined +    parent: kf-volume-expansion +    weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md).
+ +# Kafka Combined Volume Expansion + +This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a Kafka Combined Cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- You must have a `StorageClass` that supports volume expansion. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/kafka/concepts/kafka.md) + - [Combined](/docs/guides/kafka/clustering/combined-cluster/index.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + - [Volume Expansion Overview](/docs/guides/kafka/volume-expansion/overview.md) + +To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial. + +```bash +$ kubectl create ns demo +namespace/demo created +``` + +> Note: The yaml files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + +## Expand Volume of Combined Kafka Cluster + +Here, we are going to deploy a `Kafka` combined using a supported version by `KubeDB` operator. Then we are going to apply `KafkaOpsRequest` to expand its volume. + +### Prepare Kafka Combined CLuster + +At first verify that your cluster has a storage class, that supports volume expansion. Let's check, + +```bash +$ kubectl get storageclass +NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE +standard (default) kubernetes.io/gce-pd Delete Immediate true 2m49s +``` + +We can see from the output the `standard` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We can use it. + +Now, we are going to deploy a `Kafka` combined cluster with version `3.9.0`. + +### Deploy Kafka + +In this section, we are going to deploy a Kafka combined cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: kafka-dev + namespace: demo +spec: + replicas: 2 + version: 3.9.0 + storage: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 1Gi + storageClassName: standard + storageType: Durable + deletionPolicy: WipeOut +``` + +Let's create the `Kafka` CR we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-combined.yaml +kafka.kubedb.com/kafka-dev created +``` + +Now, wait until `kafka-dev` has status `Ready`. i.e, + +```bash +$ kubectl get kf -n demo -w +NAME TYPE VERSION STATUS AGE +kafka-dev kubedb.com/v1 3.9.0 Provisioning 0s +kafka-dev kubedb.com/v1 3.9.0 Provisioning 24s +. +. 
+kafka-dev kubedb.com/v1 3.9.0 Ready 92s +``` + +Let's check volume size from petset, and from the persistent volume, + +```bash +$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE +pvc-23778f6015324895 1Gi RWO Delete Bound demo/kafka-dev-data-kafka-dev-1 standard 33s +pvc-30b34f642f994e13 1Gi RWO Delete Bound demo/kafka-dev-data-kafka-dev-0 standard 58s +``` + +You can see the petset has 1GB storage, and the capacity of all the persistent volumes are also 1GB. + +We are now ready to apply the `KafkaOpsRequest` CR to expand the volume of this database. + +### Volume Expansion + +Here, we are going to expand the volume of the kafka combined cluster. + +#### Create KafkaOpsRequest + +In order to expand the volume of the database, we have to create a `KafkaOpsRequest` CR with our desired volume size. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: KafkaOpsRequest +metadata: + name: kf-volume-exp-combined + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: kafka-dev + volumeExpansion: + node: 2Gi + mode: Online +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `kafka-dev`. +- `spec.type` specifies that we are performing `VolumeExpansion` on our database. +- `spec.volumeExpansion.node` specifies the desired volume size. + +Let's create the `KafkaOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml +kafkaopsrequest.ops.kubedb.com/kf-volume-exp-combined created +``` + +#### Verify Kafka Combined volume expanded successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the volume size of `Kafka` object and related `PetSets` and `Persistent Volumes`. + +Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR, + +```bash +$ kubectl get kafkaopsrequest -n demo +NAME TYPE STATUS AGE +kf-volume-exp-combined VolumeExpansion Successful 2m4s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database. + +```bash +$ kubectl describe kafkaopsrequest -n demo kf-volume-exp-combined +Name: kf-volume-exp-combined +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2024-07-30T10:45:57Z + Generation: 1 + Resource Version: 91816 + UID: 0febb459-3373-4f75-b7da-46391edf557f +Spec: + Apply: IfReady + Database Ref: + Name: kafka-dev + Type: VolumeExpansion + Volume Expansion: + Mode: Online + Node: 2Gi +Status: + Conditions: + Last Transition Time: 2024-07-30T10:45:57Z + Message: Kafka ops-request has started to expand volume of kafka nodes. 
+ Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2024-07-30T10:46:05Z + Message: get pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPetSet + Last Transition Time: 2024-07-30T10:46:05Z + Message: is petset deleted; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPetsetDeleted + Last Transition Time: 2024-07-30T10:46:15Z + Message: successfully deleted the petSets with orphan propagation policy + Observed Generation: 1 + Reason: OrphanPetSetPods + Status: True + Type: OrphanPetSetPods + Last Transition Time: 2024-07-30T10:46:20Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2024-07-30T10:46:20Z + Message: is pvc patched; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IsPvcPatched + Last Transition Time: 2024-07-30T10:46:25Z + Message: compare storage; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CompareStorage + Last Transition Time: 2024-07-30T10:46:40Z + Message: successfully updated combined node PVC sizes + Observed Generation: 1 + Reason: UpdateCombinedNodePVCs + Status: True + Type: UpdateCombinedNodePVCs + Last Transition Time: 2024-07-30T10:46:45Z + Message: successfully reconciled the Kafka resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2024-07-30T10:46:51Z + Message: PetSet is recreated + Observed Generation: 1 + Reason: ReadyPetSets + Status: True + Type: ReadyPetSets + Last Transition Time: 2024-07-30T10:46:51Z + Message: Successfully completed volumeExpansion for kafka + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal Starting 24m KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kf-volume-exp-combined + Normal Starting 24m KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev + Normal Successful 24m KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kf-volume-exp-combined + Warning get pet set; ConditionStatus:True 24m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning is petset deleted; ConditionStatus:True 24m KubeDB Ops-manager Operator is petset deleted; ConditionStatus:True + Warning get pet set; ConditionStatus:True 23m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 23m KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 23m KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 23m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning is pvc patched; ConditionStatus:True 23m KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True + Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 23m KubeDB 
Ops-manager Operator compare storage; ConditionStatus:True + Normal UpdateCombinedNodePVCs 23m KubeDB Ops-manager Operator successfully updated combined node PVC sizes + Normal UpdatePetSets 23m KubeDB Ops-manager Operator successfully reconciled the Kafka resources + Warning get pet set; ConditionStatus:True 23m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal ReadyPetSets 23m KubeDB Ops-manager Operator PetSet is recreated + Normal Starting 23m KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-dev + Normal Successful 23m KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kf-volume-exp-combined + ``` + + Now, we are going to verify from the `Petset`, and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state, Let's check, + + ```bash + $ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' + "2Gi" + + $ kubectl get pv -n demo + NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE + pvc-23778f6015324895 2Gi RWO Delete Bound demo/kafka-dev-data-kafka-dev-1 standard 7m2s + pvc-30b34f642f994e13 2Gi RWO Delete Bound demo/kafka-dev-data-kafka-dev-0 standard 7m9s + ``` + + The above output verifies that we have successfully expanded the volume of the Kafka. + + ## Cleaning Up + + To clean up the Kubernetes resources created by this tutorial, run: + + ```bash + kubectl delete kafkaopsrequest -n demo kf-volume-exp-combined + kubectl delete kf -n demo kafka-dev + kubectl delete ns demo + ``` + + ## Next Steps + + - Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md). + - Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md). + - Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md). + + [//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).) + - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/elasticsearch/volume-expantion/overview.md b/docs/guides/elasticsearch/volume-expantion/overview.md new file mode 100644 index 000000000..e597957f4 --- /dev/null +++ b/docs/guides/elasticsearch/volume-expantion/overview.md @@ -0,0 +1,56 @@ +--- +title: Elasticsearch Volume Expansion Overview +menu: +  docs_{{ .version }}: +    identifier: es-volume-expansion-overview +    name: Overview +    parent: es-volume-expansion-elasticsearch +    weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Elasticsearch Volume Expansion + +This guide gives an overview of how the KubeDB Ops-manager operator expands the volume of the various components of an `Elasticsearch` cluster (Combined and Topology). + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: +  - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) +  - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) + +## How Volume Expansion Process Works + +The following diagram shows how the KubeDB Ops-manager operator expands the volumes of `Elasticsearch` database components. Open the image in a new tab to see the enlarged version. + +{{/* Fig: Volume Expansion process of Elasticsearch (image placeholder) */}} + +The Volume Expansion process consists of the following steps: + +1. At first, a user creates an `Elasticsearch` Custom Resource (CR). + +2. `KubeDB` Provisioner operator watches the `Elasticsearch` CR. + +3. When the operator finds an `Elasticsearch` CR, it creates the required number of `PetSets` and related resources like secrets, services, etc. + +4. Each PetSet creates a Persistent Volume according to the Volume Claim Template provided in the petset configuration. This Persistent Volume will be expanded by the `KubeDB` Ops-manager operator. + +5. Then, in order to expand the volume of the various components of the `Elasticsearch` cluster (i.e. the Combined nodes, or the master, data, and ingest nodes of a Topology cluster), the user creates an `ElasticsearchOpsRequest` CR with the desired information; a minimal sketch of such a CR is shown after these steps. + +6. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CR. + +7. When it finds an `ElasticsearchOpsRequest` CR, it halts the `Elasticsearch` object which is referred from the `ElasticsearchOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Elasticsearch` object during the volume expansion process. + +8. Then the `KubeDB` Ops-manager operator will expand the persistent volume to reach the expected size defined in the `ElasticsearchOpsRequest` CR. + +9. After the successful Volume Expansion of the related PetSet Pods, the `KubeDB` Ops-manager operator updates the new volume size in the `Elasticsearch` object to reflect the updated state. + +10. After the successful Volume Expansion of the `Elasticsearch` components, the `KubeDB` Ops-manager operator resumes the `Elasticsearch` object so that the `KubeDB` Provisioner operator resumes its usual operations.
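To make the flow concrete, the `ElasticsearchOpsRequest` created in step 5 is, in outline, a small YAML object like the sketch below. This is only an illustration: the name `sample-volume-expansion` and the sizes are placeholders, and working, end-to-end examples follow in the next guides.

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
metadata:
  name: sample-volume-expansion        # placeholder name, for illustration only
  namespace: demo
spec:
  type: VolumeExpansion
  databaseRef:
    name: es-cluster                   # the Elasticsearch object whose volumes will be expanded
  volumeExpansion:
    mode: "Online"                     # or "Offline", depending on what the StorageClass/CSI driver supports
    # for a topology cluster, each node role can be expanded independently
    master: 2Gi
    data: 2Gi
    ingest: 2Gi
```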
+ In the next docs, we are going to show a step-by-step guide on Volume Expansion of various Elasticsearch database components using `ElasticsearchOpsRequest` CRD.
diff --git a/docs/guides/elasticsearch/volume-expantion/topology.md b/docs/guides/elasticsearch/volume-expantion/topology.md new file mode 100644 index 000000000..797ddcbef --- /dev/null +++ b/docs/guides/elasticsearch/volume-expantion/topology.md @@ -0,0 +1,742 @@ +--- +title: Elasticsearch Topology Volume Expansion +menu: +  docs_{{ .version }}: +    identifier: es-volume-expansion-topology +    name: Topology +    parent: es-volume-expansion-elasticsearch +    weight: 30 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Elasticsearch Topology Volume Expansion + +This guide will show you how to use the `KubeDB` Ops-manager operator to expand the volume of an Elasticsearch Topology Cluster. + +## Before You Begin + +- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. + +- You must have a `StorageClass` that supports volume expansion. + +- Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). + +- You should be familiar with the following `KubeDB` concepts: +  - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md) +  - [Topology](/docs/guides/elasticsearch/clustering/topology-cluster/_index.md) +  - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md) +  - [Volume Expansion Overview](/docs/guides/elasticsearch/volume-expansion/overview.md) + + To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+ + ```bash + $ kubectl create ns demo + namespace/demo created + ``` + + > Note: The yaml files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs). + + ## Expand Volume of Topology Elasticsearch Cluster + + Here, we are going to deploy an `Elasticsearch` topology cluster using a version supported by the `KubeDB` operator. Then we are going to apply an `ElasticsearchOpsRequest` to expand its volume. + + ### Prepare Elasticsearch Topology Cluster + + At first, verify that your cluster has a storage class that supports volume expansion. Let's check, + + ```bash + $ kubectl get storageclass + NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE + standard (default) kubernetes.io/gce-pd Delete Immediate true 2m49s + ``` + + We can see from the output that the `standard` storage class has the `ALLOWVOLUMEEXPANSION` field set to true. So, this storage class supports volume expansion. We can use it. + + Now, we are going to deploy an `Elasticsearch` topology cluster with version `xpack-8.11.1`. + + ### Deploy Elasticsearch + + In this section, we are going to deploy an Elasticsearch topology cluster with master, data, and ingest nodes, each with a 1Gi volume. Then, in the next section we will expand its volume to 5Gi for the master and data nodes and 4Gi for the ingest nodes using the `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create, + + ```yaml + apiVersion: kubedb.com/v1 + kind: Elasticsearch + metadata: +   name: es-cluster +   namespace: demo + spec: +   enableSSL: true +   version: xpack-8.11.1 +   storageType: Durable +   topology: +     master: +       replicas: 3 +       storage: +         storageClassName: "standard" +         accessModes: +           - ReadWriteOnce +         resources: +           requests: +             storage: 1Gi + +     data: +       replicas: 3 +       storage: +         storageClassName: "standard" +         accessModes: +           - ReadWriteOnce +         resources: +           requests: +             storage: 1Gi +     ingest: +       replicas: 3 +       storage: +         storageClassName: "standard" +         accessModes: +           - ReadWriteOnce +         resources: +           requests: +             storage: 1Gi + ``` + + Let's create the `Elasticsearch` CR we have shown above, + + ```bash + $ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/clustering/topology-es.yaml + Elasticsearch.kubedb.com/es-cluster created + ``` + + Now, wait until `es-cluster` has status `Ready`.
i.e, + +```bash +$ kubectl get es -n demo +NAME VERSION STATUS AGE +es-cluster xpack-8.11.1 Ready 22h + +``` + +Let's check volume size from petset, and from the persistent volume, + +```bash +$ kubectl get petset -n demo es-cluster-data -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" +$ kubectl get petset -n demo es-cluster-master -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" +$ kubectl get petset -n demo es-cluster-ingest -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"1Gi" +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +pvc-11b48c6e-d996-45a7-8ba2-f8d71a655912 1Gi RWO Delete Bound demo/data-es-cluster-ingest-2 local-path 22h +pvc-1904104c-bbf2-4754-838a-8a647b2bd23e 1Gi RWO Delete Bound demo/data-es-cluster-data-2 local-path 22h +pvc-19aa694a-29c0-43d9-a495-c84c77df2dd8 1Gi RWO Delete Bound demo/data-es-cluster-master-0 local-path 22h +pvc-33702b18-7e98-41b7-9b19-73762cb4f86a 1Gi RWO Delete Bound demo/data-es-cluster-master-1 local-path 22h +pvc-8604968f-f433-4931-82bc-8d240d6f52d8 1Gi RWO Delete Bound demo/data-es-cluster-data-0 local-path 22h +pvc-ae5ccc43-d078-4816-a553-8a3cd1f674be 1Gi RWO Delete Bound demo/data-es-cluster-ingest-0 local-path 22h +pvc-b4225042-c69f-41df-99b2-1b3191057a85 1Gi RWO Delete Bound demo/data-es-cluster-data-1 local-path 22h +pvc-bd4b7d5a-8494-4ee2-a25c-697a6f23cb79 1Gi RWO Delete Bound demo/data-es-cluster-ingest-1 local-path 22h +pvc-c9057b3b-4412-467f-8ae5-f6414e0059c3 1Gi RWO Delete Bound demo/data-es-cluster-master-2 local-path 22h +``` + +You can see the petsets have 1GB storage, and the capacity of all the persistent volumes are also 1GB. + +We are now ready to apply the `ElasticsearchOpsRequest` CR to expand the volume of this database. + +### Volume Expansion + +Here, we are going to expand the volume of the Elasticsearch topology cluster. + +#### Create ElasticsearchOpsRequest + +In order to expand the volume of the database, we have to create a `ElasticsearchOpsRequest` CR with our desired volume size. Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: volume-expansion-topology + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: es-cluster + volumeExpansion: + mode: "Online" + master: 5Gi + data: 5Gi + ingest: 4Gi +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `es-cluster`. +- `spec.type` specifies that we are performing `VolumeExpansion` on our database. +- `spec.volumeExpansion.data` specifies the desired volume size for data node. +- `spec.volumeExpansion.master` specifies the desired volume size for master node. +- `spec.volumeExpansion.ingest` specifies the desired volume size for ingest node. + +> If you want to expand the volume of only one node, you can specify the desired volume size for that node only. 
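As an illustration of that note, a minimal sketch of an ops request that expands only the data nodes could look like the following; the name `volume-expansion-data-only` is a hypothetical example and is not used elsewhere in this guide:

```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
metadata:
  name: volume-expansion-data-only    # hypothetical name, for illustration only
  namespace: demo
spec:
  type: VolumeExpansion
  databaseRef:
    name: es-cluster
  volumeExpansion:
    mode: "Online"
    # only the data nodes are resized; master and ingest PVCs are left unchanged
    data: 5Gi
```

The rest of this guide uses the `volume-expansion-topology` request shown above, which resizes all three node roles.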
+ +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/volume-expansion/elasticsearch-volume-expansion-topology.yaml +Elasticsearchopsrequest.ops.kubedb.com/volume-expansion-topology created +``` + +#### Verify Elasticsearch Topology volume expanded successfully + +If everything goes well, `KubeDB` Ops-manager operator will update the volume size of `Elasticsearch` object and related `PetSets` and `Persistent Volumes`. + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CR, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +volume-expansion-topology VolumeExpansion Successful 44m + +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to expand the volume of Elasticsearch. + +```bash +$ kubectl describe Elasticsearchopsrequest -n demo volume-expansion-topology +Name: volume-expansion-topology +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-20T10:07:17Z + Generation: 1 + Resource Version: 115931 + UID: 38107c4f-4249-4597-b8b4-06a445891872 +Spec: + Apply: IfReady + Database Ref: + Name: es-cluster + Type: VolumeExpansion + Volume Expansion: + Data: 5Gi + Ingest: 4Gi + Master: 5Gi + Mode: Offline +Status: + Conditions: + Last Transition Time: 2025-11-20T10:07:17Z + Message: Elasticsearch ops request is expanding volume of the Elasticsearch nodes. + Observed Generation: 1 + Reason: VolumeExpansion + Status: True + Type: VolumeExpansion + Last Transition Time: 2025-11-20T10:07:25Z + Message: get pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPetSet + Last Transition Time: 2025-11-20T10:07:25Z + Message: delete pet set; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: deletePetSet + Last Transition Time: 2025-11-20T10:07:55Z + Message: successfully deleted the PetSets with orphan propagation policy + Observed Generation: 1 + Reason: OrphanPetSetPods + Status: True + Type: OrphanPetSetPods + Last Transition Time: 2025-11-20T10:08:00Z + Message: get pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPod + Last Transition Time: 2025-11-20T10:08:00Z + Message: patch opsrequest; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchOpsrequest + Last Transition Time: 2025-11-20T10:20:20Z + Message: create db client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateDbClient + Last Transition Time: 2025-11-20T10:08:00Z + Message: delete pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeletePod + Last Transition Time: 2025-11-20T10:08:05Z + Message: get pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPvc + Last Transition Time: 2025-11-20T10:19:55Z + Message: compare storage; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CompareStorage + Last Transition Time: 2025-11-20T10:11:05Z + Message: create pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreatePod + Last Transition Time: 2025-11-20T10:11:40Z + Message: patch pvc; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchPvc + Last 
Transition Time: 2025-11-20T10:13:55Z + Message: successfully updated ingest node PVC sizes + Observed Generation: 1 + Reason: VolumeExpansionIngestNode + Status: True + Type: VolumeExpansionIngestNode + Last Transition Time: 2025-11-20T10:14:00Z + Message: db operation; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DbOperation + Last Transition Time: 2025-11-20T10:17:15Z + Message: successfully updated data node PVC sizes + Observed Generation: 1 + Reason: VolumeExpansionDataNode + Status: True + Type: VolumeExpansionDataNode + Last Transition Time: 2025-11-20T10:20:25Z + Message: successfully updated master node PVC sizes + Observed Generation: 1 + Reason: VolumeExpansionMasterNode + Status: True + Type: VolumeExpansionMasterNode + Last Transition Time: 2025-11-20T10:21:02Z + Message: successfully reconciled the Elasticsearch resources + Observed Generation: 1 + Reason: UpdatePetSets + Status: True + Type: UpdatePetSets + Last Transition Time: 2025-11-20T10:21:07Z + Message: PetSet is recreated + Observed Generation: 1 + Reason: ReadyPetSets + Status: True + Type: ReadyPetSets + Last Transition Time: 2025-11-20T10:21:12Z + Message: successfully updated Elasticsearch CR + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2025-11-20T10:21:12Z + Message: Successfully completed the modification process. + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 45m KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-cluster + Warning get pet set; ConditionStatus:True 45m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning delete pet set; ConditionStatus:True 45m KubeDB Ops-manager Operator delete pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning delete pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator delete pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning delete pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator delete pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 44m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 44m KubeDB Ops-manager Operator successfully deleted the PetSets with orphan propagation policy + Warning get pod; ConditionStatus:True 44m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 44m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 44m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning delete pod; ConditionStatus:True 44m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 44m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 44m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 44m KubeDB Ops-manager 
Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 44m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 44m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 44m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 44m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 44m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 44m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 44m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 44m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 43m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 43m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; 
ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 42m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 42m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + 
Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 41m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 41m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 41m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 41m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 41m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 41m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 40m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 40m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 40m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning delete pod; ConditionStatus:True 40m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 40m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 40m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 40m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 40m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 40m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 40m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 40m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 40m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 40m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 40m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + 
Warning create db client; ConditionStatus:False 40m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 40m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 39m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 39m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 39m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning delete pod; ConditionStatus:True 39m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 39m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 39m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 39m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 39m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 39m KubeDB Ops-manager 
Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 39m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 38m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 38m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Normal VolumeExpansionIngestNode 38m KubeDB Ops-manager Operator successfully updated ingest node PVC sizes + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 38m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 38m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 38m KubeDB Ops-manager Operator db operation; ConditionStatus:True + Warning delete pod; ConditionStatus:True 38m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 38m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 38m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 38m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 38m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 38m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 38m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 38m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 38m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 38m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 38m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 38m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 37m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB 
Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 37m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 37m KubeDB Ops-manager Operator db operation; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 37m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 37m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 37m KubeDB Ops-manager Operator db operation; ConditionStatus:True + Warning delete pod; ConditionStatus:True 37m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 37m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 37m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 37m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 36m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 36m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 36m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; 
ConditionStatus:True + Warning create db client; ConditionStatus:False 36m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 36m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 36m KubeDB Ops-manager Operator db operation; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 36m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 36m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 36m KubeDB Ops-manager Operator db operation; ConditionStatus:True + Warning delete pod; ConditionStatus:True 36m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 36m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 36m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 36m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 35m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 35m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 35m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 35m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 35m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 35m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 35m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 35m KubeDB Ops-manager Operator create db client; 
ConditionStatus:False + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 35m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 35m KubeDB Ops-manager Operator db operation; ConditionStatus:True + Normal VolumeExpansionDataNode 35m KubeDB Ops-manager Operator successfully updated data node PVC sizes + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 35m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 35m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning delete pod; ConditionStatus:True 35m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 35m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 35m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 35m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 35m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 34m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 34m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 34m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 34m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning 
get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 34m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 34m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 34m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning delete pod; ConditionStatus:True 34m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 34m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 34m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 34m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 33m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 33m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 33m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 33m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 33m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 33m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 33m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 33m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 33m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 33m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 33m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning get pod; ConditionStatus:True 33m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; 
ConditionStatus:True 33m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 33m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning delete pod; ConditionStatus:True 33m KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 32m KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 32m KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 32m KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 32m KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 32m KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 32m KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 32m KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:True 32m KubeDB Ops-manager Operator create db client; ConditionStatus:True + Normal VolumeExpansionMasterNode 32m KubeDB Ops-manager Operator successfully updated master node PVC sizes + Normal UpdatePetSets 31m KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + Warning get pet set; ConditionStatus:True 31m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 31m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 31m KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal ReadyPetSets 31m KubeDB Ops-manager Operator PetSet is recreated + Normal UpdateDatabase 31m KubeDB Ops-manager Operator 
successfully updated Elasticsearch CR + Normal ResumeDatabase 31m KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-cluster + Normal ResumeDatabase 31m KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-cluster + Normal Successful 31m KubeDB Ops-manager Operator Successfully Updated Database + Normal UpdatePetSets 31m KubeDB Ops-manager Operator successfully reconciled the Elasticsearch resources + +``` + +Now, we are going to verify from the `Petset`, and the `Persistent Volumes` whether the volume of the database has expanded to meet the desired state, Let's check, + +```bash +$ kubectl get petset -n demo es-cluster-data -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"5Gi" +$ kubectl get petset -n demo es-cluster-master -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"5Gi" +$ kubectl get petset -n demo es-cluster-ingest -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +"4Gi" + +$ kubectl get pv -n demo +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +pvc-37f7398d-0251-4d3c-a439-d289b8cec6d2 5Gi RWO Delete Bound demo/data-es-cluster-master-2 standard 111m +pvc-3a5d2b3e-dd39-4468-a8da-5274992a6502 5Gi RWO Delete Bound demo/data-es-cluster-master-0 standard 111m +pvc-3cf21868-4b51-427b-b7ef-d0d26c753c8b 5Gi RWO Delete Bound demo/data-es-cluster-master-1 standard 111m +pvc-56e6ed8f-a729-4532-bdec-92b8101f7813 5Gi RWO Delete Bound demo/data-es-cluster-data-2 standard 111m +pvc-783d51f7-3bf2-4121-8f18-357d14d003ad 4Gi RWO Delete Bound demo/data-es-cluster-ingest-0 standard 111m +pvc-81d6c1d3-0aa6-4190-9ee0-dd4a8d62b6b3 4Gi RWO Delete Bound demo/data-es-cluster-ingest-2 standard 111m +pvc-942c6dce-4701-4e1a-b6f9-bf7d4ab56a11 5Gi RWO Delete Bound demo/data-es-cluster-data-1 standard 111m +pvc-b706647d-c9ba-4296-94aa-2f6ef2230b6e 4Gi RWO Delete Bound demo/data-es-cluster-ingest-1 standard 111m +pvc-c274f913-5452-47e1-ab42-ba584bdae297 5Gi RWO Delete Bound demo/data-es-cluster-data-0 standard 111m +``` + +The above output verifies that we have successfully expanded the volume of the Elasticsearch. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete Elasticsearchopsrequest -n demo volume-expansion-topology +kubectl delete es -n demo es-cluster +kubectl delete ns demo +``` + +## Next Steps + +- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch/index.md). +- Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/topology-cluster/simple-dedicated-cluster/index.md). +- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md). 
From 840ef4098b775368027022251898d6cc025541b0 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Thu, 20 Nov 2025 18:40:03 +0600 Subject: [PATCH 09/13] combined volume expansion Signed-off-by: Bonusree --- ...asticsearch-volume-expansion-combined.yaml | 12 + .../volume-expansion-topo-data.yaml | 12 + .../volume-expantion/combined.md | 299 +++++++++++------- .../volume-expantion/topology.md | 25 +- 4 files changed, 224 insertions(+), 124 deletions(-) create mode 100644 docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-combined.yaml create mode 100644 docs/examples/elasticsearch/volume-expantion/volume-expansion-topo-data.yaml diff --git a/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-combined.yaml b/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-combined.yaml new file mode 100644 index 000000000..646a9378e --- /dev/null +++ b/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-combined.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: es-volume-expansion-combined + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: es-combined + volumeExpansion: + mode: "Online" + node: 4Gi \ No newline at end of file diff --git a/docs/examples/elasticsearch/volume-expantion/volume-expansion-topo-data.yaml b/docs/examples/elasticsearch/volume-expantion/volume-expansion-topo-data.yaml new file mode 100644 index 000000000..de94dfe30 --- /dev/null +++ b/docs/examples/elasticsearch/volume-expantion/volume-expansion-topo-data.yaml @@ -0,0 +1,12 @@ +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: volume-expansion-data-nodes + namespace: demo +spec: + type: VolumeExpansion + databaseRef: + name: es-cluster + volumeExpansion: + mode: "Online" + data: 5Gi \ No newline at end of file diff --git a/docs/guides/elasticsearch/volume-expantion/combined.md b/docs/guides/elasticsearch/volume-expantion/combined.md index cf762607e..beb87126f 100644 --- a/docs/guides/elasticsearch/volume-expantion/combined.md +++ b/docs/guides/elasticsearch/volume-expantion/combined.md @@ -1,20 +1,20 @@ --- -title: Kafka Combined Volume Expansion +title: Elasticsearch Combined Volume Expansion menu: docs_{{ .version }}: - identifier: kf-volume-expansion-combined + identifier: es-volume-expansion-combined name: Combined - parent: kf-volume-expansion - weight: 30 + parent: es-voulume-expansion-elasticsearch + weight: 10 menu_name: docs_{{ .version }} section_menu_id: guides --- > New to KubeDB? Please start [here](/docs/README.md). -# Kafka Combined Volume Expansion +# Elasticsearch Combined Volume Expansion -This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a Kafka Combined Cluster. +This guide will show you how to use `KubeDB` Ops-manager operator to expand the volume of a Elasticsearch Combined Cluster. ## Before You Begin @@ -25,10 +25,10 @@ This guide will show you how to use `KubeDB` Ops-manager operator to expand the - Install `KubeDB` Provisioner and Ops-manager operator in your cluster following the steps [here](/docs/setup/README.md). 
 - You should be familiar with the following `KubeDB` concepts:
-  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
-  - [Combined](/docs/guides/kafka/clustering/combined-cluster/index.md)
-  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
-  - [Volume Expansion Overview](/docs/guides/kafka/volume-expansion/overview.md)
+  - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md)
+  - [Combined](/docs/guides/elasticsearch/clustering/combined-cluster/index.md)
+  - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md)
+  - [Volume Expansion Overview](/docs/guides/elasticsearch/volume-expansion/overview.md)

 To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.

@@ -37,13 +37,13 @@ $ kubectl create ns demo
 namespace/demo created
 ```

-> Note: The yaml files used in this tutorial are stored in [docs/examples/kafka](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/kafka) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+> Note: The yaml files used in this tutorial are stored in [docs/examples/elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).

-## Expand Volume of Combined Kafka Cluster
+## Expand Volume of Combined Elasticsearch Cluster

-Here, we are going to deploy a `Kafka` combined using a supported version by `KubeDB` operator. Then we are going to apply `KafkaOpsRequest` to expand its volume.
+Here, we are going to deploy an `Elasticsearch` combined cluster using a supported version by `KubeDB` operator. Then we are going to apply an `ElasticsearchOpsRequest` to expand its volume.

-### Prepare Kafka Combined CLuster
+### Prepare Elasticsearch Combined Cluster

 At first verify that your cluster has a storage class, that supports volume expansion. Let's check,

@@ -55,197 +55,231 @@ standard (default)   kubernetes.io/gce-pd   Delete          Immediate

 We can see from the output the `standard` storage class has `ALLOWVOLUMEEXPANSION` field as true. So, this storage class supports volume expansion. We can use it.

-Now, we are going to deploy a `Kafka` combined cluster with version `3.9.0`.
+Now, we are going to deploy an `Elasticsearch` combined cluster with version `xpack-8.11.1`.

-### Deploy Kafka
+### Deploy Elasticsearch

-In this section, we are going to deploy a Kafka combined cluster with 1GB volume. Then, in the next section we will expand its volume to 2GB using `KafkaOpsRequest` CRD. Below is the YAML of the `Kafka` CR that we are going to create,
+In this section, we are going to deploy an Elasticsearch combined cluster with 1GB volume. Then, in the next section we will expand its volume to 4GB using `ElasticsearchOpsRequest` CRD.
Below is the YAML of the `Elasticsearch` CR that we are going to create, ```yaml apiVersion: kubedb.com/v1 -kind: Kafka +kind: Elasticsearch metadata: - name: kafka-dev + name: es-combined namespace: demo spec: - replicas: 2 - version: 3.9.0 + version: xpack-8.11.1 + enableSSL: true + replicas: 1 + storageType: Durable storage: + storageClassName: "standard" accessModes: - - ReadWriteOnce + - ReadWriteOnce resources: requests: storage: 1Gi - storageClassName: standard - storageType: Durable deletionPolicy: WipeOut + ``` -Let's create the `Kafka` CR we have shown above, +Let's create the `Elasticsearch` CR we have shown above, ```bash -$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-combined.yaml -kafka.kubedb.com/kafka-dev created +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/clustering/multi-nodes-es.yaml +Elasticsearch.kubedb.com/es-combined created ``` -Now, wait until `kafka-dev` has status `Ready`. i.e, +Now, wait until `es-combined` has status `Ready`. i.e, ```bash -$ kubectl get kf -n demo -w -NAME TYPE VERSION STATUS AGE -kafka-dev kubedb.com/v1 3.9.0 Provisioning 0s -kafka-dev kubedb.com/v1 3.9.0 Provisioning 24s -. -. -kafka-dev kubedb.com/v1 3.9.0 Ready 92s +$ kubectl get es -n demo -w +NAME VERSION STATUS AGE +es-combined xpack-8.11.1 Ready 75s + ``` Let's check volume size from petset, and from the persistent volume, ```bash -$ kubectl get petset -n demo kafka-dev -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' +$ kubectl get petset -n demo es-combined -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage' "1Gi" - $ kubectl get pv -n demo -NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE -pvc-23778f6015324895 1Gi RWO Delete Bound demo/kafka-dev-data-kafka-dev-1 standard 33s -pvc-30b34f642f994e13 1Gi RWO Delete Bound demo/kafka-dev-data-kafka-dev-0 standard 58s +NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE +pvc-edeeff75-9823-4aeb-9189-37adad567ec7 1Gi RWO Delete Bound demo/data-es-combined-0 longhorn 2m21s + ``` You can see the petset has 1GB storage, and the capacity of all the persistent volumes are also 1GB. -We are now ready to apply the `KafkaOpsRequest` CR to expand the volume of this database. +We are now ready to apply the `ElasticsearchOpsRequest` CR to expand the volume of this database. ### Volume Expansion -Here, we are going to expand the volume of the kafka combined cluster. +Here, we are going to expand the volume of the Elasticsearch combined cluster. -#### Create KafkaOpsRequest +#### Create ElasticsearchOpsRequest -In order to expand the volume of the database, we have to create a `KafkaOpsRequest` CR with our desired volume size. Below is the YAML of the `KafkaOpsRequest` CR that we are going to create, +In order to expand the volume of the database, we have to create a `ElasticsearchOpsRequest` CR with our desired volume size. 
Below is the YAML of the `ElasticsearchOpsRequest` CR that we are going to create,

 ```yaml
 apiVersion: ops.kubedb.com/v1alpha1
-kind: KafkaOpsRequest
+kind: ElasticsearchOpsRequest
 metadata:
-  name: kf-volume-exp-combined
+  name: es-volume-expansion-combined
   namespace: demo
 spec:
   type: VolumeExpansion
   databaseRef:
-    name: kafka-dev
+    name: es-combined
   volumeExpansion:
-    node: 2Gi
-    mode: Online
+    mode: "Online"
+    node: 4Gi
 ```

 Here,

-- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `kafka-dev`.
+- `spec.databaseRef.name` specifies that we are performing volume expansion operation on `es-combined`.
 - `spec.type` specifies that we are performing `VolumeExpansion` on our database.
 - `spec.volumeExpansion.node` specifies the desired volume size.

-Let's create the `KafkaOpsRequest` CR we have shown above,
+Let's create the `ElasticsearchOpsRequest` CR we have shown above,

 ```bash
-$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/kafka/volume-expansion/kafka-volume-expansion-combined.yaml
-kafkaopsrequest.ops.kubedb.com/kf-volume-exp-combined created
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/volume-expantion/elasticsearch-volume-expansion-combined.yaml
+Elasticsearchopsrequest.ops.kubedb.com/es-volume-expansion-combined created
 ```

-#### Verify Kafka Combined volume expanded successfully
+#### Verify Elasticsearch Combined volume expanded successfully

-If everything goes well, `KubeDB` Ops-manager operator will update the volume size of `Kafka` object and related `PetSets` and `Persistent Volumes`.
+If everything goes well, `KubeDB` Ops-manager operator will update the volume size of the `Elasticsearch` object and related `PetSets` and `Persistent Volumes`.

-Let's wait for `KafkaOpsRequest` to be `Successful`. Run the following command to watch `KafkaOpsRequest` CR,
+Let's wait for the `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch the `ElasticsearchOpsRequest` CR,

 ```bash
-$ kubectl get kafkaopsrequest -n demo
+$ kubectl get Elasticsearchopsrequest -n demo
 NAME                           TYPE              STATUS       AGE
-kf-volume-exp-combined         VolumeExpansion   Successful   2m4s
+es-volume-expansion-combined   VolumeExpansion   Successful   2m4s
 ```

-We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
+We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed to expand the volume of the database.
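+
+If you would rather block until the request finishes before inspecting it in detail, a `kubectl wait` on the reported phase is one option. This is a minimal sketch: the `.status.phase` value `Successful` follows the status shown above, the 10-minute timeout is an arbitrary choice, and JSONPath wait conditions require kubectl v1.23 or newer.
+
+```bash
+# Block until the OpsRequest reports phase "Successful" (or give up after 10 minutes).
+$ kubectl wait elasticsearchopsrequest/es-volume-expansion-combined -n demo \
+    --for=jsonpath='{.status.phase}'=Successful --timeout=10m
+```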
```bash -$ kubectl describe kafkaopsrequest -n demo kf-volume-exp-combined -Name: kf-volume-exp-combined +$ kubectl describe Elasticsearchopsrequest -n demo es-volume-expansion-combined +Name: es-volume-expansion-combined Namespace: demo Labels: Annotations: API Version: ops.kubedb.com/v1alpha1 -Kind: KafkaOpsRequest +Kind: ElasticsearchOpsRequest Metadata: - Creation Timestamp: 2024-07-30T10:45:57Z + Creation Timestamp: 2025-11-20T12:19:05Z Generation: 1 - Resource Version: 91816 - UID: 0febb459-3373-4f75-b7da-46391edf557f + Resource Version: 127891 + UID: 4199c88c-d3c4-44d0-8084-efdaa49b9c03 Spec: Apply: IfReady Database Ref: - Name: kafka-dev + Name: es-combined Type: VolumeExpansion Volume Expansion: - Mode: Online - Node: 2Gi + Mode: Offline + Node: 4Gi Status: Conditions: - Last Transition Time: 2024-07-30T10:45:57Z - Message: Kafka ops-request has started to expand volume of kafka nodes. + Last Transition Time: 2025-11-20T12:19:05Z + Message: Elasticsearch ops request is expanding volume of the Elasticsearch nodes. Observed Generation: 1 Reason: VolumeExpansion Status: True Type: VolumeExpansion - Last Transition Time: 2024-07-30T10:46:05Z + Last Transition Time: 2025-11-20T12:19:13Z Message: get pet set; ConditionStatus:True Observed Generation: 1 Status: True Type: GetPetSet - Last Transition Time: 2024-07-30T10:46:05Z - Message: is petset deleted; ConditionStatus:True + Last Transition Time: 2025-11-20T12:19:13Z + Message: delete pet set; ConditionStatus:True Observed Generation: 1 Status: True - Type: IsPetsetDeleted - Last Transition Time: 2024-07-30T10:46:15Z - Message: successfully deleted the petSets with orphan propagation policy + Type: deletePetSet + Last Transition Time: 2025-11-20T12:19:23Z + Message: successfully deleted the PetSets with orphan propagation policy Observed Generation: 1 Reason: OrphanPetSetPods Status: True Type: OrphanPetSetPods - Last Transition Time: 2024-07-30T10:46:20Z + Last Transition Time: 2025-11-20T12:19:28Z + Message: get pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetPod + Last Transition Time: 2025-11-20T12:19:28Z + Message: patch opsrequest; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: PatchOpsrequest + Last Transition Time: 2025-11-20T12:20:23Z + Message: create db client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateDbClient + Last Transition Time: 2025-11-20T12:19:28Z + Message: db operation; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DbOperation + Last Transition Time: 2025-11-20T12:19:28Z + Message: delete pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: DeletePod + Last Transition Time: 2025-11-20T12:19:33Z Message: get pvc; ConditionStatus:True Observed Generation: 1 Status: True Type: GetPvc - Last Transition Time: 2024-07-30T10:46:20Z - Message: is pvc patched; ConditionStatus:True + Last Transition Time: 2025-11-20T12:19:33Z + Message: patch pvc; ConditionStatus:True Observed Generation: 1 Status: True - Type: IsPvcPatched - Last Transition Time: 2024-07-30T10:46:25Z + Type: PatchPvc + Last Transition Time: 2025-11-20T12:19:58Z Message: compare storage; ConditionStatus:True Observed Generation: 1 Status: True Type: CompareStorage - Last Transition Time: 2024-07-30T10:46:40Z + Last Transition Time: 2025-11-20T12:19:58Z + Message: create pod; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreatePod + Last Transition Time: 2025-11-20T12:20:28Z Message: successfully updated 
combined node PVC sizes Observed Generation: 1 - Reason: UpdateCombinedNodePVCs + Reason: VolumeExpansionCombinedNode Status: True - Type: UpdateCombinedNodePVCs - Last Transition Time: 2024-07-30T10:46:45Z - Message: successfully reconciled the Kafka resources + Type: VolumeExpansionCombinedNode + Last Transition Time: 2025-11-20T12:20:37Z + Message: successfully reconciled the Elasticsearch resources Observed Generation: 1 Reason: UpdatePetSets Status: True Type: UpdatePetSets - Last Transition Time: 2024-07-30T10:46:51Z + Last Transition Time: 2025-11-20T12:20:42Z Message: PetSet is recreated Observed Generation: 1 Reason: ReadyPetSets Status: True Type: ReadyPetSets - Last Transition Time: 2024-07-30T10:46:51Z - Message: Successfully completed volumeExpansion for kafka + Last Transition Time: 2025-11-20T12:20:48Z + Message: successfully updated Elasticsearch CR + Observed Generation: 1 + Reason: UpdateDatabase + Status: True + Type: UpdateDatabase + Last Transition Time: 2025-11-20T12:20:48Z + Message: Successfully completed the modification process. Observed Generation: 1 Reason: Successful Status: True @@ -255,58 +289,77 @@ Status: Events: Type Reason Age From Message ---- ------ ---- ---- ------- - Normal Starting 24m KubeDB Ops-manager Operator Start processing for KafkaOpsRequest: demo/kf-volume-exp-combined - Normal Starting 24m KubeDB Ops-manager Operator Pausing Kafka databse: demo/kafka-dev - Normal Successful 24m KubeDB Ops-manager Operator Successfully paused Kafka database: demo/kafka-dev for KafkaOpsRequest: kf-volume-exp-combined - Warning get pet set; ConditionStatus:True 24m KubeDB Ops-manager Operator get pet set; ConditionStatus:True - Warning is petset deleted; ConditionStatus:True 24m KubeDB Ops-manager Operator is petset deleted; ConditionStatus:True - Warning get pet set; ConditionStatus:True 23m KubeDB Ops-manager Operator get pet set; ConditionStatus:True - Normal OrphanPetSetPods 23m KubeDB Ops-manager Operator successfully deleted the petSets with orphan propagation policy - Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True - Warning is pvc patched; ConditionStatus:True 23m KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True - Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True - Warning compare storage; ConditionStatus:True 23m KubeDB Ops-manager Operator compare storage; ConditionStatus:True - Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True - Warning is pvc patched; ConditionStatus:True 23m KubeDB Ops-manager Operator is pvc patched; ConditionStatus:True - Warning get pvc; ConditionStatus:True 23m KubeDB Ops-manager Operator get pvc; ConditionStatus:True - Warning compare storage; ConditionStatus:True 23m KubeDB Ops-manager Operator compare storage; ConditionStatus:True - Normal UpdateCombinedNodePVCs 23m KubeDB Ops-manager Operator successfully updated combined node PVC sizes - Normal UpdatePetSets 23m KubeDB Ops-manager Operator successfully reconciled the Kafka resources - Warning get pet set; ConditionStatus:True 23m KubeDB Ops-manager Operator get pet set; ConditionStatus:True - Normal ReadyPetSets 23m KubeDB Ops-manager Operator PetSet is recreated - Normal Starting 23m KubeDB Ops-manager Operator Resuming Kafka database: demo/kafka-dev - Normal Successful 23m KubeDB Ops-manager Operator Successfully resumed Kafka database: demo/kafka-dev for KafkaOpsRequest: kf-volume-exp-combined + Normal 
PauseDatabase 114s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-combined + Warning get pet set; ConditionStatus:True 106s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Warning delete pet set; ConditionStatus:True 106s KubeDB Ops-manager Operator delete pet set; ConditionStatus:True + Warning get pet set; ConditionStatus:True 101s KubeDB Ops-manager Operator get pet set; ConditionStatus:True + Normal OrphanPetSetPods 96s KubeDB Ops-manager Operator successfully deleted the PetSets with orphan propagation policy + Warning get pod; ConditionStatus:True 91s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 91s KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning create db client; ConditionStatus:True 91s KubeDB Ops-manager Operator create db client; ConditionStatus:True + Warning db operation; ConditionStatus:True 91s KubeDB Ops-manager Operator db operation; ConditionStatus:True + Warning delete pod; ConditionStatus:True 91s KubeDB Ops-manager Operator delete pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 86s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 86s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning patch pvc; ConditionStatus:True 86s KubeDB Ops-manager Operator patch pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:False 86s KubeDB Ops-manager Operator compare storage; ConditionStatus:False + Warning get pod; ConditionStatus:True 81s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 81s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 76s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 76s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 71s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 71s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 66s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 66s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning get pod; ConditionStatus:True 61s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pvc; ConditionStatus:True 61s KubeDB Ops-manager Operator get pvc; ConditionStatus:True + Warning compare storage; ConditionStatus:True 61s KubeDB Ops-manager Operator compare storage; ConditionStatus:True + Warning create pod; ConditionStatus:True 61s KubeDB Ops-manager Operator create pod; ConditionStatus:True + Warning patch opsrequest; ConditionStatus:True 61s KubeDB Ops-manager Operator patch opsrequest; ConditionStatus:True + Warning get pod; ConditionStatus:True 56s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create db client; ConditionStatus:False 56s KubeDB Ops-manager Operator create db client; ConditionStatus:False + Warning get pod; ConditionStatus:True 51s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 46s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 41s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning get pod; ConditionStatus:True 36s KubeDB Ops-manager Operator get pod; ConditionStatus:True + Warning create 
 
 ## Cleaning Up
 
 To clean up the Kubernetes resources created by this tutorial, run:
 
 ```bash
-kubectl delete kafkaopsrequest -n demo kf-volume-exp-combined
-kubectl delete kf -n demo kafka-dev
+kubectl delete Elasticsearchopsrequest -n demo es-volume-expansion-combined
+kubectl delete es -n demo es-combined
 kubectl delete ns demo
 ```
 
 ## Next Steps
 
-- Detail concepts of [Kafka object](/docs/guides/kafka/concepts/kafka.md).
-- Different Kafka topology clustering modes [here](/docs/guides/kafka/clustering/_index.md).
-- Monitor your Kafka database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/kafka/monitoring/using-prometheus-operator.md).
-
-[//]: # (- Monitor your Kafka database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/kafka/monitoring/using-builtin-prometheus.md).)
+- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch.md).
+- Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/topology-cluster/index.md).
 - Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/elasticsearch/volume-expantion/topology.md b/docs/guides/elasticsearch/volume-expantion/topology.md
index 797ddcbef..c13b38621 100644
--- a/docs/guides/elasticsearch/volume-expantion/topology.md
+++ b/docs/guides/elasticsearch/volume-expantion/topology.md
@@ -223,7 +223,7 @@ Spec:
     Data:     5Gi
     Ingest:   4Gi
     Master:   5Gi
-    Mode:     Offline
+    Mode:     Online
 Status:
   Conditions:
     Last Transition Time:  2025-11-20T10:07:17Z
@@ -725,6 +725,29 @@ pvc-c274f913-5452-47e1-ab42-ba584bdae297   5Gi        RWO            Delete
 
 The above output verifies that we have successfully expanded the volume of the Elasticsearch.
 
+
+**Only Data Node Expansion:**
+Only data node volume expansion can be done by creating an `ElasticsearchOpsRequest` manifest like below,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ElasticsearchOpsRequest
+metadata:
+  name: volume-expansion-data-nodes
+  namespace: demo
+spec:
+  type: VolumeExpansion
+  databaseRef:
+    name: es-cluster
+  volumeExpansion:
+    mode: "Online"
+    data: 5Gi
+```
+
+```bash
+$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/elasticsearch/volume-expantion/volume-expansion-topo-data.yaml
+Elasticsearchopsrequest.ops.kubedb.com/volume-expansion-data-nodes created
+```
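+
+As with the expansion above, you can watch the OpsRequest until it reaches the `Successful` phase and then confirm that only the data node PetSet reports the new size. For example (the PetSet name `es-cluster-data` follows the `data-es-cluster-data-*` claim names shown earlier):
+
+```bash
+$ kubectl get elasticsearchopsrequest -n demo volume-expansion-data-nodes --watch
+
+$ kubectl get petset -n demo es-cluster-data -o json | jq '.spec.volumeClaimTemplates[].spec.resources.requests.storage'
+"5Gi"
+```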
 
 ## Cleaning Up
 
 To clean up the Kubernetes resources created by this tutorial, run:

From eef6674efccf2034302e4dc9d98188d67bd65d97 Mon Sep 17 00:00:00 2001
From: Bonusree 
Date: Fri, 21 Nov 2025 18:22:35 +0600
Subject: [PATCH 10/13] error fix

Signed-off-by: Bonusree 
---
 .../update-version/elasticsearch.yaml              |  2 +-
 docs/guides/elasticsearch/restart/index.md         |  2 +-
 .../scaling/horizontal/combined.md                 |  8 ++++----
 .../scaling/horizontal/overview.md                 |  4 ++--
 .../scaling/horizontal/topology.md                 |  4 ++--
 .../elasticsearch/scaling/vertical/combined.md     |  6 +++---
 .../elasticsearch/scaling/vertical/overview.md     |  4 ++--
 .../elasticsearch/scaling/vertical/topology.md     |  2 +-
 .../update-version/elasticsearch.md                |  2 +-
 .../elasticsearch/volume-expantion/topology.md     | 18 +++++++++---------
 10 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/docs/examples/elasticsearch/update-version/elasticsearch.yaml b/docs/examples/elasticsearch/update-version/elasticsearch.yaml
index 5deb703ed..5308b788b 100644
--- a/docs/examples/elasticsearch/update-version/elasticsearch.yaml
+++ b/docs/examples/elasticsearch/update-version/elasticsearch.yaml
@@ -13,7 +13,7 @@ spec:
     resources:
       requests:
        storage: 1Gi
-    storageClassName: local-path
+    storageClassName: standard
   storageType: Durable
   version: xpack-9.1.3
 #ghcr.io/kubedb/kubedb-provisioner:v0.59.0
diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md
index 2026cc2e0..9390e730a 100644
--- a/docs/guides/elasticsearch/restart/index.md
+++ b/docs/guides/elasticsearch/restart/index.md
@@ -51,7 +51,7 @@ spec:
   replicas: 3
   storageType: Durable
   storage:
-    storageClassName: "local-path"
+    storageClassName: "standard"
     accessModes:
     - ReadWriteOnce
     resources:
diff --git a/docs/guides/elasticsearch/scaling/horizontal/combined.md b/docs/guides/elasticsearch/scaling/horizontal/combined.md
index 2c2c43770..8939c46fd 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/combined.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/combined.md
@@ -4,7 +4,7 @@ menu:
   docs_{{ .version }}:
     identifier: es-horizontal-scaling-combined
     name: Combined Cluster
-    parent: es-horizontal-scaling
+    parent: es-horizontal-scalling-elasticsearch
     weight: 20
 menu_name: docs_{{ .version }}
 section_menu_id: guides
@@ -61,7 +61,7 @@ spec:
   replicas: 2
   storageType: Durable
   storage:
-    storageClassName: "local-path"
+    storageClassName: "standard"
     accessModes:
     - ReadWriteOnce
     resources:
@@ -129,8 +129,8 @@ secret/es-remote-monitoring-user-cred   kubernetes.io/basic-auth   2      5m4s
 secret/es-transport-cert                kubernetes.io/tls          3      5m8s
 
 NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
-persistentvolumeclaim/data-es-0   Bound    pvc-7c8cc17d-7427-4411-9262-f213e826540b   1Gi        RWO            local-path     5m5s
-persistentvolumeclaim/data-es-1   Bound    pvc-f2cf7ac9-b0c2-4c44-93dc-476cc06c25b4   1Gi        RWO            local-path     4m59s
+persistentvolumeclaim/data-es-0   Bound    pvc-7c8cc17d-7427-4411-9262-f213e826540b   1Gi        RWO            standard       5m5s
+persistentvolumeclaim/data-es-1   Bound    pvc-f2cf7ac9-b0c2-4c44-93dc-476cc06c25b4   1Gi        RWO            standard       4m59s
 ```
 
diff --git a/docs/guides/elasticsearch/scaling/horizontal/overview.md b/docs/guides/elasticsearch/scaling/horizontal/overview.md
index b4d5bc64c..99b10243f 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/overview.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/overview.md
@@ -2,9 +2,9 @@ title: Elasticsearch Horizontal Scaling Overview
 menu:
   docs_{{ .version }}:
-    identifier: kf-horizontal-scaling-overview
+    identifier: es-horizontal-scalling-overview
     name: Overview
-    parent: kf-horizontal-scaling
+    parent: es-horizontal-scalling-elasticsearch
     weight: 10
 menu_name: docs_{{ .version }}
 section_menu_id: guides
diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology.md b/docs/guides/elasticsearch/scaling/horizontal/topology.md
index 10f79a48e..c4c29484c 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/topology.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/topology.md
@@ -4,8 +4,8 @@ menu:
   docs_{{ .version }}:
     identifier: es-horizontal-scaling-Topology
     name: Topology Cluster
-    parent: es-horizontal-scaling
-    weight: 20
+    parent: es-horizontal-scalling-elasticsearch
+    weight: 30
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
diff --git a/docs/guides/elasticsearch/scaling/vertical/combined.md b/docs/guides/elasticsearch/scaling/vertical/combined.md
index e696f1e12..eb45ecf9c 100644
--- a/docs/guides/elasticsearch/scaling/vertical/combined.md
+++ b/docs/guides/elasticsearch/scaling/vertical/combined.md
@@ -2,10 +2,10 @@ title: Vertical Scaling Elasticsearch Combined Cluster
 menu:
   docs_{{ .version }}:
-    identifier: kf-vertical-scaling-combined
+    identifier: es-vertical-scaling-combined
     name: Combined Cluster
-    parent: kf-vertical-scaling
-    weight: 30
+    parent: es-vertical-scalling-elasticsearch
+    weight: 20
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
diff --git a/docs/guides/elasticsearch/scaling/vertical/overview.md b/docs/guides/elasticsearch/scaling/vertical/overview.md
index 7e3712500..3c83a42b7 100644
--- a/docs/guides/elasticsearch/scaling/vertical/overview.md
+++ b/docs/guides/elasticsearch/scaling/vertical/overview.md
@@ -2,9 +2,9 @@ title: Elasticsearch Vertical Scaling Overview
 menu:
 docs_{{ .version }}:
-identifier: kf-vertical-scaling-overview
+identifier: es-vertical-scalling-overview
 name: Overview
-parent: kf-vertical-scaling
+parent: es-vertical-scalling-elasticsearch
 weight: 10
 menu_name: docs_{{ .version }}
 section_menu_id: guides
diff --git a/docs/guides/elasticsearch/scaling/vertical/topology.md b/docs/guides/elasticsearch/scaling/vertical/topology.md
index 49800b2d5..73dc38707 100644
--- a/docs/guides/elasticsearch/scaling/vertical/topology.md
+++ b/docs/guides/elasticsearch/scaling/vertical/topology.md
@@ -4,7 +4,7 @@ menu:
   docs_{{ .version }}:
     identifier: es-vertical-scaling-topology
     name: Topology Cluster
-    parent: es-vertical-scaling
+    parent: es-vertical-scalling-elasticsearch
     weight: 30
 menu_name: docs_{{ .version }}
 section_menu_id: guides
diff --git a/docs/guides/elasticsearch/update-version/elasticsearch.md b/docs/guides/elasticsearch/update-version/elasticsearch.md
index e954d62e8..8f99b59bf 100644
--- a/docs/guides/elasticsearch/update-version/elasticsearch.md
+++ b/docs/guides/elasticsearch/update-version/elasticsearch.md
@@ -60,7 +60,7 @@ spec:
     resources:
       requests:
        storage: 1Gi
-    storageClassName: local-path
+    storageClassName: standard
   storageType: Durable
   version: xpack-9.1.3
diff --git a/docs/guides/elasticsearch/volume-expantion/topology.md b/docs/guides/elasticsearch/volume-expantion/topology.md
index c13b38621..e6bf35509 100644
--- a/docs/guides/elasticsearch/volume-expantion/topology.md
+++ b/docs/guides/elasticsearch/volume-expantion/topology.md
@@ -129,15 +129,15 @@ $ kubectl get petset -n demo es-cluster-ingest -o json | jq '.spec.volumeClaimTe
 "1Gi"
 
 $ kubectl get pv -n demo
 NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
-pvc-11b48c6e-d996-45a7-8ba2-f8d71a655912   1Gi        RWO            Delete           Bound    demo/data-es-cluster-ingest-2   local-path     22h
-pvc-1904104c-bbf2-4754-838a-8a647b2bd23e   1Gi        RWO            Delete           Bound    demo/data-es-cluster-data-2     local-path     22h
-pvc-19aa694a-29c0-43d9-a495-c84c77df2dd8   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-0   local-path     22h
-pvc-33702b18-7e98-41b7-9b19-73762cb4f86a   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-1   local-path     22h
-pvc-8604968f-f433-4931-82bc-8d240d6f52d8   1Gi        RWO            Delete           Bound    demo/data-es-cluster-data-0     local-path     22h
-pvc-ae5ccc43-d078-4816-a553-8a3cd1f674be   1Gi        RWO            Delete           Bound    demo/data-es-cluster-ingest-0   local-path     22h
-pvc-b4225042-c69f-41df-99b2-1b3191057a85   1Gi        RWO            Delete           Bound    demo/data-es-cluster-data-1     local-path     22h
-pvc-bd4b7d5a-8494-4ee2-a25c-697a6f23cb79   1Gi        RWO            Delete           Bound    demo/data-es-cluster-ingest-1   local-path     22h
-pvc-c9057b3b-4412-467f-8ae5-f6414e0059c3   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-2   local-path     22h
+pvc-11b48c6e-d996-45a7-8ba2-f8d71a655912   1Gi        RWO            Delete           Bound    demo/data-es-cluster-ingest-2   standard       22h
+pvc-1904104c-bbf2-4754-838a-8a647b2bd23e   1Gi        RWO            Delete           Bound    demo/data-es-cluster-data-2     standard       22h
+pvc-19aa694a-29c0-43d9-a495-c84c77df2dd8   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-0   standard       22h
+pvc-33702b18-7e98-41b7-9b19-73762cb4f86a   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-1   standard       22h
+pvc-8604968f-f433-4931-82bc-8d240d6f52d8   1Gi        RWO            Delete           Bound    demo/data-es-cluster-data-0     standard       22h
+pvc-ae5ccc43-d078-4816-a553-8a3cd1f674be   1Gi        RWO            Delete           Bound    demo/data-es-cluster-ingest-0   standard       22h
+pvc-b4225042-c69f-41df-99b2-1b3191057a85   1Gi        RWO            Delete           Bound    demo/data-es-cluster-data-1     standard       22h
+pvc-bd4b7d5a-8494-4ee2-a25c-697a6f23cb79   1Gi        RWO            Delete           Bound    demo/data-es-cluster-ingest-1   standard       22h
+pvc-c9057b3b-4412-467f-8ae5-f6414e0059c3   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-2   standard       22h
 ```
 
 You can see the petsets have 1GB storage, and the capacity of all the persistent volumes are also 1GB.

From 7350e2f5bd7dfe713cdb9df6b0a23e5b63df040c Mon Sep 17 00:00:00 2001
From: Bonusree 
Date: Tue, 25 Nov 2025 17:40:00 +0600
Subject: [PATCH 11/13] figures added

Signed-off-by: Bonusree 
---
 .../scaling/horizontal/combined.md                 |  2 +-
 .../scaling/horizontal/overview.md                 |  2 +-
 .../scaling/horizontal/topology.md                 |  2 +-
 .../scaling/vertical/overview.md                   | 18 +++++++++---------
 .../volume-expantion/combined.md                   |  2 +-
 .../volume-expantion/overview.md                   | 18 +++++++++---------
 .../elasticsearch/es-vertical-scaling.png          | Bin 0 -> 58824 bytes
 .../elasticsearch/es-volume-expansion.png          | Bin 0 -> 58868 bytes
 .../elasticsearch/horizontal_scaling.jpg           | Bin 0 -> 33972 bytes
 9 files changed, 22 insertions(+), 22 deletions(-)
 create mode 100644 docs/images/elasticsearch/es-vertical-scaling.png
 create mode 100644 docs/images/elasticsearch/es-volume-expansion.png
 create mode 100644 docs/images/elasticsearch/horizontal_scaling.jpg

diff --git a/docs/guides/elasticsearch/scaling/horizontal/combined.md b/docs/guides/elasticsearch/scaling/horizontal/combined.md
index 8939c46fd..605e1ce7b 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/combined.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/combined.md
@@ -43,7 +43,7 @@ Here, we are going to deploy a `Elasticsearch` combined cluster using a support
 
 ### Prepare Elasticsearch Combined cluster
 
-Now, we are going to deploy a `Elasticsearch` combined cluster with version `3.9.0`.
+Now, we are going to deploy a `Elasticsearch` combined cluster with version `xpack-9.1.4`.
 
 ### Deploy Elasticsearch combined cluster
 
diff --git a/docs/guides/elasticsearch/scaling/horizontal/overview.md b/docs/guides/elasticsearch/scaling/horizontal/overview.md
index 99b10243f..3c4033673 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/overview.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/overview.md
@@ -28,7 +28,7 @@ The following diagram shows how KubeDB Ops-manager operator scales up or down `E
 
 [//]: # (<figure align="center">)
-[//]: # (  Horizontal scaling process of Elasticsearch)
+[//]: # (  Horizontal scaling process of Elasticsearch)
 [//]: # (  <figcaption align="center">Fig: Horizontal scaling process of Elasticsearch</figcaption>)
 [//]: # (</figure>)
 
diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology.md b/docs/guides/elasticsearch/scaling/horizontal/topology.md
index c4c29484c..416fc1a13 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/topology.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/topology.md
@@ -24,7 +24,7 @@ This guide will show you how to use `KubeDB` Ops-manager operator to scale the E
 
 - You should be familiar with the following `KubeDB` concepts:
   - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch/index.md)
-  - [Topology](/docs/guides/elasticsearch/clustering/Topology-cluster/index.md)
+  - [Topology](/docs/guides/elasticsearch/clustering/topology-cluster/index.md)
   - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request/index.md)
   - [Horizontal Scaling Overview](/docs/guides/elasticsearch/scaling/horizontal/overview.md)
 
diff --git a/docs/guides/elasticsearch/scaling/vertical/overview.md b/docs/guides/elasticsearch/scaling/vertical/overview.md
index 3c83a42b7..91d482ade 100644
--- a/docs/guides/elasticsearch/scaling/vertical/overview.md
+++ b/docs/guides/elasticsearch/scaling/vertical/overview.md
@@ -1,11 +1,11 @@
 ---
 title: Elasticsearch Vertical Scaling Overview
 menu:
-docs_{{ .version }}:
-identifier: es-vertical-scalling-overview
-name: Overview
-parent: es-vertical-scalling-elasticsearch
-weight: 10
+  docs_{{ .version }}:
+    identifier: es-vertical-scalling-overview
+    name: Overview
+    parent: es-vertical-scalling-elasticsearch
+    weight: 10
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
@@ -26,10 +26,10 @@ This guide will give an overview on how KubeDB Ops-manager operator updates the
 
 The following diagram shows how KubeDB Ops-manager operator updates the resources of the `Elasticsearch`. Open the image in a new tab to see the enlarged version.
 
-{{/*<figure align="center">*/}}
-{{/*  <img alt="Vertical scaling process of Elasticsearch" src="/docs/images/elasticsearch/es-vertical-scaling.png">*/}}
-{{/*  <figcaption align="center">Fig: Vertical scaling process of Elasticsearch</figcaption>*/}}
-{{/*</figure>*/}}
+<figure align="center">
+  <img alt="Vertical scaling process of Elasticsearch" src="/docs/images/elasticsearch/es-vertical-scaling.png">
+  <figcaption align="center">Fig: Vertical scaling process of Elasticsearch</figcaption>
+</figure>
 
 The vertical scaling process consists of the following steps:
 
diff --git a/docs/guides/elasticsearch/volume-expantion/combined.md b/docs/guides/elasticsearch/volume-expantion/combined.md
index beb87126f..0a09fe8a5 100644
--- a/docs/guides/elasticsearch/volume-expantion/combined.md
+++ b/docs/guides/elasticsearch/volume-expantion/combined.md
@@ -5,7 +5,7 @@ menu:
     identifier: es-volume-expansion-combined
     name: Combined
     parent: es-voulume-expansion-elasticsearch
-    weight: 10
+    weight: 20
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
diff --git a/docs/guides/elasticsearch/volume-expantion/overview.md b/docs/guides/elasticsearch/volume-expantion/overview.md
index e597957f4..089bd91cc 100644
--- a/docs/guides/elasticsearch/volume-expantion/overview.md
+++ b/docs/guides/elasticsearch/volume-expantion/overview.md
@@ -1,11 +1,11 @@
 ---
 title: Elasticsearch Volume Expansion Overview
 menu:
-docs_{{ .version }}:
-identifier: kf-volume-expansion-overview
-name: Overview
-parent: kf-volume-expansion
-weight: 10
+  docs_{{ .version }}:
+    identifier: es-volume-expansion-overview
+    name: Overview
+    parent: es-voulume-expansion-elasticsearch
+    weight: 10
 menu_name: docs_{{ .version }}
 section_menu_id: guides
 ---
@@ -26,10 +26,10 @@ This guide will give an overview on how KubeDB Ops-manager operator expand the v
 
 The following diagram shows how KubeDB Ops-manager operator expand the volumes of `Elasticsearch` database components. Open the image in a new tab to see the enlarged version.
 
-{{/*<figure align="center">*/}}
-{{/*  <img alt="Volume Expansion process of Elasticsearch" src="/docs/images/elasticsearch/es-volume-expansion.png">*/}}
-{{/*  <figcaption align="center">Fig: Volume Expansion process of Elasticsearch</figcaption>*/}}
-{{/*</figure>*/}}
+<figure align="center">
+  <img alt="Volume Expansion process of Elasticsearch" src="/docs/images/elasticsearch/es-volume-expansion.png">
+  <figcaption align="center">Fig: Volume Expansion process of Elasticsearch</figcaption>
+</figure>
 
The Volume Expansion process consists of the following steps: diff --git a/docs/images/elasticsearch/es-vertical-scaling.png b/docs/images/elasticsearch/es-vertical-scaling.png new file mode 100644 index 0000000000000000000000000000000000000000..79ea77332f1d4d0551875ea5ad670629d0bd8ff0 GIT binary patch literal 58824 zcmeFZWmuJK*ETv)RHQ@&q@+|pq#GopLmH$*y1S83B&0p{ zvLwAer;e_bgFO!=B`WE^Pt9*{Z)>9apR-%q(dqp2iNp0FbnwvhI{J3>jC2gww?#_I z{jdGG`1S3s-~Q_&3I;a+y!qEVbCZ9rWUXUsXQ1d}ZNNh*Z(wKTV5?{F&l#={`u9l$ zZ4Gqn4fJ`L7?@aT85wDrSh?x{x38|R@m~iBJKKxv^RhBCGU_qwGte5cF*DGzaI&z_ zau{&1(&{j>vodmW7%*@$>HX(K|2pJCX`YcSWOl+*2j4X`ooU9xS|2g2FFaPKM zidGJK#@CO+%k-b!{_C@U_T{F(9$j;je+|k%Z~i?h|2g7cL-wy7|J$_u-=8Xdz5h&; zwS%qsKPI5BM{i(mU}0cse?3Wz|DGg$JuZ6_dvk;Tnj(I4`~NvbV0&EVI+jK}l+LvJ z28KEg=Ju4l26nWL2DbJldOGH`cJRZ*(ukCnl=1)YaQ{9ZH~oLR)c=b|v;J=eerE@nE(A;@Q#b$!QR-)mRG>a(!tKwz<^W;>{W;RddL6kD=__kCgA_kAj8`H z&nW+oJ8{;r{%^|wcH{cTe64I1t*p#>1$8VPbx5h?4J`EyYz=Imlj>XP@d)vYvYN}X zDk-?|lRDTL*z&yL7qt_SRW){I(IeHh)v?s)Q8du8Amz8VwiC3{Hz2h)u&_3VCBef@ z|G!`0|M*b@Z14-cg+_?^JUrX2+0Pp-HSzoK*8`jm@`;_%g- zZM!WRhV|U{Nu(vuOy9ppAx|-;jxnj$3(#)#ul~E+nb0x7Lh^#PT+*i9Dw(JHuK@qt zmjKd$tY0C&QczHTA)09zI^)mI^nxg?btHv-kJeU_tDSl{_wcvzXNY@vqI6Xw|8bS? z>6Sz2{`oCwk3Z=4^>2Onx-8eP?|ZOPk*~f2)p5DF3 zTtl|bi(SzUYa=|xqpQQYXsD-BqgZ4;GRrN8S$N5mUg8SKFJd4SIo>ta|?B(U9dZneiu1h`x`a^ttPP@e}W}_Y= z9@qWDjfoe^1y8-#r|Zww$4a>{hA#~f2u+O2R5S#_`LIQIr_N<>b!{y+HdatTfcNTr zPb!ves@jf{kI?w6tal3)E%p=sY#0)s9d1IX*pTN^X`z_J`28fJDPGjJK9QQvrxObyeQOeOo)hJ zH5 zgs)%sTOVnEi;eAE*tM9bsIIALNJ=Du)&71A%w=+5V4&fo)BhnQ3uUGGn68zT=lSu@ zV5ZE~$x>o_d%HwB6P)Vh%Xi1yv*cl7*@_}ULUCLUt1bQym6VkBmU=yn>C4sPzkV%q zJy=!9l*S-nS>B#)23xAAsNlQU56{gt)6&v%b32Di{Ah00)6*kSo4&f3xe^w=-mbPh zuWIdn}deN z{A71wzAd!3von@K=SO=uWp4sctzkDow4@_EFqm9=M#j(9*4DPR7ucxTiZyS)due-| zJmcl{n67gnaohfGcY5~*!d@PHWM8#SwMKKJF@^hGvXzyUX1{xs-qsv(fByW*RW5uW z02?|wx|+k)Tp)c{R~Igg%6694^z`)M;h{GcC1s|s@9m9-t-8IQ0?k@nS~dI2bC+b< zgr0TQuN+nEE6y1%`^1o zGVJz+ic1C~)sgP&&EoR>#Kgpe*|6(VK!B;s^$k{9PN|ifM!-E&wB-)3Dy3!*n=&(( zZf{$6w-@WR@s?2|-qWF#_9K$~&d<;3G^$>q=%^kJq`z)xXt0&K zLuDi+P|4CNDh|s>wQ61V79aYVb`B0Il)m^COvtWg!EL+yx4o}VHd-=+)7;#=thm@D zI}Ym$EO))1=rBo>lanKlThC?_y01Ox=I$O||nB_+)#*5ENM?vOCN zZBnX1dR=>x_Y&hapS69JLj{W{k z12f<8^?9mjXhBYn>sI{{j9&tWqIwK}Pa@w8n08ZBQ)IQifS8bwkg4hFd!(Bh8XEdc z(Q5hA#d>$hT6%l8N_%AQ4VcPv@3eLJYRqOQYGg(MT$cGOf zys;D%24ToVL_}O&TySu3c%~ZOArP)XtEMmUi@mcHgTBZS;%_HR5+Jyk{e^VjzD=sf zor~VrRaGJn`xX=PK)_odIX(TE)F7&lSML&EPT(RFKLo>#aNS~+cRWrn{Y>S+7l(<# zJ#r9*R8}5sHC|1dQet$c7=aHG6l{(RhXIcnbBdPy$q=-tNrjCXA@N30ai~z657Y1A z!-ve)f0~w;jk`$&lKVP3atx{=${5@G<8?py!C?BVnMzSIT)-u>A~{6~`_J^p&SV!N z%*4+7+b$ZIm%fm7Jq5Q(O})BNIYUQF8zGT?=N|Dnd#b-sN>Wl1qux(gk}w;kM!jlk zYP`AfNQiy;s89W03{Qr?1U!kdrN{oNNt>NSC7^))`qRgB)%WV5ABcI{ZaLSEsqa-2 zZOjDRK-B%s$;km9VyLJ05DRO2i?2bSX$;~j2a&X_>}NwgUWkh57oq+5txgr^3A^ed z2?DKs*al{nKNkAFC(z;i@ZC?@HB1j6ZyA~VJJ|jt-6zG<-igi?H9Jk zwQ-eaimryL>G3`*L9&gnLWw;Jq*lG$spPScdDOIzdrQS}OPLaA*-gheRGxgXX3f|m!f%Yayt(C#{ncbfs z2NrFEmeaL-Nlu}yVIOBuH)lRXTqmbZouQ==SV9VBt2ts4YM1s6d;(nFeY^7NC!6Nk z=ng`6(Fk!QjLE1X3L#5eGqrj=w~(TkJ&0=c{Eg(5)>OV^5glYk2n4Y>Uu7m$R@NoY zlScj#mgiMhakC#RG1<;NZ!as+1m9{X2DkZAt?7x8}_!3V33ji9%!wH^T+B94gAsX`gJ3maQ*oKD|D`Udt6P^>vQ-^AiJ& zm=NdT)WlNvvzG+;+aI}IeVwbnD&Fn=i_O=FdObWzrGw4o{gg~rPYtH6w=7?BrZvLI z#Gj&~A3HoymGF2j95#Bh4&$V1knEM8T(%;0ggkY`0|9*|H}|ndHdfXhXYO@;RD`A< z@z3Rtk4osUCsB87@mJPNvgaebh+=Tr+e@uxa>IXV?d_B?GJ8l1-)NfO2n>7)ffeJ& zxW$aslOWTHijCvb63+_{!cg)Vw^Eb!Dx1d_ZFdlVg!`H?whYZrPkyz=4J(M!1rV9* zH;tCR+o>PldY}-_WM%%1_3+@;){-0f4ve-cVQOe-yMyO{6rTE zL?7`=6#b*9ua5RO+brBXlNaaba1zA(NBsAUQa5O&q;9!I1jsS2W-e>dn^5P}Ss9QQ 
zvEXv=jU$qHZr{GG%T^O!BWEAJICHqaySgj@IrvMSxM0kU`SFSAMKu$H!;N<}hZgcw zxx#a|O~UKbRh{`Nm*j{M-1iMrU($C$?mU|8$9Rg9X^9z`VKD3fX4CrS$DxzH>Y@Ee zx7A$u2Cby%nGW@>q-IDd?AJEN``>Usj>I|?H`*TyPKg~Y&|F)lMj*71d4l?J@ zevb}MpZ@X56MR%MK2uy$up=;=?-dY3T!CBq;|@0%-k%+=C3BY7D6fmDC1_`}Hn*`k zHH4A6hh+1DupCCk2fhX0eqSJXZ&bw2n5(7p*_HIcw4HKj#!}U|suceEUW*pF>K=U3 zASJf4wO*tvQ1vl=42=p0L7eljAlT4fO+4l`at?OL9G9qrpYDQpAC1M&?Mdw0ND?1E zn1(z~HWt`-yGNS$uZwKyilHzv{dKBjiu$$i2+hjqtKww-!vm+4%WoEG3TPc8L>%Qu zlW%xKUwR}R)s5cXv7dQ<50Mbz?TzfOvi`gNo#>+hX&O24H^ZI`5si3M^t+r#&r`_r zVz5|+V@ww5WIWv3{%O5yVbYoL2FEnPKU$T2r#GwSQw42jI^o)_f|xaUqneJQ`bD{)N=5JA0l zleo95MTK-3y>_99MkhjY*s7~OHBUZ8*4mjH$FKQ;fSk9icM4S`7HhZS9(FW(V_{6{ z(#fu?lH(gGsYZ_rt{N=F`z7gnj~tA*tB81`I@3+)q?A#G!Q*5;ikOJOf5+jdr#dnq zB&(#UOc~bxHvlzxS*!k=ph8bYB;6EEu=PCReMqFoNl9_C&U;UzDxnW4Lm1IHImk-+ zq(a?URN|>0(j6WOKa6x`*pW0*5-CMbuMu_D@-A}arrcPWwKVmSIzmc%ous04kKV0X zPSuVg^VrQp^?SUjuK^E{d5!9~uS&xaZ_siHTD-BI-1LZ7$X$l<`sd~hRDUs5hF%s;N#psav8 zZHVYaI{*8CE7Lo}$8%|#bwYb5gBdbg|5`upf&EO?a}qYKg^x;QjBxs~d8$HRfA-sZ zA?-MlA%Y_y=Og>oGcy2--15sywP-YSXSR%=Q~6zaQo#}Ec6(YI2PGXp0?sf7GRUS2 z_=$V;ZvUaie~-D{NaC?fQ`#?|Ejdk-@jT@~u0vjfmyv0t#W(=Vj!uw@IGMw>np6G- z1pA>5f+)NjuauLN1G&6)RR_v{KtS#=BXo1+4B6hn8)z#((mvXEv@6oNdtW?OMWCg` z@FkkU&1t@A4?ZKmxAakK=@CnNukEaq)Bi@XiN#X*>@V3^SCciN4XZk?zQr?a>zOj@ zNth+vzUt`5PFHGwx1Sw+nUb!m{p8S{@@J`;hTVsFE=1B=Um~7+VZ+W*(D{b=qYWC# zlpp8hS_j*kI-@xl!k>JK-(&6e5xVbH$0e}gPSJLsPsP#UM0m!2E}8w!89+XNv!0~1 zQ}j_A1G>~8_AnzF&))9tU)$<}*rCP6#T-Ryl0-_YkH?Zk+|JfNyzg=oZ?)FNc`Rd_ zrK(0oh^40+r0*^oR%=`yJv1#ZsVR{gXd#GxP#z;YB+OWw8*o}hsrTLBz)@;nyNHbd zQG>~If}xk1=i=FM0P(@1D~K}O zMOI2_j~AR4yYQ{7wn#jVXXP-yeEH(#cU=BsDsxbC zBfs$*Cqg(y{3 zRTxYX&-3E&@WIJyyE_;JUlS8a2ncGRUN2YU(fNUl>8EF5;l47E&Ubk{54+&w<7<0f z901a_VC#~5xIupmuoB=pBux5ZhK7sF{i%S-xIm#F%6G}Zz`y{N9XYw~WR;EIJ(A6l zeDzRb9E;HHL{o2 z4S({UDsSPbD)~HSkRYEDeh#&nv ztgIPS<%VXp(>E6R4ujHp-6nbQN3Ju7{uRE_1&QID3(6pjW9^2=C)oIEty#k~uf^6d zUK0})=XqLXjy2zC2;r;bTd0ZfmrHslBS;)IY`GhHbzjLW^u><{H;F5mr?GbY34;xc zjOaCM6wiXf!yl4)x6MaHL_pmOW#Xq#p8!n^CgzFJ`Q~wP+Wn2Od1Qo*o7=7Z%QNrK z!hyJtp>Q25F#z09v(ckIHdcM&Fd-pfoW2-LA8;ZwVs#Z2<42_7?T71Q^IKcmnwlFc zE7i6O9WZeYJ9Dj23mM{F&lD085*Zm8pkP)FCyS>CYj?@W!jc7#bR0*XDoQz;IwfV1)pTGB-b;pPzq^n7ay!d%08* zlg2fjW?%P%VdWb|B+>jWRO?j251oE6)yeVi_P{aFG2r&?vVoM?Y)FSozGHyD;-A7xJ_A7?z z3fk!bVJt7Gy)np5!egG%bmp)&1{aq9k=nZ)KsdC`?eQ<2ET`*KX>bFD$Ri}6+&)OW zI$r`124JN`CIg8HuE2l*z?HhXyYG_8$;o+tu65e7gz4;pdG^JKWwVHhj#h;shT2(D z5}hm!(3GO0&sYy&QCd#bbaZsU{uLS=zjq4?wd!+JE#{V%Mt5^9JUu-HK9!d{yi@vc zbLDAiKLIW-Zll=hZc8wc!2KU3*vTjcGO=nM*rwh0$-K!kSlW8~_Hk~eK8j02vznx; zaVagz%F0?cJle!GOSo#BW4In|QOc11pcvZ1e7?`F?rEWIeGT>9$Xqm)vZdyRstW}Q zEB?x;IfN~q7gC~SRRto*L_ilIV-6UooxQcLpy}xRR z3~syDdiKX>GC{Q-jcQwAyEE-e?aB{-cbox#F0QDc@TSqI>g(@kBqP(=7%vALa(Os+ zRFuSh$h50sL!A}n-aQgtw`0MzXbzjXGLu07;fGDjDe(RL{FW9Lo>5Yga62=wu~lr~ zd}fi#Rixfv1~8=YI?#1Sg@5^S88AogE^f&OeNbNd_ z+#XZ?w!FVKux8-&Xp76S`ZxrfhK%~_fE43^6efAm0%9l_DGnfF_s}5iYI>$~BCir&=nnpin+>kANjBC54=ZCddF# zr{6Ha05pC6d<|oUiP-~w2Ji?Jb}=Iw^`jN;XJ(x}J-m+V-~9dCA=(7PwT$PxfLq6)JuAl`;@E};31Hy8U0vtvrM)4KUmljtY|(AV zmG+yKpQfL(`!CYXBuBDs?Jt*S?p6fp7z+^5b|L2hJ>qs7c&oc1BoKSf`^zh2oFoe=F>#<@44mpz!i+*8xP*mv5YPAzS|SPI2F zT(>+sW|91~IG%hb5RWBPJ%ohD_tH#?Wnz~FrnZ_$8o;N&@;#-_ZS*~0hV~FuV`2z+ z6C1CcG$DHkKC|Kea(@(^=I-t;xi>mEXJ}&t;?nBk`L@kRfBQznY1=Y3ruyzP1|eHl z{L3Q%%gf8lVO3{jWB^PKv2=9Cb&WP%zS@3eU^rJ8X>$_1Z7>n%z|>TINr^3Z5b}$*0S=wzR;i+@7#%v^Xe8O-()8oKja+t+tu}X>?|{!bQwVC^fhrq+0r@ zr3LvO>gwko<0&YNX`EJ0PK+tlcXFr|Z)*_h=|#dq0aS6XlbSABYZgr;R;7ARyN_0d z&7;_3sA^OlGcfS=;W;+@NwMofK+3Ea;tYWdOG8ge5#~KSCoxfzfa(9^hxHYMbmU_3d#B}!$ z68wzp!``u7T{Jav=WVnXOO!6UyQzTXFQh>?`uq1WBL)=z=oln4)jD_?4Gq&jfBIhM 
zfbgkGBu-!}c;$N*GaiuBaTCjsVinQk%4a)s6E`zBI5>d2rG4?ORkzsR-~W)BDAItC zfS`;%8ynT<#)qa~%Ue~=43dI^A0r~(ycSKCHMX?uS+I?ec;?q^@{&(YTieRg@>0G# z){*Xnle1gCU69^ks>ER}GJnSJ9>&?<=6jl#C-jw=Ef2_*nHRdHMzIID4HU}dliv1p7+hw zcYsbS<>cIC@W2$G5YZ%f!3QAO)z$ixlM43I2x8l*!YQ)E9xIz#xHviM{cm+ebCvt+ z6sV?kF_ZNEv=r1_|$~b zefgQ$F3+y?n;oJVojo((Y8sx;=6V|S{yx(oad5)TA(@D_^@wIQ4H2DL;uOB%8m0Bc z8p`<*N@SA7UgOpIm}T8v!Q=gmSJPI*+5QjV+N?EaC8VHSd6YNQH(u@EMZO)grlBYoaQRO`%{&cHvj^*T4Y#-fs%telK*$_hVn zY3I>8a&O_V3Rvc6Ns!LnZFNgdQ+t|?^ZM3*MJYKN$)usgOUDsCX~(0xDFU-Zm)%6tmcEVuPZNSMxl$ymDBefHC6ytOk*{k+Z^ zM-UziGhZ_GnQd#Q7gLzxrn!k%8!M-%FrtY}OjG&xZZcBMf)3)sIz)++7ui8RMwxLx zp0jk5v9acKiB?@CIC1;0iCe7qVQ8;bNABy4HNHVDzAIuuKWQSqke%eE0KKBam-t#=h8WMup=XSbuo6^(N zoMO#Y>W*cu9a{pF95y^(8xA6F;?TC)A>jqJ53)}mBva*N7K&y9$Ay)ZwfqLshgWY) z$M6Dvw(>w0ogqub#YK~ttzd7jg5sq(I)rjdfGN6V)*2gC7$-DSkt$meHMW}q3xz!F zleVT*X8&(1-*sjs@T^Vh}C;%A5;Xr-3Gyl6S)FLnc;sW-5I@=cN=y!{)uiTn(&JAq39q)$r=OVFs(*Z<4xfcgsI zt3wsQ>vEcwRir{4F8AxNYN=tQ@>lH!Hy@EqB=oAcg!S5tXTIt}%(S zH49ZlRH0U?Je7IHW?8nvCZF5(Vz)YzzffvFbcyq2smObU`bT9pel@A27i9|(_MU6hzC75o`1HKhHPQ=wzQG?K>}zU2bh3v)SkV_= zo#~`PhxwL|Ps83~V^2rUJEUL{^%`TjCo+l}8gbu%DhlygHs)m}@3wVd265K_HpK;Z)KpYI^=> zhzOQYwqizR<8CxpYgZRG;uR@YNFs-cmC@S9M*Qb*&XCIx^AK_K_#z<~5RMzDJsqwe zbyWm(2Raryp{1G9BN{H&B7BL2;w4L^k4)F*EJ_F|pR@Z3`0}n@xM~t-6!eXU|rRJXEgg zqq9w5sFbuqNUP@NbP)^>!c>&BG8(vibnq%mm|niDtx%Gc`~D_d(ahl%#~{+^ylk<2J%#oy!O z`bJBigy%YPp!pHi=^h*Z1Pi6(1+`+9M7qG^^$kDF$0_pJ9i1;=0Ohj39vv}}zi-v; zU$IB^5l9_q(G^E?iy|DUlu2wLbMNS2^G@V&VOyPC$W^A-t{*S7nh{33_!&Z+@1m^| z?E8;*{ou5^%6Pgp;U43RjEs1H$-G_As+(Il&4Be*M`veup;L(y1KEKKqdQncZhw8o zbF?u+mJ};3t>N1y^P7u>qz^ZjdJ;4%S2~S;ypob~@Zjgy^7#H9>4Io|gYo^WHAk_E z+hUh$n3z6{OJ-&u(jOkTr0vTr^}a zxao{KXGVyA{ZB-1ARq~t9BVkkBvQdD`qf^(l|5+G;7krZ@(0n}#-AsSA!!f6H#0X6 zO2K=D6)i~@-hSsE!BDE-M~QUBUkeiH9pAru{5ZCe4hebI`V7VE4}$!GKQ`(uU!(|B zVJsnK<)ej8W!q{E^02pfBANZZm^O)v%n`4eMu&!q#<=Wn>&_?2zi?}7p;M{e%Q^P2jkqV2)a=IVYdWQWIUV?%Q6@i#AWOzkzT?9GA)npV4q z6o8ZP8kXhYa%>`Lme~Mzc4~#vHWgh+_Xs|pW1b3)2lk_*KmAV-?-i*v%MC52IspsV zZhh#mWoclhSL)KqcKf4rx~y`Z%Fm=h16$6(*0jNutBD;-bgP18;1xJG%a=DOogB_W)zRQL}okYwTBCoRdWQ z4*=5p+xv%x?7@k)g(hO5I0K?-qt~{|pz67P9Nuj>Z*vTw#F3+oPU#r1_4z3z@s40$ z48HSUZy84N@9HwDI@FVMnd64_>4tL=ShZ}=KJmTR>$|0tYZ#v15J@pM@*F z9ShY*|A9CuR>uMpL`!`pBm^~feol@6qHlCmnF?E<37;P;SxiwuQ!x26R93>|VXsJ$ z5zH(siuXpaUO%m)CQf)is#9WYAEQXUu;Xmc761J$iZD4rM2;-A$RQg zUJbGLDL;=u@q+s9de)yk@IN-p=N^`K=ALf<^r=xMF*_&c24a78>`9YVa_`r6#QU5a z>Wld2j{+>D z&?|cIU%D4{#y_&E5SjMxHa=k)?~t{+z%GoKpVWTzW}o3|v?U9wYG4c!1dq&7`Xot{ zKH<@%egKr95gy<+Zt}V6|C&W7WJ=oZ^TtE>fQf*>)VBvr2ccu*vU9gPUA8U>o z$+)bOgVWPfAK_LO0x25Y+(dkiB+oU!yY=kOeM#Rx-d=cwO~m8D%|y*b%?1Cv0^kf_ z2tJ>O+dI)aFivVPvlM@9c)DYYaz;dd3EhYgLcZz>w#?_QxooQKU4<@%jSXW!3S=+& z50H>eGAEoi>pFWJok-4GkqHo=!%kd#Zt!Jl)mK=a8VnuZU8Ov$sw%S$>T9b}RkMT) z|NSkrf*F5KLVSE+H$8kf|K|@D9^UfO(w%p|Nl{ka;PRIepIlFH*4^8De30H{yxaOv zf&1&-LEu=@EG1HbNBLQE_`&NxuIOsoX3VmM6Mq!|d4OV7gD$Fhwrr{r;l*{`+k%eD z>dJ}>%{T2v_8Kz$=tB3UUZ>UJ7jrE(h^B9Hsg_gK%t@YnOwp64-vtuoG~~hBP9b3o z3kw6-UnY@P6Pf~^S7$TnucHdI>c`9aU(7=i;NYg_5;&{;^j4r2CxhWt3VE@n0$pbgl`Y?j<9}8!Fas;ggTm|*puGu36BGs z-PEgl4)h{Z;a47$zbX6g-9vqm3w4_El~vf{=PHc?v8N>^C8pt42xJFRAavT;*Z^%f zud{$RerP9K#?H7r+HUCN696;aB}+dLf*4d;@Okbq9rV_e*yOM6I&k9Y2A zl^S*j;WLjv7eVn#itHC2T<%KQ-L4gn3TJ9zs-}z2xx{@-qBMTdH<0I{vekKSChxQm z8L#P8{hd`>lFN@eugIg0D$A=bItS=QT9@`t+#1Rcj7DDhCP_bk`SJ`Hvv6ZX41cs= zr@IvsT^G&ab?|6c5b-!-$F@Sr}74XG%qc7F|8_|K&QFD9wbfc%LM#~7BfR*1V=Jx4pUuN$oz;{5r@)B`z@W}Z%w9VW-i>sdE$ccT0lv5 zbE_*Y-7|U=GNKV|nT3hrfSi~w;`w*H+zhM>Xvsk8ir?JYBEiEmHZz-0VidqaE)SxM z5y|y8uuYJq`9i&Vnm`yMP`@^i+DO7%c>ndtX3FaTr>_hrhvK(d~WuA0*v8h%x{8Lrw(VSzkl0>g_(`!qfH+yMbfIX 
zU+i`k{P}aRHsVXnt2;H-F)^r+*(v$S>YUmax|p?&8^Esqsz`kcAsNH+oQmq##6<0Z zo;N9qDgH5ZOv|0nH+;pKN{z8^;%X*lUkVNy4eWnoVY(kmih>kM`M_G}M#B4MzU~g0 zZrZSmT7vEjg-0Q>^gHi$5N{DlREA9{Ys@Az1U|y)4B7e=5qZP~%A!z+q^Do6I_OCN zJ2O#4m=p_0gtoRN{{G@G8eC9JuluXe*JEa2z(va3r&RJ3iVD@Y(2FE-`|ax(L`sjlu?>2UGYz_OA6R3oD$`i3vf zvu=Cc?7$RF8e9QRytK45ugzQw5E!!*O%r+D$4M2zjthsHbbt-g+>D8(^?|N^dxr9Vmj?Qsg-$%P}(iz1|Ne_A+{RufhEc`VG*%4G)K^ zejg+UK<{Ef-rJrP1U@ychac7u@xb6asHV z_6+bwJ=zYn$22o8DtCsLLy7o2xSu|K%F2p^cx!B21r5xI7s@}xlfXsk$I}KwdqO@N zn;(lSUhC@Q%a;eZfx@Z%zz+AFGC*C(AIe+tjFY8$(~b7FLAmijp)Ri0J>#<*v_HXZ zUIOL*T0T)xad`-!wsRS9l3kpgS?C|2tfIi9golSiD|{#|J(+JKc(aI;1wqP2)}KB* ztOZ~Cp)b!k`Sw3@*Mpx&tsfhE4+icTFZMtXny>5UJP zq@Vq-&Xb2O$nHOXN`5fo88jwIdfpCP3ESa&; zJ1J!GWN!%`=vkN;v;qFXD)0LHk8y6THmv`GUH?G$<$=FYvl6(3zLB1`@6(&5mBDOZ zqM}BDO946*r9!Rlw+tX5(A3Zn3Bm*VbObK?ZmBG`J4h3NEa(WFa^P{7n~hc+;*+5a zIL7A_#DD&5eSU1i!&475^kL70=2;-!UR#|c#ltfM|L4$vL0cd4!n)O_V9M-U3lca1 zf>_wXNKH^E;RPUG0oH>oFhzq&S@?UCe&j!-9 z6Fild;4VS(Kr%c^menjFDd`4!1864_5wYZRrSU^SeWw@T*?W+USa8@#N=p;-w$#(k zn`mpR1{V}oRIq_U0eA-|CnpF5l*vs^uRJ}k*i@9cpFJ}eND~(p7S4AkNAW6Cqm)d& z7CYqT)&g_mv=^fJ2Jj!CYX-gD&V_(hUEaiA-Qv}ICQ6~dsHmv{470SfG-o69YsPu> zNLg7KnvcAR&?bP!95fYT9&737BtwBOpMAcMx&;mZTARQ0nfPk;BqRdL%URpYYX#AKt-pE%PC7jx^Ge$H8RqtVv@J{xPfKWl`(!J#RX}ULqM@Xrush>Xm3-uNP z!4_Mu-~I)X^vcze!d#_%V4MU4lfPa8BIaHDiVVzyyhRy4LP($v!ZqlQMzuYmVA#!h zlZy+?ecjf8t}aOgBD4(+0|Q!lCl^OEnUc{=CIe}clNyLu=H}~=6n*sa8Y$HF1aqi) z?%b`Xql1J{DKnOXKzgxcHCeT@R@i9zyMK7NJSpi_JpvEe0XWP&TyDojfIowBq>;(* zHlhlH$k51bU9v+7N zBCv73frb2Wz99ot^iQ+k1K(t*tp+cK?Ee z2{;dpyuW};58e1-_Y+v?8xxhH!osYI0ZMj`j*C0afP89`Adt(G-W_eiw5qD80Ik0& zU8zq@Ofsn2N&AH@%v>hB-cVR4Xb@&C@Fi={I;M>=PLH-cfww<5HwR2Cei<1MJwxlx zy)TL1s3-2^^rLpYtHW|%GBkL>D}apvcOoV%tbddQ*DI+f-!7w{>U2lS!(rI?-Me?y z8kf5ZulzfR5$SB!oc;q4P+872)B}+K#1kMS6B85r#`CVi z$EOvjB66TZsM8|`5**}q>J{cL;7xu86SAV;_74nHDAdQ~$3o^ZAefR>@Z+iks!(iY zC3|A;3`osjFHpS{%2Fv(zk!0#(eYYk5=En00(J|uo~Ve3=MfUf{GeP46EhEP7abcL zgB8F9*_{R*1f2$GXajHr#TV=PK)yPAy1Xj%rsuxDlAWJwaDU0dGBiEi(E89A@e0Hi z+-9dF&a1HIYY&^`%v zU6A(0^LZu+B!k$7o`!~ofx#L)6rgf95T9D-VBz#`8km?su|1SOfi?6KE^?oMU>qbS zaDO0(s5stHP*4DH2ztZP#1=&gDS~9-?Z6r{H+~V`zJGLN)oeYkZKj;c#{`E0oLvu# z2W){l!W8s%@VqcULEy8_`C&+4*+fQ0E`mC1etvH$(enlZ3hm57x3j~1*SpE{ZK1#g zGsCk-fv!W}*nlxy_2$iw;CBwH4A7gMxwV9Zyn#YgUCrq)1bmHQpfZh_A6D7SgOmpg zdDgaCl47u@rywn@rM=zcH_*O(F*FMbKGg#d{%+w^O^?F4L@;g6YUy$)NkBxx> z@3eBpBbL{_<`pTpXaZJKMZuRbF);(iWe{~ML=>_1Rh0|1FrcaD=%`g=Z}I{|Tz07s z{Jmzex#~rA|6^4Liw%$s5st$^XX}_{4zWREvJXWT3riWyJ*XWiC@3IdBV;xG3b3?q z#eqz}aT#d9K-)!5O8UNcY2x?qI-qf-j{g1omxGfN_;%ne?xP?dxE|cSdlx8Qv?m^< zC<71vopaKPPu0n(sC@hO4TPSNAlAddiTL{U@7C4~$YoG{%T1w}%pMX|gk(&e*q}aqT27&rhK4gX{`o zBA5a&MB;$)<9+ubIPQSJKrjtuOnWaHi*l|ZVdv-pC!_P=DNSTJ7>)ykv;$h2P}y4 z(%~nfAbOpWr0K#PzU%4KvcONGbn86)hoK79LdPI&gFpcG+C@Cs-=|>94s}zWaTzq`ZYQ_Ixfx)f*Ax6B)?`r zRh^uUASFD%Jllfk37fdMI7B8^HnxJ;6JWdk37Eqg@^`3MP+p~q=l}{P1a>$i)bu7e zP2kNweel3v-`H5?h}&1kjkmF;h5(VYi9iA`q6=gPitq_iL-b})STflD6@!ukf+nDz z6*)Pmp>5$}PhY&q^*w~7l3u&P-p?-tu_nwYkt%1?U9+>g~OE_b#XH-%qbdK~d%6a_iPDsRZt7e7_oC zj6qc~ny-Ge*UJaVQHBHnj&NZh=6lQ@(elw^<_sHv!6p5C>b>iv7f?T`J(AA!vZ3}NXwj{E$dfrtiMsY)JrM36s2l7htl z8JJktytk<-h!+D;*mZUSe^9f^dUoy6Q!mmiy;bu{JkwgxCO)O4eCOhVkB0|bB6d)<72uN_7?xyc%>ymg`|+@aFyV zKs=ajO>J$N$@xGT8YGy87`KxaA_BTTpl?Br92gQp%x10v^&*G@tE*3rW;_!U6K8&X z#PV+u52whcw_?IW=S?gm&MO1%(sP9jAyH9qcapJe!9wKVd_f`}D3k(zrO|ok20~Uv zg(|##6%$rEkVVyJg9WgLGTyS*JOK>58`4YB5TcdU)tIQLrNu?Zm4RnW(d$b~nJ%I# z;hSH9<>!Y%2KEy!H8W5_#Mon@HxE58 z4+*)h$RQIFBH!8FjRDrD(~SEgpiV(;{_tV*f~~u|yN^IJv=JKIs$Ex(fma8~p;J>Y zAr><0UCGp!SwlmLY_U85I>K;2d-@cpo!8W9(8UAT!$H(DUh`dG3R2SB_lWawo?s@4 z5b$}_!P}X2p$RbpTjQiR 
z^&nF6u#^jnkDoHC1k&_&4)qxaSH<-+rXGlx$to#TK)MWwmC~CxNQm3$cv#rjESW1| zQ1rWk^cDU{fE(`>ToNJ@`t4M>CPet_vH)&&#opD}I7@>Lc-_CC$~`++TLK04($XU= ztUgpLG_RzJo3UJ6SzwWGflIs$=;qSe+8Wfr5SOo$0wPZNJ=*Q*>FMXspRdN6BuOU$ z`c_e)L>C1X0#*9guhHQhG6^7)jt3tP$=ypuaGH=v4@Fk@b5>Uq zT#|!RbXprxzF;+pcZ?>S;s#n91mE1eJjFt)uV25iIBjZx!WXiz;NB(nOAg>TdjKsD zV4#Mx&6k*dPr}52J)5pb4b;9=HV57phok z!(f_%fuW&|;6*>&yaS9#XjZU67NpaB8yp}#ok!T$uljm=H?8h%5w+;rz?c&AdBjCU zRT}rF93MLq1Rny|yt=9iW^T_3PRd6^I9dfHvdAC%stc2-A#hyb)w ziBla*d$(9wdw5>j`w4aqq?qoD!Y$)GcwoD?D1|^kd%Wl^kSb*rwlISN14~OvxNPQx zfbI@TTF6i!akPWDe_9B2fHY*tP;kK=smaONz*4&Lq3_EUGR)b)*rJFN6C{-&kp~y? z@iqo3Iy(2eLmI#4yBHYmC%XoqWM>U}!ph97P}n$}Q|ggCvJFlIih95#yFhdCU#`4jv}0Jt59|KNy| zL&XzI%F6VMuiX()B<=^c+H>*1ADn~R!s+VL9TWxNjojdmPyiX- z;$!g_fF5tF1drUn^F28`1K~aNwOv6W4Hi^dR(37J2dp{Ur480C=p76VN7!rxw8iP7 zNC1IiWp$YU84LxI`{_Bec@-cJkTk(M0TL%95kgpL;^IX@mZv&2NvT7rX9nb5|_3=9cD=QTB5}vNAH0?46laL}gV*nIVJ_vPY2- zS!E>&$;zn6tc*g)cu&vsfB)~>aoqQDKi&6}-|zdquFrLT&d+(C*DoxS2Pr8#whdsa z9bk)Z3JM(seQPWdB>~(JCjCx#+0$*?+u8=}{p(=>K}8CtVjC?ot^}|ox%&;)0k87E zlO4%XY_d2>onvFvqgCi=Xl{c~^z|J&d7DI|E&Xgr+9&tplf`xE->~xPsL0OWT#>HN zwFi7l^PNENq8=8OAVBuuIGqmt3BXV*h>EwjoLStAKZdHZM$qHrVqw|%IVeI1xPAL| z>E-sF{c3GJ&cpg0Huf7gZdkX3Tb3ALgFsc~=HWq1C^V~O3TX@q3@lTd#(R}{r0f(6 zvELlRMmYXMd0}nifG{56$d5@yizzg-s60=*?U8Kp zcmB)Yr6rS!D~5j}k5Q9?fQV=vN9IV+T&!`@LxsDMrV;vK(*EgBa^E&LUq;)3^)Q6~ z!_Hh^N9T1xe)WL^))p3WJkgI?Z!LRyI*)4)8BFCW`*Q(>adAB@zIqZ-8Q?Eop5A=+ z8$Tc28G2KGLO|;{#$4zh@K`F`UgEnPe5CBf4!vk=Ye=(I1k^3yTGHyCAkRT|do2o8 z9yuv*ZAx!eT_rT^coOD@Z{RrxN1EvAanauYs%G@d1#cO#B{(Mc5FVqF12y3X)Nidj zAs7VQ-3m7I`iapp2YB)|6fSF0I66Aw^n7dc8}&o-P4igxWY?3%OPRf)>g|1n=CZ!) z9zpUaQD`Wh`G7vi)=2)awJ8>K-too4XZ526%KZsxY4s>mI@Q6OCMTf86ALDy_W5%F?%b56=-J4l1k| zq`&y1H5D+f5K26cIsjciPf7-&AUHYi$ftWeAZ8Lh9zyb2J3_x8x3C(}+GPl)3?H3iR@U-vH|Wgs}O&ZEM^2U#|mRdNFFwSku1ci`MDdDqy)^ z@`QU%EqY@LlHD`2)2EDHKr{m+v zm2aKSuFTRjiP(Az0stI!FFk;`zMPz#wl;MQ&7M2ebT2p5Lfr7G@n;J&goDYzqx|aj z@`UC$0^$4Qe*=u=(czENXyWaQ3qkFN;35wrB+XJ%p{uPsDnLcpd0HCKUHLAO360pcuRsNr_SX$ryK z!E_%HKj}O9r(H)ky8nQ{YN`buIHkgrF)eVAk+=b#Yvrr5MWuJrX2{mo7P(84F5F*8l*`-Y&;{ zK>x2Ycq6kvy*|stW5_zG_-P!#RElrW8pTF`=#U={7WAe&Av~m{(+FHa`UXZ5KY#A9 zq*KUu!otixoY+2r2%qWMS>&!fImNlnWFD(PRFtn=&>PzHWe*9SU&)+N7jVTefSK$~ zy?-QqE7h8(s_Ouh&>h3a@}PZUHvzsZdOEtNhv}hEJWA3t9XSR%2^kkO zkYnrW3Z@xT$|B3eI7vjy41?bQouosTrOoT<#ddmUEO&tCynXsWdgs?g@7=fAMMVDK z_W*A2W`Mxvv4huqw$P?ybppuLc>d5o6t>6vOL-t{*+wFIIZu zvqihJyIIlZsX(xUpN+N&{Rs$?vRi*$K%G-jQFTa-R$a40*b=t(AbL79OA;&};B^7- zAv*7gvckreb|~y{{uI($a5OxEO$7iOOs*k+A0FT4<|aOJWaGD%C^j~r-kVF)wk%bs zq=3+X6V^OkPq+c0gL>`DrT)%uyS9D+cBG(^`!Xu{i&gXAbP{+Th<1qg?#0$)CCFvz zB6JFoIFjPW_uY|XL!lOjNOy;_1y{a{04irzVmryzxC*NMf`T9TR`A8aYe66^EG|}| zf2Hr-@!kQJlxctMlAY{K>$DKUzh3}QLyXd6jNw{a_bPRxTiRakH;1ikmU4!N^#!<8 zL~x;oUz2o8d8UMqoPv+Mk--^cV(BzeTTA31W?I8!Y7WSUr4-2~D|jeu7XgRnJ~Fa9 z6rR+d5*tLT8=yC$xl7YFQXWxhm{0%RGei&x&v=DnfVq1XT0?YIT>0*OAnOMVT&ag(co-iug;o&ST zEpf1N#OB8o=h(;7c_m zHPP{+-;rZCsP@Qv@#5NXr6EouA&|ficBZeCf5@F5u5%U@W@Oym{!%WW^B=AX&$|M1 zHZ)H_lwf#f(I>$;jZy|t(~y)L>767h{Ya**;^z-fN# zd4OYStjqr2-T=}BVsZ#Td%^e2$z#Xvx7qBpoXpHlytcA8em8yoY&0UbyZj#79dHP( zt*!k64|}*EWq_aq2NbfK>=y@&OxiId*3T?wEL3({f$tD>EDBd%{bLY)!0J@=EqB-j z_MnawJb%7`c&4jfUb2Yx3O#rnE;4E4B3&%dk6@L$j(34nu?q?Zq~#S9pkhIQ6*cjW zixdYH{u#a6sWd?unNg5sI0_Grs?nFDn82QT%G3D;N?~p;FUD)O;SEmo0;vi_&T|pL z7Oo!FbO#UqLaZEM2xdEk0Kikw#dWQ#Ht)s6=+?Lnnr z3bw?977!HN`WC_l>T_jrI+#G6SU_6In?qXzG^j)@p*`6hPesY^SR zyf`PPzxGyttyI1*&;$tcI{a}c-_G*}Oa-TK-V}6yCfCno*5W_JcIKuBPCtwg2n1Nx zmaE6ooEx0fh4admRQYvvyXq`GG#-^my|w=`pY`Er(JD_$&)oq8xTBqb#Y0oXt0z|(*!Njch}3482CnVFO0ed!gv7|Wa!5W^_x{tL4lL(0 
zhxXWBi8LDgdS4O7QAmD9zsZ>s&SvoOJZ`&mcL+QrkITWPH38Ug5W} z8x@t5@L=`@$@9d{pbA?y2A>#JdlVacg!aCPN%i!nq0{lHseWh=>FE=|{FHm~YxLZd z5QV1MnWUh*1OKuPWC{Qd))`PI5R3GgGea`8kCZEqydN;JvNCI)N={6C9oERoc0RMb z1?MD=(i4Zmkc3Bj@L*PErU=L>@TpF#Wkz~>bkx*97U|V&Ao&zQ>)F0%=on!IS^^bW z`(FH$8Xhjp)1Oe`;aaP}r|HZ8-%n$HzkOm0B^0wWFf#-KSVAl@0UW{t*7kX#MnSt% z8ET3e+T)(Ql5s4&8%_E|;;)Gp_pE3&BJAA6(Rck~rCi!06w$|nX96Mtzth*#gKa&C z(-8`$+#3ntPci=95f4DGUBB_W9Y;@XZS4hJUCiKeBw__InH3atnWg2>wnmq0^6GjK zjuG??PV)vyndKPj{&&S0t9g5SV@hUZ#6)Nq81M$(2XlWV*|WKxC!&R*ofn`@qUPkD znDsK8mgUmDMvjK_dp(5}Si^OW98%*UNEbI}&T&~V6`A6%%ZRFrQ|oyDo}82vF4=7` zr3j;hB;g<^n=V0YH~0wfjZi32`G^S{piY5e%&!c6y;_Bi8UR{le|?A`jvG7Vmbrn) z9aa0vdY1Y7|E(vLh1lUh${10CIZ4-^{Ez|_&1bXPo8_~r$qCBV3=jBdQb~_}1f5qU zMozU1v+%Y8U5}2YZxS^J$c27)b1!jSOwiqTsfSecMxYwa9<7gXBX{@DXFqdovtqOq z*Vmy~>t-Iy(tES)@#c?@j{`-9cDdIQ_0mC$Q%1-41-$`|0U?GClZb$0UW_=0MEwi%w$_h?SLXc@C~zKsr$Vl?%6~g)s#I_o4tWMDh2o zW6rOa_ovoKo8TXDec7}NOKEe}ih!3I_b&o`UdV86Z<-rf_lh;f?eW3nj8O~H9 zG%%Ttef=4=*JuWMG~(UgfDX{MZT_1FQxL0_BAIm~2B?UaPR4uT36?f?;)jlo5%A1u zo#x=FtYLqy|WdjuopcYt^+Mhi+WVo6XuKeG5~L4P z^tt^+m#^rx_QSx|a07eS{blB2aWbq&{L*N! z$vYJg)BMk>vqs~GB}cJ;|M;IZLIBXt&SBCL&S-w^e20Ta7IjtZZU6Wv`ZgK@_3qp9 znI+TYzU%*pnL;x7vUEA($P_d9qH~SXAB?$qlLej4ocTCYJ(d%`nth~l>w`l}m6gA2?FYjD8hm&* z>bv;v@wZe>{rEJOl+qjj)@Lq+h_75}iLR%`IkO*sJ<`qNq9*Ma5(AM3?grmm9ETy! zKd@SYbYu{P13pc%;yC#ZtGexrWUo=ppD4rV4S^6KHvmX5I{)hb6~Xu zVY>hHP@@g$9X>if7Rq$(dko5ifb;34xq7FA&L5$EOM2&0-gi!#`t0oUTw|HJw7cY@ zZg+TH7+_9MyRNSO{kLg1y^~T*_vSkFzQERJ!UaJYX_Fr~WA72OXcp_ei&Nu?BO_?% z>ggsvKAXu2En>cQ9w~gvAw&cVhHSmOhrCG-mn(9zB+i}N+bEQ0LP;IC{;hg$litDC z+1X>-gAibn_wd~WKa)nSYMw4UGc0#hCa!Xa3eOhV2V2VVMr*(2;yg@60H?>rsS}iO zHR#Ug>19$&977ApuW+1acw(z-&gyjd$dOFUnRh1MAtA7}V+mrCiYXPBl$7x`HhGjq zl(+_~l4*~dFFHrtOyFYS6jO14Jemu&`dO`Ah z)St+dP6;O(Wa&Q1mFLVmcz;zmk(59e(rC>PZ22VT{yW7f5#Nt$akCDSKjwIpV@W8= zPMhq1XX;_R zsbA8Aq!eQVL)C`yWyAMgmnRt4_B`7kb~vj(wDYZSm*vchM}&Zy-=X`X(quY(Om$2% zJFPw*d-zmraQ@vpYhAO4e+phDGmJB@X`Q~CWmBqXbuvX$?23LmtE2kDhXFydv(!ux z%3QPnKj=e#OiV!9RtcDgz!xYk5gM6r0x{2af6*`e#6Kh2Pu{ruUWa`Q&|vxMbIXXHuUI@j%~@koe+he!F$H$7guxuMF=mX^t6 z*E)wqHk;k4PcyCG+a^C*Voo<4%6`Gj#1l1NJ!;+ zIyJGsX>6$T!twlqU4(5Z<_MyDg0Wk?n-{OE?hCf0`Fl8SF+Ys2K(l3-UWE&RX*EW# z7p4C4IXQ+MFIX|HTdB@{)YwO(!tMXXSNjNU!s6=Q;?XA$RN{k!pM{0GoYA6bw@KU+ zWV|JF;W#bxkD2RhQb(nVY%7T6QtrNb^(r}e2x&_|4Mlo<(5)a##R+~~WOwcG-#EI; zs1izK#(sffq0iCd!!T>-ZE!+?gnkk2;UmOi-)jB%`?DAtTP`l1H>36f6Be$mpPa(t zR@f=f!Fs|gu@m(4tbXe=5OOt+z(k{Jh2ks+<@SWjQl1iC!)w% z#D##<&YVHVUPUo>#VbcMWhWXAklMC0b!4=pEI#sq0ug3^w$GVHc0u)vUuQ5k6Bl2_ zuE22xVm=nn7cFbj#t`!^wxTZ)?4*<-t>4^8?{v*@5fL$IjxX4G%4nohzqOIR^Oc%7 zCpGdCRj^~Ba@5ScWam)W>b%s?XsUfuaf-AmTob*$E!EY$p;w(8&zQ0piYArcUu-N( zFjV4Bp6V?Ak{zCrTa?$9$ktyk$iFT!JJx)3ali770zF12-eK$1grjEMP9^k z2^b|V+8MMFMNDR|q#*V0l-(Z2cE)5+Qe3>uOS~xS?)NUauv?iW)mE&OMX&Q-lJewL z=BWqNcC#V_uOG7kRA&r4n*To?c=w}&FSoMdw(ARvmBs#X-Q>^K6Rb~2N=;4n7g#jZ zK`Nj+5@_$q2%Z?7Xtt}HvvW1xUiL|M8;>5xiR4CVW&1>nle!^fpY3(q>^%pEAASEM z*37LXY~FuQk?7Wd|H{+Me^s^#4|(jSYz`hu3-k5a_)zN@om(|e_{al;el2Xu8{gYv z@l8*ci+}t2^+9qnIJ7Sk@l0zf-rlRIQXustH#s9&k7B}Og#J1Bo1RI`(qYxB^w0SB z$OV04P}5~Q=3Eo{X+IHHfXCLmJ%!3%ZlKO~dix4z#HJ1Gj0b`2=0}ml|3BLB;JJ{s zbId-6n*!8&zY3Wx2Si4ZTr_05nrjnN&F<`MCGGk4D^Qs5elmB`-#x>IvsIoXl7%k8 zk6)i*T}p(iIyzw9**dG;V)}(aXF8s0q7Z64E0I{(3MXYn6BA~)r+zGJ){*zfxL z_)Fa?pXGn)P+qKKC+YPZ;js^!6_EVi%YMB?So;?s>pBas`St4S9zl{D+8^0Nujxqa2@1 z?7IIf5YrLJEoR%lkP^K2+;|=+aY?Ccb2ZdEhiFRKDFFlu_(RYI1Ol3HTrq)ZFvZd` za}sleC3__q{aUyIr^Ta3OBp?bq<7xhjD?0?DS7#E=#{a>6Jyg0dxLK;E&bm7m!`wJ zsuX6Dk`f^!9TgXMKO<9QZsa&S?J__A<6Q*ad|J|%82&v{lYXrvQ5y2Qr<^>FFHVj# zJw}BsD@Pbo8{J;EZ~PZ{->vmdvJkQS?$lz@V&|e+AYZm{$kNT$(^ch?rVOz(DA(qC 
zpdv2GzZ_uaqCqj$_}2NpiP?v7V@Bp)S^AXoNCHUL>kH-PXSz6I$@xGf-_dEA;ryW# z5(0I*4MTAE52aoPH1h03u1 zvJ1ahmBuXz7mlfCTvVbdH8OKP#kh;1ftr{=Ohg*I>j~$(RH4=6#n74+%JN=`fuUv0 z1!^+`n4DmtK>wksrUtgq4EKs49S^PiY3I@5QoGkeT~Cuz@^9bvB9=Bwc)eQIq|QUj zxcx!DO)(>jBWFiXh&Ru`rvHtbmXw@KX@o0+9Vo=o;ccEVSI`C5K_hu2`l=97{o+on zWSI`AQ^@l7qFnJJ<8ff%WlibF)#H@+EFYN^^BtpUYNcOXqor$h8Gft7r?Yo25ntRr zpP8d_S?i(K6ttP|IEa3V8Mv|nC08ViL@9(cE0J0Ip6bqkbFMO(i{VyBY~GO6 z*D$-O?e^YH^TpZ4)UU9~QaAtSW`r6(Qg+i!p2`^=pDDxdr5w-2jDCQCI}aHl#L#H% z!E7NOt`3pF+V7IwdC7^cRv{b`phV`De$r!;Hxr>WIP$nomaHy2xjZvt@$w3r8}Ym> zd{JS-f!up^Ubu!I>u2jX6)!kRw5_bA%w@G0ike7{mI$AAJT7ET+BTF z5QAH9p_P zj5mGy{Xe${rT#4Ne2|LT9EhbbV@Mc2e?|T^FOx^e{=S@OhIvSLODdqy(haP zNXqLYxvzrK5w76Jv4==A&FCr-GW9aFM?|zG2ugIOIMsM}yKnjQ zR9)*=)S(QS=zHDBa74AvdS?0oZ)#EX&EId7l3z$jopDx4lgaCQ=o*{J@`^9RseKht+&5X`RDdJO5GCp@_Kv`0+r7{mT{GXEC68o2XMf`PkdiI41!_$u%ek?hgiu;vX(ro{rtWZ@)ryi+8Xl3MC9m}=oj!DX*AK* z-JVEDC^m73zZ_1j(WB#WFFe!ysHKjynP1)4RaxBwWB<6C%u5EN63o3P5_5WJ*|;A_ zk+pkmY3Log~kZSJ8KETHBMCYhog+snv<|-&^5|!J5{{-c)JB{sG>%!r8pa znI~tKb^JoPCDxm-Hjlm>pS~ZRNOW|H;==PQFD|WKnctF~mwmbLFP7ECwWD0QHc9qY zmQia#8dHqJ*Bb=V;Ksf65<^|z&qsU5`LP@41F6W;r`)H?MB zfBF{f%M2k`6He}%AFa$;I-k)dDxJAK-AeL`IDPuZi^~@;T@_i6di%aBU+>bdh#Wq_ z9Bv+;3pXxb>9ba~?b%F|lNY=D)?WUfFlMlj!W>QY&bUT&6!~)SS|RfqqB(&>>PL&8 zUb_#)UCFrekwEcC^?Ol!19_9`1T$&mGXl$aCY7@&DTsOmViK$FHulxByLiy&U=WO^ zz)La(13#SVi;K?GG-q|nl&y90$nSi;;!^DX3Yv|f4{2UMM~v5M=a(5JN)veBoeO<- zSVqs(e4c`Yn~*d$P9DsaDkc=3u^bRX&Q4VJyXi;r;|#8;rTH8SLxPa~>NiHB)AX#4 zZl}|#^r~F`n#J8=23#)r{d+w-+i<*PL9O}SiR-B~Ztm~9I1u0~@T~ZT@5dJjv&^HL zK7a1rX6$K|&axQvA6lpynqnJ%b!oB6^;y-(wco8X-f?D)A&T9O!n+%`e-7V0sTnEa zdO}9W$lc-Fr?%dR2q!Q9gU|Y9CB!ehR@y(uv&3*OoTp6cLiYJVh-UhKL@Y7dzT+RS#Ro-MSuzjsUwfw_qy*%MK0{U5LHCUEeFjO#v^WO09b zKtibWOxACLd*&-)xM00uOWb7v$Oks+N@MgfL~RDWe$eno`K;SZlM1{3Swbr-RGn5Ho)TH*kYGG#mrQDi1A`3PBY}DL&rRm?k zo||uFCZ~7$A@jNPSKjNy%^2G4`L5_QY&H7{S`>^+^UPwKHgCLD2frV*J}0Clt}cJ6-`ILvv(={hPt)JGs_vWY%&Btn;qo@3=|{Zhs-`f2IvQ$)C}#B&wPoO*DIb;PrXDXB8>J zd@9MpU?YF2%qVZ;sRcd3d&4i`>;7ORg7))5vm%lJ@s#59wyL#;be_s7E7Mx8FJ_vp z-&V-k#MwVCG(YKXVqRLAw2jb}D07|7d@Lznc^f-$?@7OnRg7eP7Mtn){|$X1+rJ;; z-%ozmc03U#RWX*2z-Ta$2d^aa>faOi&%tvpDbIkx#i1k4;``xn_Zm%+fOrCXjL9B` zBY|W!k!eh{*YrNmv3@=~JS}*y@!KOwPe*t663v_;uv~tATbbe4&*-Zj+T5~@pAgqn z()qHlUXHY+P}oZ1mUM+ve?E1FMmAm1=~PQ`I)M)T_+|@jQex>+{x{1tcX%VqdTpNa z=J$QO7ig?fYBFe%bL^d?=);RO=R^ ziGqHOcfOobX9Py=b8`#oHyb&>Q;{55AEhTBNH*P!jXAs&PE&nSQTc#mB)}eMYC;2I zRzIsM4_&FisNbFCZWb(JKOKy0&_?)5P;8WGu{DOj>o*n?0#MIO+-b@Z+>5%dAs}Um89(}~S z1)2`u zCzC8|9+TD?qYBm?d;dkxbi{x1$5imdqcS5xZt38Dt%o*QS@HTNF_$0H+37zI*Uus* zPOQBZcd1`|_%Lgy!gz*8WTX zPmSs`EksL^p8a9|+nz);p6eL(!po7>A~ym+9J7@fv4X(#6EOt9hG^pyr2`GzU|PokT|Y9 zFL}eUwd6pJ?pkJd=EuT5ZG*B0ugHOGwaW&0cBhXtlxCK!bj@kCP*ep)l0~MOY#PDgg1R*9oXP3gizUe!8_ z=UGG7+CoINO ziTK{#t%nKaQz6J(d(o}+{(7F>>!+#n_pT)@{FKi?v&iJG>hD^2wuj+yT#! 
zDMBglQ`b^a^H-r3U2zsNl0B6}bIYegtsclf(UA+?71d6?m;2pMUlm%M48kS_rpE#e zg1gHcL*I6lO}@IBkUnGl>R!P?vhuN&WJ-meVCKkEK@aq^cK3RXIv=IbxF-P!;6ctn z@S!+58%E#G5tXK!I7S>Cw~M+uRh={Jjl|3wt2mbLg*|f;!w}AaAca)?&GM`j@W#Ek%Xg@r#kstcrfN#&ls z85e%qdaKR6zw2^&Zt>uh`^8(el;xk_{vk1pKiN{szh}3+&@?@94!ii}V^1vDHADVA z%KB;MtsT=eQsg}58=d-iDfC!;syR*d-kpL=LgyCWe|Rg7!DE*Nb#!Ypqkvz-wCyz; z{v$>~%__mItZV_7^!1C4%HKJbZ>?0;U!;*`XDTU9Pj`CR9hGY`q`lJeAb6)}oW@NP zJHtqE7Uu!eiBxUU_tJ-@rAu8%YA(ph@h9z>q5IRUq8`4>;3x=skZ%4N_uB$Na zyPC4!wDbb~PhHOg5hq)O14}ODG&$%?oH3iTd+p9tIO%qNGem`5ap!10$4)ALSs=)L zMc`NwgLNj0UBtubt`c6=2z#D&*I5vk{e^}-VBz4qx-`hQ6wxLB7o+S`!}f0)arS<& zf`AFtV{gFDwVny}8<*__?q)YUX^}iQ=M;ZUSLb4;M{(-az1pq^K0H@zBPqOS5vf8Y z?{(WJVIS-3wgX4~%k86OzqSgK5aa^5jy;I?{5A1qm(niwgAqJDd0S48nj1z^(|`T_ zx1%BZ_hq~8Kp*P`4`vFArijSP_mIgTp7H_od>zO!jp~B3eRR7I?rmeKaLjrv6BACO zZW&AxI?x$7*PIgej+)f@Y?EKb;FUYu*dZ!P_)fu=I(nk$#U=OH$23Qbw~F(MHec7i3)zHm=Usk&-w+jxwEc^rdB2~f zR9k;u(+>oR`+u{-gv9PyMh+Z+uxA_Nrk1ocb6wrv_gKmH5x==Be@Bw^m-^GUa(8<3 z^$NJylz&G&mfsgbu~;V0NHai2(xKTKUdnqvl!TDn0grhFXl#8wz4x#Qxkxy4Py68j z<~n^7&JIQ*lAj~>^{#23O04Vl5>Azu|D&SI`m64==C$zkTO(3bbuau2XBNrMuDIUu z*;6ZmN#a(^`QXdf{njoEF|Z&d@D#@`f@X@u1^w&S70t{FkD5jgOjBC#Oflp6cLhM) z=)2T?E~S=bcly!)0B(kOsh=+*(v-;VDOa$Eb4Tb1b8|E>G`!WVJtZ2XO*ON0-3y^f z2g#tEXfAkGw~h{O@9x`(#b2G0-M;bT%&V;6V21k_D z{(ka?tKjhSm7dw8B(G89iq&70`@Mes-dFcw%O;BWe#9IB?h-gXc8-MI4VWoGoIg@A zek3s9a!CxG;w(9a;bjSttUGm=*s=WcVY`j7S?>61Lz^#r9OXSjzn#vC>E88dVaZfd zmA%Tb^{9FN?~y<#b8)FIBtUgo1z2=YaN*9#J}IwDK-FW)L=Ea9cFS7xN85j|oFFUQ z*H>2O`=>Q(Z$Qyd<*tVPva5ch3&Z%pehvY;mVvBm(IR4|Ib%r7`KVL>3kzmieWb)FktHC&xbbTUmQAHbe5uY zKbp;p<=Wl+2tnf}lw=s~y+pDKSdiL=aPgF~h^T-n ztws-lnv_^PKI}^W4^P6AUV9&zv($m{!AVA-Nc#L&3>UktJHN~RI!LTqml!v_;w|m+ z7Zqj~JGVp?#<>$yoyKGQgoe^9g$W*9E*`ITt~iBYeBjp#^t9j-KxD2w3d(j+Pm_U~ zZ`srHZ{oyxdlBL5|L}$TQ6nP*47=89b5R# zqJEf|z90U3X5QV?Gu7-*wYlrP?k7^XTx(dJ+aam(MCAqZQ>S8t3e%Fvg3~`ZBnDcl z2@8{idVD$};(?Sf2+sBO56SbMIb(X^f(Q~Iplg;-$H~(E?%j16gK7~`+Iq(&2$Fx87Zcmk#qo!GwPkw?qOKRQgu3&Kx4Cx4+}k_4sfy--kL_CAr#^`8T0d15s*G%B5kkd7luayR<%V+i-YMnyeb zCy$`!yRrJ9_alVkoU=h2{b30w#9Rjvc93}9_L?P+L+eq z8m%Q+Uw1lvh0F0HTfXc@g->(i*w_y-7lx%DVjj~E237+boQ->gg^{MWJy3D#)GvLo zdNDCpPRIXr-L^OsR^D_g@%C+*i;*|+us2&|GyGb=oRFFlRMqdZo-@%oq4*j6q$C#? 
zF$c^Rb6D2<{!O^C-~ufFtqqCfj^M{#n<`u}7z*Hiid_2Lb-S%Zr}Y&pl_x_&f6SZ7 zIy>5rBLd*5AK8+FuScP%XWN{K0%b^}Rf)GRJ6qcEba>Ss>h(U!QwYE-M4kuk4LUDKb7<3I-M#Vl6q;v7_OKQ&At4U>iMVMUc`@BTUp)vH-{`0 z_``8!8$3qC6K@a4q(`qV1HGLa-F~wu$X4jp_L}VXc8aIm)|ZGEHAHW1t|shPI~%Or z#VlYCl@XkYkpCmPv>sB}2A7T3Rt&fW?8FJGKazx1{;P6w0lEDIu7f%mUuCz(R{dRh zn}osxEw6Hw?oO2Fj%{ODxyzW;KA~ns=IZPm7acu=ObJZ9F=vBfAiJ;-m#%E$e=IC4 z2r`CslBdMRUR(Hj78ySGw?r2pQ|r*skbG0+tEOF;t*L{=kfwTX`K%}G&s^=F78eW7 z2C3YI)7r`D2a-+q?+=576CV4}n@WU$Z7r=U@_X*+UNHUBcJ@ljH}MZnELOsaDXwxn zmJEwus05K`D=I3w@l~nX+|#oh91=n?j=edr$ykhW+!DT6V=T;W|8Gl;w@?yo|;%6rp7qfE5i#Kmd`ZQ4mFx=VR z8{m3(n<}Awr1F;6T2D?>HebrP@TjV(Gh7`Ui{1-cPf#8`CAO|j`OOV|@W<<(aGP~> zI-;cnC;5kSp|Y5u;a(f)g*jEV5`~*bN4LgbU!OLL^n~j`5VsLbx$(O=22L~f zNSrkN@5PJG&cDQQ&vE6oDig@{@heapN;+iLN2}n+VXP;^dShK>T8EJ_Q7Mc*FR#Fw zp=EA*+Tul<4IC^ixbnl*wN>amUe;4Q@LT0KD!LbqH*?|FNR%d1?xxIE74 zi?;z!6x>zIOG}0M`7zPaiDzzIM<_fNEPx^Pm-}7!>|8~^?p;#eW*?p7!_I<#XD5ar z_YyIdtveNqT%pZyH4K+$1QVn_e)9PJ@R<0AW_M)HEFn5i;tHiQr+`UKZeHH8(>%dD zsrSgTx1$$gaQ#vv!OAG%yB^LgwpCJIe)*dCKKDh^*7CCQoX$snGRN}L({+O76K$O@ z+uM7elvwkp^d`A(RdUy|)`H1aX=+51WX z$7Et;B&^ptM>v1XI^s5!nnMCy!j;~*2gk(7=reW-n!rWeP6y*6I;JA@Rt*j8W0}8x ztiJEUtD&T#N)xf8?0x{TQ!iYhPEJn9h2p3?U^zQk)oprO3fHC`#2GcedpzuERNOTw!(Gc#kO z!o~gZL?DL;q)1`(F_$)0oNnH13A}$GyYrFz#P6)^Y~?TIPO1!fRau5+XGh)R4QKwWWC1LC5jNE2Py)ex*GZ;@ zg7mu7_HQ!$e4KSf&z?=4NreF#ZPCippE)!~!Xe;|l*@Zx*(!}fJ1H8fC`=NYm;7UQ$wo2%UGM!-u~PB)=TF!;-g(6#A1M4E z6?|EXPs<)QPK=3<=0qX(?|7noUT*GVaR=J&lCI&8=i@S0=0=zspTnT9U+aG&iLJ{H z)d#}3@NjZCh2ZAeQOH$Rx(=3d!-IC@2y~LH+gtx!abtYKi@KzAqzO)b`?iP)5TGTz z&f-#1QOsh+Q0ri%1Et;1(jVzfGn0->h^x2Wk{nr!KF7`7M9-FA;5uB>W_b-29zhRU zNJ)^*6LWHOQYHr-r`33%Gc8K%{46jM?nzt>>uSIB3|I<-_#wwxI@)%zOY6%r> zp3#TV$mn?NjbHQiI5)Emj8?ioDU!y?+=z)yE_nLX#?JbP7FJBwGOSB~2VXGu^`)E$ zqJE?D^ylMOmnXDxG*b)VbNRbrQ%8hX*9p(Uz0~6W1StCJKn~VG|FW6dhe$Z65Wyn( z`6~sTd_)Qgo z^9jCUQ*QS5N|&>?D+sZ>hyZ}D{~Sb=OoI1`*mP}=vhvzXTc5iIR>vmPzbrG8P_P(W zx-?bk$t=&iw7Mu_*ZDDS+Eg%z`dxcxL0-`>rTOxYcICPLLgu?;CF?F)TW6wVVX~T$ zVRiMYu#G)|i&QUjSG}H|o^g_~5P8p!oc1}33y7JcKPQeeK2*(vNuy+T^A>DtpOcJE zhh+9^Au{mAqx0O{&G}$G>U;~bvxCR)wNIc7JvFJq1oz#<)mz7oizJ>;sgq1fsy)uf zr*SR<7vmHs*ZE0>acgD9ow;!*S{31HRC8aR6&KsS7!;2bB5xFW|FV8DE-Ya$#o;E) zYq`IFZ`1cRI~bfZHcp0D5_xY(`IDbUM|>|ld}(Fnw1|jtxnnPs9;2`n!wn*6w>U`U zY-iVMRRR?t?jAcMExmj9?vEco2CmvxUlO~N+5Rr|rr-i+9rAC;M=KLnlMtLFp0Mz9 z?N9JJ2}^51zRXu~Go*hN75UB%oD9^yrXH4*pWha+t7)-+YisCwOStL*o^Zx9)!7w? zS{LK{c5KhBH%`Z?3EW(zJ`x=s5l$mdq;dYEh%gH)i`{#F41mvf|H;nwTUX#&|*y4eG8YmHeAE|#uM)=x#Hk@Q* zHf#8Ym;W~K09j(j;JKcg>mK+r&aBR2UVkC~gpmDn%ZK8uECi)_4SJQjv#?nRTLc#s zJ$^iVg?>uCjoxnH^ULa2%YC0rYHU>B#TC%Uo^V|0X2DIkb0fp?@wS)|KMOKb^SB8? 
zkI9w7$Av$&jwn+yiLM&1)vMmWk2s?v%8^%`9ZE_oIIlaEc)7`un$~$6-Y*vI>F$2_ zIh0r3ShM1%!0Dk#E*_pk2M@9$+-CJOwEXtw`j;-%Uz-93jKs1_cy}_55$OtJID6rQ z3qvL}awZNPI3R+C;^|YaeJSmWB`&F-7TquGO;0aze?7)6Qdm@8zV23KLYXj*OAvl% zwg00(L#@MzuNSLSd=DceueI^vP=3_%au(;cnI37QIaTWbbos7zTX?<+D1d+=K?D#mGq!Uw9UcWw{a`;U4G#OdV7fd#CpGAHs%~H>Ia@-)Q z0G2G2@-ngQyqY{td+2a#yr0wCuN*9t30^g~>$Wmu&XJ~zxqdPG_irARoittJM?V5= z42A|})rQcnS7E<*37e`p7z54mtwtia2@A zrM0%!7tIiqRHB#dKmF>RK6?4G%b&@`x=qiut7Bm>BGr1YSs|R{x89mzqzDxo#VJwI zdm1WTeINZ6e>cxHAX0Gs`AhL7e>U-f!S}02ms|2rIgQr%zUs3(m3V%%dXIT3i5KBxF3%frMG9*Rh#Cn z;(^Fw;h!5eV4NEuc`tHlaK$oyfy+R85MmROrM!v}Dh|eVb-t38f#K1^haS*HBd^Qn zH(^Vptft1t(2%j4b_ARSOua~0+;7*LL3d;Jp|~DYD7crbsMa)9O^Fnjds+Qu?+%~~V4(1O3lkmD?%fETfMCkp$S6NOJsLeOU?#K+ zhDdb${(a|?VPJVAA0YtB!_#xegoxV}K?NVeW?^H)MWn_EOMChh5cQOG3%RUpJp|&x zK|w(7=mxn31e}q)fczFFaknRT?Yk5N7cN0prsi)7m=Q_jP(lg042zRD?kM~9^a81c z#l#T!3cd^l8w@7k3SJ|#s6B3IMN%%710vc=OF#Dai=8>M2^}fGIuN*x@D0dR+05(V zx5wqmuo#O+`mZin0=J_;V<>-8n6_2F8`6(~!9n!wo^4&E>DY6H=Jj=Gs!b11i724s zMTYLL!58PWw6u`Y+tlM zN)08Ypv+8bWGn%7JS8;QFDQUpk@Z;(pvJOCTDk%$ys8|5hyuj+o$*-5a0fN4+;8)y z)dhvIhj{o1IuSz5yO5B!s%lB&k5^AbF1KcMFP(l}S;%S!(p$yZ}qv_G5oNwY=W6k>!0>ILW99w8~-SGedsY4pwuU@^mzLb}fGv)O@M41%r zOFoRm5R*a$WKikC8YB-~xCT-lE0C6!Ht=7uqT=~U<-7BQYnigJu4w?s@*i@7_yz6u za|I5<$Fsx9YPTOH<05Cot|71t`sjI6iBNNjJ3iam3Y6(yUJWLVc;RtyHTZhl%Y+aeg3gR!v=fXE>m%+caxNNoa_=n@!11Pz?UhnmTs;xYwvI(n=;{ z*~~G&_~V!bBp{!T%#4tMGf%JKzp`ZLTO&g~t$Dr2DlVNy9^Ds`!p^Iy@trtvjx;VI z0rw93_%LiUGI5C7cRj%!%UIsn%*fc?37*`K-$znw6~+hnz!+Q2p%MGCX{NaUW+pfN zlJXyTpOKA&&}8Jcq`tP03nr0UmpjEo*aeTSiJTlEMHA0AGA!)J$7^rwg0~jF-3<;V znIFQ_fhr$A?ZjhUGc%VfS89+igN=azH(GOb`?E~{r@c1~$8v4ohwo+yAxVZzMadA6 znL;8-hLEWUAw#BQrVyEukU69>WR{tP6bVUY%8;349^&2awbt+VZrl6$-Tt5c_m`(< zt<`nk*L9ueaUREh?8m;Rw?5gI-sIx#{Z_yEo&+su z?V{>eKegbD!H%T!gU&G2z7NxHE?*W0AnvwYM9u1e5i-n;J#rBx5DdL_$+l5SkkwT3$+G+SGaV+X zL|gY=eAVXscObE`WqNuV17@li*&hm(CK6t;E1+%yupFbKp&cwIN}UhMvYeH4J0uH` z3Lm6!pmF~L5y;37~EeTSYSn;Z=VTj1+w?2AA`k=ZP>QX2# ztL}`*KBOpo`xYuXa~R!&x?x6oy3Sk6E-0z_{N^adql?s&ylX9J<8%;({l)7SQk+VYwOjnG$b?~#+8Hir(N?-Fa`8Km5LgzU3)4FYLkbwB z$nFP8z=#md2K%PlpVqzjj|AE0o&0a$C4fP0VP%R+jDTq`S zc?aY(u`^lA_fmQFn|*3;b4hV7%ea#E4Q+C02Kf^)=b0V+^bQ$-OrVeH(*m1v0RaKI z3dWync7B@(kU(nFl9Q<+{(&P6`OpzjJ0CeVC^kYb*VC{TB9%nVcz-H(qnkJ3O#z(+ z1T(l2NR^}{`P|nhJEj%dUH7=C>=BfBu>_AgoT4E7C_AUVXNI<8(Xd*_Kbn2 z#N}h}7?PUJwaIJ}R>&Aa9Z#wmr976%^@NHYgvTr`-?n*q&4nic+&ch-9$FuV4vo&t z9HFIsiwqOv(U6Gnn4zI~n2&?)`EwR5<=qdcFw{Fe&2#dkE7ax0;(6nW*aXnmC~=Xo z!HSH4WDKS5!9VQi1Bi$``(%vQj^^w#Vjp_fmFmG{k4yQ*b&4mKK#eT;=v6UwCWa= zQn{p{Q0%s92jD;AA_S0=kX@*%wvPUSGQ!ba3-UWyPHh;E0iDM~KuXp+zJRDVC~`Sf zZF#C|-nS7lf~zYlK(22ig+ta@apxz>hl=5uD0$ho&N z4nwK>$+9CfO`*mGjc`P9pu#wj@kUv8vEWPT@TDN6d~a`udXXzqa_E4f34@_pIss#U zC|8bNCLW1_fjvOQa9NNuU@d`(4(tB%aXjI;Wmei7)&;{)zL!9i*nknK3l>&D-&MTp zDwjt^Mazh&&{JtN&3DBZE;cc8G@pkLfdhol3pMogtV7$RFUOAH_4^%<-}5rX5gtGM zBt$f1P}uqdA3nTHN~-SfmqIJc{(vYw0|NpLE{>Z}II8CCyb9fOwC1t$aEKnKIbl=( z|8^#W0yAD`w2f8ZIP$Tz{q){6Kbohh@ak1PUacz~pTCqSJ&9O+Ca!hm5X~N_hyen; zKmzf2@6KU=X2hRAb917YF$83TE(CW(734Gml@EN5{}bq>PNRl(g) z{yl^!%*;<_q{;emzyUbbph3sQWqbL{r&m*Sb#;;Yfm?}%aDbY6rMZQKRaCSDF>KGC zJss`sm|!pj9E(&EY9CnoNZAo!{K&|?UOcc69J5x_4E{BrAVV8C0*|ytzA9K&l@%8Q zp~VT|CSX;IsdV_a?b?NE2vSXGZ=RwHg3JaHW&QsBF0d*Y*ONwa3hidsL`9kYHZL_V z7`M-I#nRL7v^!yrjwcTzC|m;tWfzg9VxSiycB!eyLWWR0;0(lqI2xva6oGvI{#RFy zBfoXJdzW@;Z5!eJZUjtW^GB^4#%8-8sOsv91As@qfK)D9x9GB>B7o8+2;?%`JcMw} zJS0|VXlNiF4sozV5nC7#+(0CSYsN>L^W`Mw)&*=8QGSt;H?mJK#n4Y=3{)KWnEJxh z+5zHm=!hdZ#QFU1z_*RZ7mU#s;wa6^vIfut+747&pvwxrFItjBW)z%9=;+D41ifEU z5GoHOI-23kMw1Q3s<2m(-r69_5u?b$ib0ai%xrn%Mh?73A#f&t;lcn05+TL`^ux*4 
zJ^$`;S#=q`i4M|Kt0Cy-VOVzYi9+_fPdoW9tM(PfJ4PT~IX&N}CVK^)B3CH(1mif(xk z!U}C(u2b4M2G7;Ts+M=LYnc15NT=R&q{6^~Nf z*)%G|Vu1;;Tn7#R5cU?#)M?b~grbVjbaenCH?6B5b*D(=fnqAakod%cwEo+fk>ZrCyAA2m* z^BPg)*oS~6ejP|-aY7;urS{GJkBv_UxcI2f83YU&YeLAdYY*pL6HPxU5s-WkqwZ#| zeofL3mON9XMcif-hB5JpRe8yyFQx*!zr03%C1f7KCq^B_tYA_ySRZy%{+p)+Ha8NR zHt;@wf;{$YC19z5-h`q`UU|zxt;IW1cAhv4O_s&IsqPi^()#cr)$E47D|20xSmCTW zr71LF5O$usz8Q9yO!MLTb;gVXB?%OZ{Y`}wlH0k2mq6sew)fj>%YV~O1+q+c&x0FS z&3vdT-{+Dlvgb>ds+2UDJ#Jhfj^S@e%Q*e+s}ZEXAC;je3AnWFH?%DB(b(k7J7J#Z zeFov~w6c0l>fXMu>zalUu$Qz&`GV=l5O#rhtCY^LQm1vAr+{40@N2AKi zcm;Yp-i_KQaq>K$cZG>#{0Si~*kUUENaZ^9_@vs7x&Zat1pXQdz7%AnfM~}cerc;l zpyU_`ZT%|aPaDM+_dC!O*k6C!6LZ_sWbPxMvuyF4lYy5y9)#-4YWq6x4OyyX51Ju> z+PUFJpv7zx#7xrI593b{XyNvCA$g1z3{M*((g^LKz{D>-4R|qsoMcgTdAFAUuVCbM z=mi)Y=^@yK&Z#pSs{WW<^Mp3fRue-LJBVO$tZI5^-F5HQnn#l-<&1Dya#h>b)1fXT ztE-hM$ES{oHxv=g?kw%JpF*FY?5QnY)QN*SHFXpl4C_Y|MIvmhH^|!Jr+BDUxIaH} z2yFSp3Tl~2l{$74=aRRShIu@(6i-Q8w>llA%fXy2z-TPs<+cnHGdC_At9;y1Ze%{8 z#us^@to023sFvkohe+4V+*r~}4^*DF$Sh1+`(z9E$P52Z^z`;73Mpj#{@n{x4cL~X zWEbni`!y#T~$B#n|q=mM!r&eZmR=dCM?g^Jb7Dh%_qrsskxJ*Ry%}m2%!q~~gWC>Td?|*vw(5);jY3xC zuRkKQ3Cn(4p+wz${l@Dd_2!G??`M*>MYEiQyPdpHV=|uwAj(xI;#Li$ zS|Kghq(qZhbch*eQe5?BA>l-@yli8uC}monOLTX_1B48XJU^ogSM)!gm1cFh(#BF*eeO58^vs({{3o;WgFA&P>C}DLY9CkqH$# zr1kW!A4G!q>N(3ayRGQ}Io8p>VwmGxKjreU$$O$236rB8suCR%_j;9$X?>%!-#RgV{f2<0J%+kk4b-FwP1I6tCz)kJWhw?jK4aaNPjg_Q{gb7a0^$aN!O zO{09ITJk`!2D&5O4U3oDm3qqbXy2;5xcx1ac(PvjrfPJbVdB$3^lK1rR2FYCrNSFP z9x1`xPv`&W_jv$XFj_wL<$5yQ3FMJDGR#qDZ!r;``Wp9qSd z=(~8K<~4Zq^tAz0MR;I4(oy3=kDS3(Qh4{>jv?BJW3sP^>T#vJ!}tSQYsBlc-S@e6 zX+2oVsm~tcSR37{;d_FsMeob&-f@S%so(UAfx0z|urHA2LYEc$j+NxL8d;6X-;3W) zs82<`?=q>@ljkGAs_FT!7<-tn=N|6x{q)ot9a)P!+53cWcfCW0&V#~381Ya21b5eo zl$ayW{lrQlSM}g5x2{BJ&YA3zh=>rrQuUQTbS8eQX&&?V-Oln;x^J1Zom61%X#>tX z^Ye(so5NU`p*E3w#3I(;O4GrJ%!9r5<{QGBoWx7Xi2aq6!yJeCI;^1^Yo(~x+1f{+ z!!^MTi;zGtQ;D(S@OH6Ui@qoFmMt|GveNoKsHywbLWG!qDsIf|Oy#*P)l8&S(Q2cQ zoQ(uWa;^>W-D+8{b~&z>W822^q^@+O|P{aTpbiB5n6Xq zvmG>Wc+W_OI6XEo)c{0zZjAE=6}b@g&@E<4`j$6Vq=d$Aw`Z1NzMVeS0s$<>#KLfvs4`uWYt`Kl;-%GJoQtd zc!ES3R*Ht+k+!7DTcXA-T+c-&UWT2&dP?+ScXzWmS{^erzJ5r08W2ba%VP`VQ8`u9 z`?SEzXCn<_<7V4~)QTxuBwWX4FSt3Vmpj577ApT4m^1TT5n6k-30&kj-dSzBKb=)@_@@^x@G^$Llsd*~!)l0aS8n~YVIGdZmiyorS?o7C5&rATi+C}; z<%v_$g^O`$QCh`%dqYuLO`N*D;p`Pf&aaY9& z!d#*~I;I+heRA>paQi8sbeck8j@8P{v8J?V8Vc;<&nN!y z1LkcV`t$UyR3+2cXLiI4D;!(l4Zdav+K1ay@Bg59I@7P#?=?f9D-alI)w$SdNT2Mm zvbj5sSbwq8^ate~hqQ!m2{CPu8axk^e}_~Tat>~u@EVw7K9bdcR4%!gE_7Qs=&Whu z#`$-rL`GE>m_2a~J81_?4ds?s2Ly|R;um^rU!~||R&LtZo>gpOV3hWp10v`wfGCjk fKTbve@^c=);urtF>|Y9l|L=?clYK#a^x=O1%aj@> literal 0 HcmV?d00001 From 49f7a4baa8152ac82bccb76d024b0ca1a6bc2345 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Fri, 28 Nov 2025 17:49:33 +0600 Subject: [PATCH 12/13] reconfigure tls Signed-off-by: Bonusree --- .../elasticsearch/reconfigure_tls/_index.md | 11 + .../reconfigure_tls/elasticsearch.md | 1014 +++++++++++++++++ .../elasticsearch/reconfigure_tls/overview.md | 54 + docs/guides/elasticsearch/restart/index.md | 112 +- .../scaling/horizontal/topology.md | 2 +- 5 files changed, 1154 insertions(+), 39 deletions(-) create mode 100644 docs/guides/elasticsearch/reconfigure_tls/_index.md create mode 100644 docs/guides/elasticsearch/reconfigure_tls/elasticsearch.md create mode 100644 docs/guides/elasticsearch/reconfigure_tls/overview.md diff --git a/docs/guides/elasticsearch/reconfigure_tls/_index.md b/docs/guides/elasticsearch/reconfigure_tls/_index.md new file mode 100644 index 000000000..e7111e587 --- /dev/null +++ b/docs/guides/elasticsearch/reconfigure_tls/_index.md @@ 
+---
+title: Elasticsearch Reconfigure TLS/SSL
+menu:
+  docs_{{ .version }}:
+    identifier: es-reconfigure-tls-elasticsearch
+    name: Reconfigure TLS/SSL
+    parent: es-elasticsearch-guides
+    weight: 110
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
diff --git a/docs/guides/elasticsearch/reconfigure_tls/elasticsearch.md b/docs/guides/elasticsearch/reconfigure_tls/elasticsearch.md
new file mode 100644
index 000000000..70df9f5c9
--- /dev/null
+++ b/docs/guides/elasticsearch/reconfigure_tls/elasticsearch.md
@@ -0,0 +1,1014 @@
+---
+title: Reconfigure Elasticsearch TLS/SSL Encryption
+menu:
+  docs_{{ .version }}:
+    identifier: es-reconfigure-tls
+    name: Reconfigure Elasticsearch TLS/SSL Encryption
+    parent: es-reconfigure-tls-elasticsearch
+    weight: 10
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfigure Elasticsearch TLS/SSL (Transport Encryption)
+
+KubeDB supports reconfiguring TLS/SSL, i.e. adding, removing, updating, and rotating TLS/SSL certificates, for an existing Elasticsearch database via an `ElasticsearchOpsRequest`. This tutorial will show you how to use KubeDB to reconfigure TLS/SSL encryption.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).
+
+- Install [`cert-manager`](https://cert-manager.io/docs/installation/) v1.0.0 or later in your cluster to manage your SSL/TLS certificates.
+
+- Now, install the KubeDB CLI on your workstation and the KubeDB operator in your cluster by following the steps [here](/docs/setup/README.md).
+
+- To keep things isolated, we use a separate namespace called `demo` throughout this tutorial.
+
+  ```bash
+  $ kubectl create ns demo
+  namespace/demo created
+  ```
+
+> Note: YAML files used in this tutorial are stored in [docs/examples/Elasticsearch](https://github.com/kubedb/docs/tree/{{< param "info.version" >}}/docs/examples/Elasticsearch) folder in GitHub repository [kubedb/docs](https://github.com/kubedb/docs).
+
+## Add TLS to an Elasticsearch database
+
+Here, we are going to create an Elasticsearch cluster without TLS on the HTTP layer and then reconfigure the database to use TLS.
+
+### Deploy Elasticsearch without TLS
+
+In this section, we are going to deploy an Elasticsearch cluster without TLS on the HTTP layer. In the next few sections we will reconfigure TLS using an `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create,
+
+```yaml
+apiVersion: kubedb.com/v1
+kind: Elasticsearch
+metadata:
+  name: es-demo
+  namespace: demo
+spec:
+  deletionPolicy: WipeOut
+  enableSSL: false
+  replicas: 3
+  storage:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 1Gi
+    storageClassName: local-path
+  storageType: Durable
+  version: xpack-8.11.1
+```
+
+Let's create the `Elasticsearch` CR we have shown above,
+
+```bash
+$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/Elasticsearch.yaml
+Elasticsearch.kubedb.com/es-demo created
+```
+
+Now, wait until `es-demo` has status `Ready`, i.e.,
+
+```bash
+$ kubectl get es -n demo -w
+NAME      VERSION        STATUS   AGE
+es-demo   xpack-8.11.1   Ready    26h
+```
+
+Now, let's exec into one of the Elasticsearch pods and verify from its configuration that HTTP TLS is disabled.
+```bash +$ kubectl exec -n demo es-demo-0 -- \ + cat /usr/share/elasticsearch/config/elasticsearch.yml | grep -A 2 -i xpack.security + +Defaulted container "elasticsearch" out of: elasticsearch, init-sysctl (init), config-merger (init) +xpack.security.enabled: true + +xpack.security.transport.ssl.enabled: true +xpack.security.transport.ssl.verification_mode: certificate +xpack.security.transport.ssl.key: certs/transport/tls.key +xpack.security.transport.ssl.certificate: certs/transport/tls.crt +xpack.security.transport.ssl.certificate_authorities: [ "certs/transport/ca.crt" ] + +xpack.security.http.ssl.enabled: false + +``` +Here, transport TLS is enabled but HTTP TLS is disabled. So, internal node to node communication is encrypted but communication from client to node is not encrypted. + +### Create Issuer/ ClusterIssuer + +Now, We are going to create an example `Issuer` that will be used to enable SSL/TLS in Elasticsearch. Alternatively, you can follow this [cert-manager tutorial](https://cert-manager.io/docs/configuration/ca/) to create your own `Issuer`. + +- Start off by generating a ca certificates using openssl. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca/O=kubedb" +Generating a RSA private key +................+++++ +........................+++++ +writing new private key to './ca.key' +----- +``` + +- Now we are going to create a ca-secret using the certificate files that we have just generated. + +```bash +$ kubectl create secret tls es-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/es-ca created +``` + +Now, Let's create an `Issuer` using the `Elasticsearch-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: es-issuer + namespace: demo +spec: + ca: + secretName: es-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/Elasticsearch-issuer.yaml +issuer.cert-manager.io/es-issuer created +``` + +### Create ElasticsearchOpsRequest + +In order to add TLS to the Elasticsearch, we have to create a `ElasticsearchOpsRequest` CRO with our created issuer. Below is the YAML of the `ElasticsearchOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: add-tls + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: es-demo + tls: + issuerRef: + apiGroup: "cert-manager.io" + kind: Issuer + name: es-issuer + certificates: + - alias: http + subject: + organizations: + - kubedb.com + emailAddresses: + - abc@kubedb.com +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `es-demo` cluster. +- `spec.type` specifies that we are performing `ReconfigureTLS` on Elasticsearch. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. +- `spec.tls.certificates` specifies the certificates. You can learn more about this field from [here](/docs/guides/elasticsearch/concepts/Elasticsearch.md#spectls). + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +> **Note:** For combined Elasticsearch, you just need to refer Elasticsearch combined object in `databaseRef` field. To learn more about combined Elasticsearch, please visit [here](/docs/guides/elasticsearch/clustering/combined-cluster/index.md). 
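+
+Before applying the ops request, it is a good idea to confirm that the referenced `Issuer` has become ready; otherwise certificate issuance will stall. A quick check (the output below is illustrative and the `AGE` value will differ in your cluster):
+
+```bash
+$ kubectl get issuer -n demo es-issuer
+NAME        READY   AGE
+es-issuer   True    1m
+```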
+ +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/Elasticsearch-add-tls.yaml +Elasticsearchopsrequest.ops.kubedb.com/add-tls created +``` + +#### Verify TLS Enabled Successfully + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CRO, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo +NAME TYPE STATUS AGE +add-tls ReconfigureTLS Successful 73m +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe Elasticsearchopsrequest -n demo add-tls +Name: add-tls +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-28T05:16:12Z + Generation: 1 + Resource Version: 884868 + UID: 2fa3b86a-4cfa-4e51-8cde-c5d7508c3eb0 +Spec: + Apply: IfReady + Database Ref: + Name: es-demo + Tls: + Certificates: + Alias: http + Email Addresses: + abc@kubedb.com + Subject: + Organizations: + kubedb.com + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: es-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2025-11-28T05:16:12Z + Message: Elasticsearch ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2025-11-28T05:16:20Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2025-11-28T05:16:20Z + Message: ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReadyCondition + Last Transition Time: 2025-11-28T05:16:20Z + Message: issue condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssueCondition + Last Transition Time: 2025-11-28T05:16:20Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2025-11-28T05:16:32Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-0 + Last Transition Time: 2025-11-28T05:16:32Z + Message: create es client; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-0 + Last Transition Time: 2025-11-28T05:16:32Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-0 + Last Transition Time: 2025-11-28T05:17:42Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-28T05:16:57Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-1 + Last Transition Time: 2025-11-28T05:16:57Z + Message: create es client; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-1 + Last Transition Time: 2025-11-28T05:16:57Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-1 + Last Transition Time: 2025-11-28T05:17:22Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 
1
+    Status:                True
+    Type:                  PodExists--es-demo-2
+    Last Transition Time:  2025-11-28T05:17:22Z
+    Message:               create es client; ConditionStatus:True; PodName:es-demo-2
+    Observed Generation:   1
+    Status:                True
+    Type:                  CreateEsClient--es-demo-2
+    Last Transition Time:  2025-11-28T05:17:22Z
+    Message:               evict pod; ConditionStatus:True; PodName:es-demo-2
+    Observed Generation:   1
+    Status:                True
+    Type:                  EvictPod--es-demo-2
+    Last Transition Time:  2025-11-28T05:17:47Z
+    Message:               Successfully restarted all the nodes
+    Observed Generation:   1
+    Reason:                RestartNodes
+    Status:                True
+    Type:                  RestartNodes
+    Last Transition Time:  2025-11-28T05:17:51Z
+    Message:               Successfully reconfigured TLS
+    Observed Generation:   1
+    Reason:                Successful
+    Status:                True
+    Type:                  Successful
+  Observed Generation:     1
+  Phase:                   Successful
+Events:
+```
+
+Now, let's exec into one of the Elasticsearch pods and verify from its configuration that TLS is now enabled on the HTTP layer.
+
+```bash
+$ kubectl exec -n demo es-demo-0 -- \
+  cat /usr/share/elasticsearch/config/elasticsearch.yml | grep -A 2 -i xpack.security
+
+Defaulted container "elasticsearch" out of: elasticsearch, init-sysctl (init), config-merger (init)
+xpack.security.enabled: true
+
+xpack.security.transport.ssl.enabled: true
+xpack.security.transport.ssl.verification_mode: certificate
+xpack.security.transport.ssl.key: certs/transport/tls.key
+xpack.security.transport.ssl.certificate: certs/transport/tls.crt
+xpack.security.transport.ssl.certificate_authorities: [ "certs/transport/ca.crt" ]
+
+xpack.security.http.ssl.enabled: true
+xpack.security.http.ssl.key: certs/http/tls.key
+xpack.security.http.ssl.certificate: certs/http/tls.crt
+xpack.security.http.ssl.certificate_authorities: [ "certs/http/ca.crt" ]
+```
+
+We can see from the above output that `xpack.security.http.ssl.enabled: true`, which means TLS is now enabled for HTTP communication.
+
+## Rotate Certificate
+
+Now we are going to rotate the certificate of this cluster. First, let's check the current expiration date of the certificate.
+
+```bash
+$ kubectl exec -n demo es-demo-0 -- /bin/sh -c '\
+    openssl s_client -connect localhost:9200 -showcerts < /dev/null 2>/dev/null | \
+    sed -ne "/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p" > /tmp/server.crt && \
+    openssl x509 -in /tmp/server.crt -noout -enddate'
+Defaulted container "elasticsearch" out of: elasticsearch, init-sysctl (init), config-merger (init)
+notAfter=Feb 26 05:16:15 2026 GMT
+```
+
+So, the certificate will expire at `Feb 26 05:16:15 2026 GMT`.
+
+### Create ElasticsearchOpsRequest
+
+Now we are going to rotate it using an `ElasticsearchOpsRequest`. Below is the YAML of the ops request that we are going to create,
+
+```yaml
+apiVersion: ops.kubedb.com/v1alpha1
+kind: ElasticsearchOpsRequest
+metadata:
+  name: esops-rotate
+  namespace: demo
+spec:
+  type: ReconfigureTLS
+  databaseRef:
+    name: es-demo
+  tls:
+    rotateCertificates: true
+```
+
+Here,
+
+- `spec.databaseRef.name` specifies that we are performing the reconfigure TLS operation on `es-demo`.
+- `spec.type` specifies that we are performing `ReconfigureTLS` on our cluster.
+- `spec.tls.rotateCertificates` specifies that we want to rotate the certificates of this Elasticsearch cluster (a quick way to confirm the rotation is shown right after this list).
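+
+> **Note:** After this ops request reaches the `Successful` phase, you can re-run the same expiry check used above to confirm that the certificate was actually re-issued; the `notAfter` date should move forward. For example:
+
+```bash
+# Same command as before; run it again once the rotation has completed.
+$ kubectl exec -n demo es-demo-0 -- /bin/sh -c '\
+    openssl s_client -connect localhost:9200 -showcerts < /dev/null 2>/dev/null | \
+    sed -ne "/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p" > /tmp/server.crt && \
+    openssl x509 -in /tmp/server.crt -noout -enddate'
+notAfter=...
+```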
+ +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/esops-rotate.yaml +Elasticsearchopsrequest.ops.kubedb.com/esops-rotate created +``` + +#### Verify Certificate Rotated Successfully + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CRO, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo esops-rotate +NAME TYPE STATUS AGE +esops-rotate ReconfigureTLS Successful 85m + +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe Elasticsearchopsrequest -n demo esops-rotate +Name: esops-rotate +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-28T07:02:38Z + Generation: 1 + Resource Version: 893511 + UID: 43503dc9-ddeb-4569-a8a9-b10a96feeb60 +Spec: + Apply: IfReady + Database Ref: + Name: es-demo + Tls: + Rotate Certificates: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2025-11-28T07:02:38Z + Message: Elasticsearch ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2025-11-28T07:02:41Z + Message: successfully add issuing condition to all the certificates + Observed Generation: 1 + Reason: IssueCertificatesSucceeded + Status: True + Type: IssueCertificatesSucceeded + Last Transition Time: 2025-11-28T07:02:46Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2025-11-28T07:02:46Z + Message: ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReadyCondition + Last Transition Time: 2025-11-28T07:02:47Z + Message: issue condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssueCondition + Last Transition Time: 2025-11-28T07:02:47Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2025-11-28T07:02:56Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-0 + Last Transition Time: 2025-11-28T07:02:56Z + Message: create es client; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-0 + Last Transition Time: 2025-11-28T07:02:56Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-0 + Last Transition Time: 2025-11-28T07:04:06Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-28T07:03:21Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-1 + Last Transition Time: 2025-11-28T07:03:21Z + Message: create es client; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-1 + Last Transition Time: 2025-11-28T07:03:21Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 
1 + Status: True + Type: EvictPod--es-demo-1 + Last Transition Time: 2025-11-28T07:03:46Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-2 + Last Transition Time: 2025-11-28T07:03:46Z + Message: create es client; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-2 + Last Transition Time: 2025-11-28T07:03:46Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-2 + Last Transition Time: 2025-11-28T07:04:11Z + Message: Successfully restarted all the nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2025-11-28T07:04:15Z + Message: Successfully reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: +``` + + + +As we can see from the above output, the certificate has been rotated successfully. + +## Change Issuer/ClusterIssuer + +Now, we are going to change the issuer of this database. + +- Let's create a new ca certificate and key using a different subject `CN=ca-update,O=kubedb-updated`. + +```bash +$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./ca.key -out ./ca.crt -subj "/CN=ca-updated/O=kubedb-updated" +.+........+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*........+.....+......+...+.+..............+....+..+.+...+......+.....+.........+............+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.......+........+.......+...+......+.....+..........+..+.........+......+....+...+..+....+..+.......+............+...+..+...+.+............+..+................+.....+................+.....+.+........+.+.....+.........................+........+......+....+...........+.+....................+.+..+......+......+...+...+...+......+.+...+.........+.....+.......+...+..+.............+.....+.+..............+......+.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +..+........+...+...............+...+....+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*...+...+...+...................+.....+.+......+.....+.........+....+...+.....+...+.......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*....+...+..+............+....+..+...+..........+.........+......+.........+...........+....+..+.+..+.......+.....+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +``` + +- Now we are going to create a new ca-secret using the certificate files that we have just generated. + +```bash +$ kubectl create secret tls es-new-ca \ + --cert=ca.crt \ + --key=ca.key \ + --namespace=demo +secret/es-new-ca created + +``` + +Now, Let's create a new `Issuer` using the `mongo-new-ca` secret that we have just created. The `YAML` file looks like this: + +```yaml +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: es-new-issuer + namespace: demo +spec: + ca: + secretName: es-new-ca +``` + +Let's apply the `YAML` file: + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/Elasticsearch-new-issuer.yaml +issuer.cert-manager.io/es-new-issuer created +``` + +### Create ElasticsearchOpsRequest + +In order to use the new issuer to issue new certificates, we have to create a `ElasticsearchOpsRequest` CRO with the newly created issuer. 
Below is the YAML of the `ElasticsearchOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-update-issuer + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: es-demo + tls: + issuerRef: + name: es-new-issuer + kind: Issuer + apiGroup: "cert-manager.io" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `es-demo` cluster. +- `spec.type` specifies that we are performing `ReconfigureTLS` on our Elasticsearch. +- `spec.tls.issuerRef` specifies the issuer name, kind and api group. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/Elasticsearch-update-tls-issuer.yaml +Elasticsearchpsrequest.ops.kubedb.com/esops-update-issuer created +``` + +#### Verify Issuer is changed successfully + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CRO, + +```bash +$ kubectl get Elasticsearchopsrequests -n demo esops-update-issuer +NAME TYPE STATUS AGE +esops-update-issuer ReconfigureTLS Successful 6m28s +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed. + +```bash +$ kubectl describe Elasticsearchopsrequest -n demo esops-update-issuer +Name: esops-update-issuer +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-28T09:32:41Z + Generation: 1 + Resource Version: 905680 + UID: 9abdfdc1-2c7e-4d1d-b226-029c0e6d99fc +Spec: + Apply: IfReady + Database Ref: + Name: es-demo + Tls: + Issuer Ref: + API Group: cert-manager.io + Kind: Issuer + Name: es-new-issuer + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2025-11-28T09:32:41Z + Message: Elasticsearch ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2025-11-28T09:32:49Z + Message: get certificate; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: GetCertificate + Last Transition Time: 2025-11-28T09:32:49Z + Message: ready condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: ReadyCondition + Last Transition Time: 2025-11-28T09:32:49Z + Message: issue condition; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: IssueCondition + Last Transition Time: 2025-11-28T09:32:49Z + Message: Successfully synced all certificates + Observed Generation: 1 + Reason: CertificateSynced + Status: True + Type: CertificateSynced + Last Transition Time: 2025-11-28T09:33:00Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-0 + Last Transition Time: 2025-11-28T09:33:00Z + Message: create es client; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-0 + Last Transition Time: 2025-11-28T09:33:00Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-0 + Last Transition Time: 2025-11-28T09:35:31Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + 
Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-28T09:33:25Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-1 + Last Transition Time: 2025-11-28T09:33:25Z + Message: create es client; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-1 + Last Transition Time: 2025-11-28T09:33:25Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-1 + Last Transition Time: 2025-11-28T09:33:50Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-2 + Last Transition Time: 2025-11-28T09:33:50Z + Message: create es client; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-2 + Last Transition Time: 2025-11-28T09:33:50Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-2 + Last Transition Time: 2025-11-28T09:34:15Z + Message: Successfully restarted all the nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2025-11-28T09:34:21Z + Message: Successfully reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 6m47s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-demo + Warning get certificate; ConditionStatus:True 6m39s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 6m39s KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning issue condition; ConditionStatus:True 6m39s KubeDB Ops-manager Operator issue condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 6m39s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 6m39s KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning issue condition; ConditionStatus:True 6m39s KubeDB Ops-manager Operator issue condition; ConditionStatus:True + Warning get certificate; ConditionStatus:True 6m39s KubeDB Ops-manager Operator get certificate; ConditionStatus:True + Warning ready condition; ConditionStatus:True 6m39s KubeDB Ops-manager Operator ready condition; ConditionStatus:True + Warning issue condition; ConditionStatus:True 6m39s KubeDB Ops-manager Operator issue condition; ConditionStatus:True + Normal CertificateSynced 6m39s KubeDB Ops-manager Operator Successfully synced all certificates + Warning pod exists; ConditionStatus:True; PodName:es-demo-0 6m28s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:True; PodName:es-demo-0 6m28s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-0 + Warning evict pod; ConditionStatus:True; PodName:es-demo-0 6m28s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:False 6m23s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 6m8s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning 
pod exists; ConditionStatus:True; PodName:es-demo-1 6m3s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-1 + Warning create es client; ConditionStatus:True; PodName:es-demo-1 6m3s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-1 + Warning evict pod; ConditionStatus:True; PodName:es-demo-1 6m3s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-1 + Warning create es client; ConditionStatus:False 5m58s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 5m43s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-demo-2 5m38s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-2 + Warning create es client; ConditionStatus:True; PodName:es-demo-2 5m38s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-2 + Warning evict pod; ConditionStatus:True; PodName:es-demo-2 5m38s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-2 + Warning create es client; ConditionStatus:False 5m33s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 5m18s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Normal RestartNodes 5m13s KubeDB Ops-manager Operator Successfully restarted all the nodes + Normal ResumeDatabase 5m7s KubeDB Ops-manager Operator Resuming Elasticsearch demo/es-demo + Normal ResumeDatabase 5m7s KubeDB Ops-manager Operator Successfully resumed Elasticsearch demo/es-demo + Normal Successful 5m7s KubeDB Ops-manager Operator Successfully Reconfigured TLS + Warning pod exists; ConditionStatus:True; PodName:es-demo-0 5m7s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:True; PodName:es-demo-0 5m7s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-0 + Warning evict pod; ConditionStatus:True; PodName:es-demo-0 5m7s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:False 5m2s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 4m47s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-demo-1 4m42s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-1 + Warning create es client; ConditionStatus:True; PodName:es-demo-1 4m42s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-1 + Warning evict pod; ConditionStatus:True; PodName:es-demo-1 4m42s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-1 + Warning create es client; ConditionStatus:False 4m37s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 4m22s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Warning pod exists; ConditionStatus:True; PodName:es-demo-2 4m17s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-2 + Warning create es client; ConditionStatus:True; PodName:es-demo-2 4m17s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-2 + Warning evict pod; ConditionStatus:True; PodName:es-demo-2 4m17s KubeDB Ops-manager 
Operator evict pod; ConditionStatus:True; PodName:es-demo-2 + Warning create es client; ConditionStatus:False 4m12s KubeDB Ops-manager Operator create es client; ConditionStatus:False + Warning create es client; ConditionStatus:True 3m57s KubeDB Ops-manager Operator create es client; ConditionStatus:True + Normal RestartNodes 3m52s KubeDB Ops-manager Operator Successfully restarted all the nodes + +``` + +Now, Let's exec into a Elasticsearch node and find out the ca subject to see if it matches the one we have provided. + +```bash +$ kubectl exec -it -n demo es-demo-0 -- bash +elasticsearch@es-demo-0:~$ openssl x509 -in /usr/share/elasticsearch/config/certs/http/..2025_11_28_09_34_24.3912740802/tls.crt -noout -issuer +issuer=CN = ca-updated, O = kubedb-updated +elasticsearch@es-demo-0:~$ openssl x509 -in /usr/share/elasticsearch/config/certs/transport/..2025_11_28_09_34_24.2105953641/tls.crt -noout -issuer +issuer=CN = ca-updated, O = kubedb-updated + +``` + +We can see from the above output that, the subject name matches the subject name of the new ca certificate that we have created. So, the issuer is changed successfully. + +## Remove TLS from the Database + +Now, we are going to remove TLS from this database using a ElasticsearchOpsRequest. + +### Create ElasticsearchOpsRequest + +Below is the YAML of the `ElasticsearchOpsRequest` CRO that we are going to create, + +```yaml +apiVersion: ops.kubedb.com/v1alpha1 +kind: ElasticsearchOpsRequest +metadata: + name: esops-remove + namespace: demo +spec: + type: ReconfigureTLS + databaseRef: + name: es-demo + tls: + remove: true +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing reconfigure TLS operation on `es-demo` cluster. +- `spec.type` specifies that we are performing `ReconfigureTLS` on Elasticsearch. +- `spec.tls.remove` specifies that we want to remove tls from this cluster. + +Let's create the `ElasticsearchOpsRequest` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/Elasticsearch/reconfigure-tls/esops-remove.yaml +Elasticsearchopsrequest.ops.kubedb.com/esops-remove created +``` + +#### Verify TLS Removed Successfully + +Let's wait for `ElasticsearchOpsRequest` to be `Successful`. Run the following command to watch `ElasticsearchOpsRequest` CRO, + +```bash +$ kubectl get Elasticsearchopsrequest -n demo esops-remove +NAME TYPE STATUS AGE +esops-remove ReconfigureTLS Successful 3m16s + +``` + +We can see from the above output that the `ElasticsearchOpsRequest` has succeeded. If we describe the `ElasticsearchOpsRequest` we will get an overview of the steps that were followed. 
+ +```bash +$ kubectl describe Elasticsearchopsrequest -n demo esops-remove +Name: esops-remove +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: ElasticsearchOpsRequest +Metadata: + Creation Timestamp: 2025-11-28T10:42:00Z + Generation: 1 + Resource Version: 911280 + UID: 7eefbe63-1fcc-4ca3-bb5d-65ec22d7fd9a +Spec: + Apply: IfReady + Database Ref: + Name: es-demo + Tls: + Remove: true + Type: ReconfigureTLS +Status: + Conditions: + Last Transition Time: 2025-11-28T10:42:00Z + Message: Elasticsearch ops request is reconfiguring TLS + Observed Generation: 1 + Reason: ReconfigureTLS + Status: True + Type: ReconfigureTLS + Last Transition Time: 2025-11-28T10:42:14Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-0 + Last Transition Time: 2025-11-28T10:42:14Z + Message: create es client; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-0 + Last Transition Time: 2025-11-28T10:42:14Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-0 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-0 + Last Transition Time: 2025-11-28T10:43:24Z + Message: create es client; ConditionStatus:True + Observed Generation: 1 + Status: True + Type: CreateEsClient + Last Transition Time: 2025-11-28T10:42:34Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-1 + Last Transition Time: 2025-11-28T10:42:34Z + Message: create es client; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-1 + Last Transition Time: 2025-11-28T10:42:34Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-1 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-1 + Last Transition Time: 2025-11-28T10:43:09Z + Message: pod exists; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: PodExists--es-demo-2 + Last Transition Time: 2025-11-28T10:43:09Z + Message: create es client; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: CreateEsClient--es-demo-2 + Last Transition Time: 2025-11-28T10:43:09Z + Message: evict pod; ConditionStatus:True; PodName:es-demo-2 + Observed Generation: 1 + Status: True + Type: EvictPod--es-demo-2 + Last Transition Time: 2025-11-28T10:43:29Z + Message: Successfully restarted all the nodes + Observed Generation: 1 + Reason: RestartNodes + Status: True + Type: RestartNodes + Last Transition Time: 2025-11-28T10:43:33Z + Message: Successfully reconfigured TLS + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 3m43s KubeDB Ops-manager Operator Pausing Elasticsearch demo/es-demo + Warning pod exists; ConditionStatus:True; PodName:es-demo-0 3m29s KubeDB Ops-manager Operator pod exists; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:True; PodName:es-demo-0 3m29s KubeDB Ops-manager Operator create es client; ConditionStatus:True; PodName:es-demo-0 + Warning evict pod; ConditionStatus:True; PodName:es-demo-0 3m29s KubeDB Ops-manager Operator evict pod; ConditionStatus:True; PodName:es-demo-0 + Warning create es client; ConditionStatus:False 3m24s KubeDB Ops-manager Operator create es client; 
ConditionStatus:False
+  Warning  create es client; ConditionStatus:True                      3m14s  KubeDB Ops-manager Operator  create es client; ConditionStatus:True
+  Warning  pod exists; ConditionStatus:True; PodName:es-demo-1         3m9s   KubeDB Ops-manager Operator  pod exists; ConditionStatus:True; PodName:es-demo-1
+  Warning  create es client; ConditionStatus:True; PodName:es-demo-1   3m9s   KubeDB Ops-manager Operator  create es client; ConditionStatus:True; PodName:es-demo-1
+  Warning  evict pod; ConditionStatus:True; PodName:es-demo-1          3m9s   KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:es-demo-1
+  Warning  create es client; ConditionStatus:False                     3m4s   KubeDB Ops-manager Operator  create es client; ConditionStatus:False
+  Warning  create es client; ConditionStatus:True                      2m39s  KubeDB Ops-manager Operator  create es client; ConditionStatus:True
+  Warning  pod exists; ConditionStatus:True; PodName:es-demo-2         2m34s  KubeDB Ops-manager Operator  pod exists; ConditionStatus:True; PodName:es-demo-2
+  Warning  create es client; ConditionStatus:True; PodName:es-demo-2   2m34s  KubeDB Ops-manager Operator  create es client; ConditionStatus:True; PodName:es-demo-2
+  Warning  evict pod; ConditionStatus:True; PodName:es-demo-2          2m34s  KubeDB Ops-manager Operator  evict pod; ConditionStatus:True; PodName:es-demo-2
+  Warning  create es client; ConditionStatus:False                     2m29s  KubeDB Ops-manager Operator  create es client; ConditionStatus:False
+  Warning  create es client; ConditionStatus:True                      2m19s  KubeDB Ops-manager Operator  create es client; ConditionStatus:True
+  Normal   RestartNodes                                                2m14s  KubeDB Ops-manager Operator  Successfully restarted all the nodes
+  Normal   ResumeDatabase                                              2m10s  KubeDB Ops-manager Operator  Resuming Elasticsearch demo/es-demo
+  Normal   ResumeDatabase                                              2m10s  KubeDB Ops-manager Operator  Successfully resumed Elasticsearch demo/es-demo
+  Normal   Successful                                                  2m10s  KubeDB Ops-manager Operator  Successfully Reconfigured TLS
+
+```
+
+Now, let's exec into one of the Elasticsearch nodes and check whether TLS has been disabled.
+
+```bash
+$ kubectl exec -n demo es-demo-0 -- \
+  cat /usr/share/elasticsearch/config/elasticsearch.yml | grep -A 2 -i xpack.security
+
+Defaulted container "elasticsearch" out of: elasticsearch, init-sysctl (init), config-merger (init)
+xpack.security.enabled: true
+
+xpack.security.transport.ssl.enabled: true
+xpack.security.transport.ssl.verification_mode: certificate
+xpack.security.transport.ssl.key: certs/transport/tls.key
+xpack.security.transport.ssl.certificate: certs/transport/tls.crt
+xpack.security.transport.ssl.certificate_authorities: [ "certs/transport/ca.crt" ]
+
+xpack.security.http.ssl.enabled: false
+
+```
+
+So, we can see from the above output that `xpack.security.http.ssl.enabled` is set to `false`, which means TLS is disabled for the HTTP layer. Note that the transport layer TLS settings are still present in `elasticsearch.yml`; Elasticsearch's security features require TLS on the transport layer, so only the HTTP layer encryption is removed.
+
+## Cleaning up
+
+To clean up the Kubernetes resources created by this tutorial, run:
+
+```bash
+kubectl delete elasticsearchopsrequest -n demo add-tls esops-remove esops-rotate esops-update-issuer
+kubectl delete Elasticsearch -n demo es-demo
+kubectl delete issuer -n demo es-issuer es-new-issuer
+kubectl delete ns demo
+```
+
+## Next Steps
+
+- Detail concepts of [Elasticsearch object](/docs/guides/elasticsearch/concepts/elasticsearch.md).
+- Different Elasticsearch topology clustering modes [here](/docs/guides/elasticsearch/clustering/_index.md).
+- Monitor your Elasticsearch database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/elasticsearch/monitoring/using-prometheus-operator.md).
+- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
+
diff --git a/docs/guides/elasticsearch/reconfigure_tls/overview.md b/docs/guides/elasticsearch/reconfigure_tls/overview.md
new file mode 100644
index 000000000..eb330d63c
--- /dev/null
+++ b/docs/guides/elasticsearch/reconfigure_tls/overview.md
@@ -0,0 +1,54 @@
+---
+title: Reconfiguring TLS/SSL Overview
+menu:
+  docs_{{ .version }}:
+    identifier: es-reconfigure-tls-overview
+    name: Overview
+    parent: es-reconfigure-tls-elasticsearch
+    weight: 5
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Reconfiguring TLS of Elasticsearch
+
+This guide gives an overview of how the KubeDB Ops-manager operator reconfigures the TLS configuration of an `Elasticsearch` database, i.e. adds TLS, removes TLS, updates the issuer/cluster issuer or certificates, and rotates the certificates.
+
+## Before You Begin
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Elasticsearch](/docs/guides/elasticsearch/concepts/elasticsearch.md)
+  - [ElasticsearchOpsRequest](/docs/guides/elasticsearch/concepts/elasticsearch-ops-request.md)
+
+## How Reconfiguring Elasticsearch TLS Configuration Process Works
+
+The following diagram shows how the KubeDB Ops-manager operator reconfigures TLS of an `Elasticsearch` database. Open the image in a new tab to see the enlarged version.
+
+
+   Reconfiguring TLS process of Elasticsearch
Fig: Reconfiguring TLS process of Elasticsearch
+
+
+The Reconfiguring Elasticsearch TLS process consists of the following steps:
+
+1. At first, a user creates an `Elasticsearch` Custom Resource Object (CRO).
+
+2. `KubeDB` Provisioner operator watches the `Elasticsearch` CRO.
+
+3. When the operator finds an `Elasticsearch` CR, it creates the required number of `PetSets` and related necessary stuff like secrets, services, etc.
+
+4. Then, in order to reconfigure the TLS configuration of the `Elasticsearch` database, the user creates an `ElasticsearchOpsRequest` CR with the desired information.
+
+5. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CR.
+
+6. When it finds an `ElasticsearchOpsRequest` CR, it pauses the `Elasticsearch` object which is referred from the `ElasticsearchOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Elasticsearch` object during the reconfiguring TLS process.
+
+7. Then the `KubeDB` Ops-manager operator will add, remove, update, or rotate the TLS configuration based on the ops request YAML.
+
+8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `ElasticsearchOpsRequest` CR.
+
+9. After the successful reconfiguring of the `Elasticsearch` TLS, the `KubeDB` Ops-manager operator resumes the `Elasticsearch` object so that the `KubeDB` Provisioner operator can resume its usual operations.
+
+In the next docs, we are going to show a step-by-step guide on reconfiguring the TLS configuration of an Elasticsearch database using the `ElasticsearchOpsRequest` CRD.
\ No newline at end of file

diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md
index 9390e730a..51c86fa9b 100644
--- a/docs/guides/elasticsearch/restart/index.md
+++ b/docs/guides/elasticsearch/restart/index.md
@@ -76,6 +76,72 @@ es-1   2/2     Running   0          6m28s
 es-2   2/2     Running   0          6m28s
 ```
 
+### Populate Data
+
+To connect to our Elasticsearch cluster, let's port-forward the Elasticsearch service to the local machine:
+
+```bash
+$ kubectl port-forward -n demo svc/sample-es 9200
+Forwarding from 127.0.0.1:9200 -> 9200
+Forwarding from [::1]:9200 -> 9200
+```
+
+Keep it running and switch to another terminal window:
+
+```bash
+$ export ELASTIC_USER=$(kubectl get secret -n demo es-demo -o jsonpath='{.data.username}' | base64 -d)
+
+$ export ELASTIC_PASSWORD=$(kubectl get secret -n demo es-demo -o jsonpath='{.data.password}' | base64 -d)
+
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cluster/health?pretty"
+{
+  "cluster_name" : "sample-es",
+  "status" : "green",
+  "timed_out" : false,
+  "number_of_nodes" : 3,
+  "number_of_data_nodes" : 3,
+  "active_primary_shards" : 1,
+  "active_shards" : 2,
+  "relocating_shards" : 0,
+  "initializing_shards" : 0,
+  "unassigned_shards" : 0,
+  "delayed_unassigned_shards" : 0,
+  "number_of_pending_tasks" : 0,
+  "number_of_in_flight_fetch" : 0,
+  "task_max_waiting_in_queue_millis" : 0,
+  "active_shards_percent_as_number" : 100.0
+}
+```
+
+So, our cluster status is green.
Let's create some indices with dummy data: + +```bash +$ curl -XPOST -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/products/_doc?pretty" -H 'Content-Type: application/json' -d ' +{ + "name": "KubeDB", + "vendor": "AppsCode Inc.", + "description": "Database Operator for Kubernetes" +} +' + +$ curl -XPOST -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/companies/_doc?pretty" -H 'Content-Type: application/json' -d ' +{ + "name": "AppsCode Inc.", + "mission": "Accelerate the transition to Containers by building a Kubernetes-native Data Platform", + "products": ["KubeDB", "Stash", "KubeVault", "Kubeform", "ByteBuilders"] +} +' +``` + +Now, let’s verify that the indexes have been created successfully. + +```bash +$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cat/indices?v&s=index&pretty" +health status index uuid pri rep docs.count docs.deleted store.size pri.store.size +green open .geoip_databases oiaZfJA8Q5CihQon0oR8hA 1 1 42 0 81.6mb 40.8mb +green open companies GuGisWJ8Tkqnq8vhREQ2-A 1 1 1 0 11.5kb 5.7kb +green open products wyu-fImDRr-Hk_GXVF7cDw 1 1 1 0 10.6kb 5.3kb +``` # Apply Restart opsRequest @@ -240,55 +306,25 @@ After the restart, reconnect to the database and verify that the previously crea Let's port-forward the port `9200` to local machine: ```bash -$ kubectl port-forward -n demo svc/es-quickstart 9200 +$ kubectl port-forward -n demo svc/es-demo 9200 Forwarding from 127.0.0.1:9200 -> 9200 Forwarding from [::1]:9200 -> 9200 -``` - -Now, our Elasticsearch cluster is accessible at `localhost:9200`. - -**Connection information:** - -- Address: `localhost:9200` -- Username: -```bash -$ kubectl get secret -n demo es-quickstart-elastic-cred -o jsonpath='{.data.username}' | base64 -d -elastic ``` -- Password: - -```bash -$ kubectl get secret -n demo es-quickstart-elastic-cred -o jsonpath='{.data.password}' | base64 -d -vIHoIfHn=!Z8F4gP -``` -Now let's check the health of our Elasticsearch database. +Now let's check the data persistencyof our Elasticsearch database. ```bash -$ curl -XGET -k -u 'elastic:vIHoIfHn=!Z8F4gP' "https://localhost:9200/_cluster/health?pretty" +$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/_cat/indices?v&s=index&pretty" +health status index uuid pri rep docs.count docs.deleted store.size pri.store.size dataset.size +green open companies 02UKouHARfuMs2lZXMkVQQ 1 1 1 0 13.6kb 6.8kb 6.8kb +green open kubedb-system 2Fr26ppkSyy7uJrkfIhzvg 1 1 1 6 433.3kb 191.1kb 191.1kb +green open products XxAYeIKOSLaOqp2rczCwFg 1 1 1 0 12.4kb 6.2kb 6.2kb -{ - "cluster_name" : "es-quickstart", - "status" : "green", - "timed_out" : false, - "number_of_nodes" : 3, - "number_of_data_nodes" : 3, - "active_primary_shards" : 3, - "active_shards" : 6, - "relocating_shards" : 0, - "initializing_shards" : 0, - "unassigned_shards" : 0, - "delayed_unassigned_shards" : 0, - "number_of_pending_tasks" : 0, - "number_of_in_flight_fetch" : 0, - "task_max_waiting_in_queue_millis" : 0, - "active_shards_percent_as_number" : 100.0 -} ``` -From the health information above, we can see that our Elasticsearch cluster's status is `green` which means the cluster is healthy. +As you can see, the previously created indices `companies` and `products` are still present after the restart, confirming data persistence after the restart operation. 
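+
+If you want to double-check the document contents as well (not just the index listing), you can query one of the indices directly; for example, the `products` index should still return the document we inserted earlier. The hit metadata below is illustrative and will differ in your cluster:
+
+```bash
+$ curl -XGET -k -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "https://localhost:9200/products/_search?pretty"
+{
+  ...
+  "hits" : {
+    "total" : { "value" : 1, "relation" : "eq" },
+    "hits" : [
+      {
+        "_index" : "products",
+        "_source" : {
+          "name" : "KubeDB",
+          "vendor" : "AppsCode Inc.",
+          "description" : "Database Operator for Kubernetes"
+        }
+      }
+    ]
+  }
+}
+```
+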
## Cleaning up diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology.md b/docs/guides/elasticsearch/scaling/horizontal/topology.md index 416fc1a13..33c60c80f 100644 --- a/docs/guides/elasticsearch/scaling/horizontal/topology.md +++ b/docs/guides/elasticsearch/scaling/horizontal/topology.md @@ -168,7 +168,7 @@ persistentvolumeclaim/data-es-hscale-topology-master-2 Bound pvc-a9160094-c ``` -We can see from the above output that the Elasticsearch has 2 nodes. +We can see from the above output that the Elasticsearch has 3 nodes. We are now ready to apply the `ElasticsearchOpsRequest` CR to scale this cluster. From c7ceafaf95e26abf7115da38e7679047a1160040 Mon Sep 17 00:00:00 2001 From: Bonusree Date: Mon, 1 Dec 2025 16:53:54 +0600 Subject: [PATCH 13/13] error fixed Signed-off-by: Bonusree --- .../elasticsearch/reconfigure_tls/es-tls.png | Bin 0 -> 56575 bytes .../elasticsearch/reconfigure_tls/overview.md | 36 +++++++++--------- docs/guides/elasticsearch/restart/index.md | 3 +- .../scaling/horizontal/topology.md | 8 ++-- .../scaling/vertical/topology.md | 2 - .../update-version/elasticsearch.md | 4 +- .../volume-expantion/topology.md | 4 +- 7 files changed, 28 insertions(+), 29 deletions(-) create mode 100644 docs/guides/elasticsearch/reconfigure_tls/es-tls.png diff --git a/docs/guides/elasticsearch/reconfigure_tls/es-tls.png b/docs/guides/elasticsearch/reconfigure_tls/es-tls.png new file mode 100644 index 0000000000000000000000000000000000000000..e149322f9c27bdc9109b2513f2eb6eccf0f2360e GIT binary patch literal 56575 zcmeFZWn7eR*ETu|e?&?|M7mT!8l+Pi>F$siy1PLo1O!B+yGu$0q(lWo8l+o9y1QX7 zyzlpZp6C6&ANFtWFZ;uue+)Bo&3Rqdd9HJQ6RM;jg^fXsfj}UzpGu3XAP^|u z5QuB}=-1($T^3J1c)IBzt?i6J;Iv%*cP*9yhXjE@t+rIta?z5P<1?|hWi~RkH%2mh z*gC+`2!w#Jhl7!cHPVIB7-?>4CrGu$1<4LaKTxsF`?K zoA8>_3JYNfc<{jhwn!HvN)KBbJ7+!*LE69L^13DR1) zxH#~!u(-RsGrK=#ws$gTVdLfHWnpD!VP|K8Bbb~$?OcpJnCzVI!w49Zf5#9LN%>i$VGCp+#I=oGgw1GrFBKv(evoT(7>u3^&bUWa`Yq#>{$k zS(KCl|2m#e#MJrf<-bm%igf(@;$N?9EdQR#!N|!Ospjc`6r@!_I@`NCnIQj;aP_5s z45ZyU7h23DKi<^8?h{-C zI0=^jxDt5rA7=#DBS`B6uEsX5!wZ2J*?cN4qUMpZHsz_MHgtJo%XyQLb*(rcgYv^e ztAGG>>MRTTB+FWpFvF(M+CMv;X&pZ~C>WTkHAv8%7yc@FVg^7COcc zL^C66XX@#x$!i)1BRPrS!`0=?TK68_U7{_b8S);%cw_Cjzuqx`c`1_l?_U|aBCpY| z{^}z#=D2ze=)ujtcJ+r#Fd_EUbH?Kqi>qfuDwQP4)$N1hLkjK3YR=BiuCDu!iEt5!G$vTsr50lolUrnbgH9d`U5T!%!-5qf zD?`PY7#L|M686&#g-UefCxcyyj~}zMf9xF`9{zlJaeDjq?b+E`Ha52T`T2F9yWL8) zwYAnGrM_X*&Q4BQ+1bs_&G)IPHA-}bCMV_4YyB_%PT4)=?BJ%qQ(U|8=Lo0QN7vUZ6=C`+7G9_%Sua2=aot{g!8@yVcyfx)+Q77 zI~y!i^*udE<*@u|Vo{-H!b%%(E*Lk7qv3U=vpx55P`UHb!>4U(v z>BiHwu}VJdp$jtvLLa*(8xw)>IA}56Zt&b)SzS#|P8Jgt6}mjzl~3lHs&%GiV{@FS zww_D(acxj;j8`GQWG*Vva*X^iOqA{=v0{Zm{T5`M=so5>`vCy z*JmOK*DTfJwVV7T=zSpIQUD+O98S5P9&V;{+3H+goR=9krNzc_S`WX!C49Cu-RKSX zmqXXl*;!m&?be?q@%8K1B!h^Ei2eP2KR-WwiNvI&jnRsi&9gaqc^i|p1H;3^eSNGC z9+(cl*OtlskeO+;GFaFd$D~?3ba}q%kBNy%Mn)zmD9FUf7{_9m_U_$Y>pjDSW8R-%rAjCD-Mf!o`zxyX3fROPOIx$e zU`y52)xzg{F~!B!1_lN`K4)->Z_UjnCMFa*)0gKnml9G}kIPN}%X;U<$mrGx zlbhEOE-K(7`)VC)^_rV3Xnb!{EiW%OhuoqKbl^$(_3Kx$W*MF+Jg~5^bX+fghqH8b zbrCRXZ50?yPfs5l90cOh(&h&Tqpg45Y}oB7)vqUD(s8*s^UPE_>f51FOiM{k6*O*n z_4@VezP`SPvT+s$7F)l*2I49z_QF|eoC`9=LhwXMNl72z-E3xsU7DGh8Hfug_13K@ zjfkd`mEt?J3ZLrhDfnD|baqke!RUUvG%O@;Se~8um5x;%D`hwM5d${}R zVS90QY-wrv*sSYSSeTXP)dkkrP3crw#lkhyv=tAnXk-@-Sv`JSxwYlc-CkkRCiIaW z5x|UD*^kHwIXgRJ*8B7X-AMc3XRhq$&!2541_uYblDRkY)BV^VKeqN~5k;3!#t6## z^y!mS6uC)TWL0IQtb&3@p_;Ak@W=>1KY#qYcNRFwNlAlNRk^vjQBjIAG8j}j)zt@8 zBL?-Jy9;+htU3n z$;rvOI=)z$`Po^=`Hs7y($3DiFcUYaV2{9-hj}65bFl!^tST!)MIc_l7VVRjjbls{ z@;!kq7bH6S;j%H48SP 
zWgItk>oyT;cgsI1;1?ER=~sNxNb(?kSN(1w=~G%M;(S^}$HS>0Hx|G>yLd=d_}2a8 zSEkwH$4^qWW$^B0`gY{eLFp~?4^e92lOO&qLQ{!+dG})adC2Wiwi*G-T>(d=k^9qL!4y%_R~mM!H|e~A-2e4Y zXYZ_O9BuiEamV;FIth`q#Si|xJaU=;^VJirFElc`>`OaS2v)oP=c4w?hojH}TXR+5 z_r#q%8yrGc4|035|M`tcl?*VTd@nw`+c)zWOV^ELmj8eggpU3HvkWBC#s1dLmH$oB z2v)BD^W{{82^1msGyZFAw>|6n-=!zyiN92Ez4CADOzP|Mzu!)1ocw?FOYhzrPn)0l zgk>ODQH*z?xl;P#0mU@T)^!cad_HmA%IY|JuK)Abe!#?6>?5`jdXKSHo^W@GOv=fT zO&0a(^ONP{7xiJ6>Rf5~*qr{pBS<*Odo|+FMO_B$rscVE@%?zF3(=#+b#b>yQ`8RD zpYeKnO>f New to KubeDB? Please start [here](/docs/README.md). -# Reconfiguring TLS of Kafka +# Reconfiguring TLS of Elasticsearch -This guide will give an overview on how KubeDB Ops-manager operator reconfigures TLS configuration i.e. add TLS, remove TLS, update issuer/cluster issuer or Certificates and rotate the certificates of `Kafka`. +This guide will give an overview on how KubeDB Ops-manager operator reconfigures TLS configuration i.e. add TLS, remove TLS, update issuer/cluster issuer or Certificates and rotate the certificates of `Elasticsearch`. ## Before You Begin - You should be familiar with the following `KubeDB` concepts: -- [Kafka](/docs/guides/kafka/concepts/kafka.md) -- [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) +- [Elasticsearch](/docs/guides/Elasticsearch/concepts/elasticsearch.md) +- [ElasticsearchOpsRequest](/docs/guides/Elasticsearch/concepts/elasticsearch-ops-request.md) -## How Reconfiguring Kafka TLS Configuration Process Works +## How Reconfiguring Elasticsearch TLS Configuration Process Works -The following diagram shows how KubeDB Ops-manager operator reconfigures TLS of a `Kafka`. Open the image in a new tab to see the enlarged version. +The following diagram shows how KubeDB Ops-manager operator reconfigures TLS of a `Elasticsearch`. Open the image in a new tab to see the enlarged version.
-   Reconfiguring TLS process of Kafka -
Fig: Reconfiguring TLS process of Kafka
+   Reconfiguring TLS process of Elasticsearch +
Fig: Reconfiguring TLS process of Elasticsearch
-The Reconfiguring Kafka TLS process consists of the following steps: +The Reconfiguring Elasticsearch TLS process consists of the following steps: -1. At first, a user creates a `Kafka` Custom Resource Object (CRO). +1. At first, a user creates a `Elasticsearch` Custom Resource Object (CRO). -2. `KubeDB` Provisioner operator watches the `Kafka` CRO. +2. `KubeDB` Provisioner operator watches the `Elasticsearch` CRO. -3. When the operator finds a `Kafka` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc. +3. When the operator finds a `Elasticsearch` CR, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc. -4. Then, in order to reconfigure the TLS configuration of the `Kafka` database the user creates a `KafkaOpsRequest` CR with desired information. +4. Then, in order to reconfigure the TLS configuration of the `Elasticsearch` database the user creates a `ElasticsearchOpsRequest` CR with desired information. -5. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CR. +5. `KubeDB` Ops-manager operator watches the `ElasticsearchOpsRequest` CR. -6. When it finds a `KafkaOpsRequest` CR, it pauses the `Kafka` object which is referred from the `KafkaOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Kafka` object during the reconfiguring TLS process. +6. When it finds a `ElasticsearchOpsRequest` CR, it pauses the `Elasticsearch` object which is referred from the `ElasticsearchOpsRequest`. So, the `KubeDB` Provisioner operator doesn't perform any operations on the `Elasticsearch` object during the reconfiguring TLS process. 7. Then the `KubeDB` Ops-manager operator will add, remove, update or rotate TLS configuration based on the Ops Request yaml. -8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `KafkaOpsRequest` CR. +8. Then the `KubeDB` Ops-manager operator will restart all the Pods of the database so that they restart with the new TLS configuration defined in the `ElasticsearchOpsRequest` CR. -9. After the successful reconfiguring of the `Kafka` TLS, the `KubeDB` Ops-manager operator resumes the `Kafka` object so that the `KubeDB` Provisioner operator resumes its usual operations. +9. After the successful reconfiguring of the `Elasticsearch` TLS, the `KubeDB` Ops-manager operator resumes the `Elasticsearch` object so that the `KubeDB` Provisioner operator resumes its usual operations. -In the next docs, we are going to show a step by step guide on reconfiguring TLS configuration of a Kafka database using `KafkaOpsRequest` CRD. \ No newline at end of file +In the next docs, we are going to show a step by step guide on reconfiguring TLS configuration of a Elasticsearch database using `ElasticsearchOpsRequest` CRD. \ No newline at end of file diff --git a/docs/guides/elasticsearch/restart/index.md b/docs/guides/elasticsearch/restart/index.md index 51c86fa9b..464e67333 100644 --- a/docs/guides/elasticsearch/restart/index.md +++ b/docs/guides/elasticsearch/restart/index.md @@ -22,7 +22,8 @@ This guide will demonstrate how to restart an Elasticsearch cluster using an Ops ## Before You Begin -- You need a running Kubernetes cluster and a properly configured `kubectl` command-line tool. If you don’t have a cluster, you can create one using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/). 
+- You need a running Kubernetes cluster and a properly configured `kubectl` command-line tool. If you don’t
+have a cluster, you can create one using [kind](https://kind.sigs.k8s.io/docs/user/quick-start/).

- Install the KubeDB CLI on your workstation and the KubeDB operator in your cluster by following the [installation steps](/docs/setup/README.md).

diff --git a/docs/guides/elasticsearch/scaling/horizontal/topology.md b/docs/guides/elasticsearch/scaling/horizontal/topology.md
index 33c60c80f..8080f872e 100644
--- a/docs/guides/elasticsearch/scaling/horizontal/topology.md
+++ b/docs/guides/elasticsearch/scaling/horizontal/topology.md
@@ -405,7 +405,9 @@ $ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topol
$ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topology.ingest.replicas'
2
```
-**Only ingest nodes after scaling down:**
+From all the above outputs we can see that the replicas of the Topology cluster are `2`. That means we have successfully scaled down the replicas of the Elasticsearch Topology cluster.
+
+Only one node type can be scaled down at a time. So we are scaling down the `ingest` nodes.
```bash
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
@@ -420,7 +422,6 @@ spec:
  topology:
    ingest: 2
```
-From all the above outputs we can see that the replicas of the Topology cluster is `2`. That means we have successfully scaled down the replicas of the Elasticsearch Topology cluster.



@@ -589,8 +590,7 @@ $ kubectl get elasticsearch -n demo es-hscale-topology -o json | jq '.spec.topol

From all the above outputs we can see that the brokers of the Topology Elasticsearch is `3`. That means we have successfully scaled up the replicas of the Elasticsearch Topology cluster.

-
-**Only ingest nodes after scaling up:**
+Only one node type can be scaled up at a time. So we are scaling up the `ingest` nodes.
```yaml
apiVersion: ops.kubedb.com/v1alpha1
kind: ElasticsearchOpsRequest
diff --git a/docs/guides/elasticsearch/scaling/vertical/topology.md b/docs/guides/elasticsearch/scaling/vertical/topology.md
index 73dc38707..43d7b6efa 100644
--- a/docs/guides/elasticsearch/scaling/vertical/topology.md
+++ b/docs/guides/elasticsearch/scaling/vertical/topology.md
@@ -689,6 +689,4 @@ kubectl delete ns demo

- Detail concepts of [Elasticsearch object](/docs/guides/Elasticsearch/concepts/Elasticsearch.md).
- Different Elasticsearch topology clustering modes [here](/docs/guides/Elasticsearch/clustering/_index.md).
- Monitor your Elasticsearch database with KubeDB using [out-of-the-box Prometheus operator](/docs/guides/Elasticsearch/monitoring/using-prometheus-operator.md).
-
-[//]: # (- Monitor your Elasticsearch database with KubeDB using [out-of-the-box builtin-Prometheus](/docs/guides/Elasticsearch/monitoring/using-builtin-prometheus.md).)
- Want to hack on KubeDB? Check our [contribution guidelines](/docs/CONTRIBUTING.md).
diff --git a/docs/guides/elasticsearch/update-version/elasticsearch.md b/docs/guides/elasticsearch/update-version/elasticsearch.md
index 8f99b59bf..9c5979bca 100644
--- a/docs/guides/elasticsearch/update-version/elasticsearch.md
+++ b/docs/guides/elasticsearch/update-version/elasticsearch.md
@@ -38,7 +38,7 @@ namespace/demo created

## Prepare Elasticsearch

-Now, we are going to deploy a `Elasticsearch` replicaset database with version `xpack-8.11.1`.
+Now, we are going to deploy an `Elasticsearch` replicaset database with version `xpack-9.1.3`.
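+
+Before deploying, you can list the `ElasticsearchVersion` objects supported by your KubeDB installation and confirm that both the current and the target versions are available. This is an optional sanity check; the versions you see will depend on your installation:
+
+```bash
+$ kubectl get elasticsearchversions
+```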
### Deploy Elasticsearch

@@ -110,7 +110,7 @@ Here,

- `spec.databaseRef.name` specifies that we are performing operation on `es-demo` Elasticsearch.
- `spec.type` specifies that we are going to perform `UpdateVersion` on our database.
-- `spec.updateVersion.targetVersion` specifies the expected version of the database `xpack-8.16.4`.
+- `spec.updateVersion.targetVersion` specifies the expected version of the database `xpack-9.1.4`.

> **Note:** If you want to update combined Elasticsearch, you just refer to the `Elasticsearch` combined object name in `spec.databaseRef.name`. To create a combined Elasticsearch, you can refer to the [Elasticsearch Combined](/docs/guides/elasticsearch/clustering/combined-cluster/index.md) guide.

diff --git a/docs/guides/elasticsearch/volume-expantion/topology.md b/docs/guides/elasticsearch/volume-expantion/topology.md
index e6bf35509..45cf5ab15 100644
--- a/docs/guides/elasticsearch/volume-expantion/topology.md
+++ b/docs/guides/elasticsearch/volume-expantion/topology.md
@@ -59,7 +59,7 @@ Now, we are going to deploy a `Elasticsearch` combined cluster with version `xpa

### Deploy Elasticsearch

-In this section, we are going to deploy a Elasticsearch topology cluster for broker and controller with 1GB volume. Then, in the next section we will expand its volume to 2GB using `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create,
+In this section, we are going to deploy an Elasticsearch topology cluster with 1Gi volume. Then, in the next section we will expand its volume to 2Gi using `ElasticsearchOpsRequest` CRD. Below is the YAML of the `Elasticsearch` CR that we are going to create,

```yaml
apiVersion: kubedb.com/v1
kind: Elasticsearch
@@ -140,7 +140,7 @@ pvc-bd4b7d5a-8494-4ee2-a25c-697a6f23cb79   1Gi        RWO            Delete
pvc-c9057b3b-4412-467f-8ae5-f6414e0059c3   1Gi        RWO            Delete           Bound    demo/data-es-cluster-master-2       standard                22h
```

-You can see the petsets have 1GB storage, and the capacity of all the persistent volumes are also 1GB.
+You can see the petsets have 1Gi storage, and the capacity of all the persistent volumes is also 1Gi.

We are now ready to apply the `ElasticsearchOpsRequest` CR to expand the volume of this database.
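+
+> **Note:** Volume expansion only works when the `StorageClass` that provisioned these PVCs allows it. Before applying the OpsRequest, you can verify the `allowVolumeExpansion` field of the StorageClass (named `standard` in the output above); it must report `true`:
+
+```bash
+$ kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'
+true
+```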