31 changes: 12 additions & 19 deletions troubleshoot/elasticsearch/increase-capacity-data-node.md
@@ -4,32 +4,27 @@ mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-capacity-data-node.html
applies_to:
stack:
deployment:
eck:
ess:
ece:
self:
products:
- id: elasticsearch
---

# Increase the disk capacity of data nodes [increase-capacity-data-node]

:::::::{tab-set}
:::::::{applies-switch}

::::::{tab-item} {{ech}}
In order to increase the disk capacity of the data nodes in your cluster:
::::::{applies-item} { ess: }
**Collaborator comment:**

These steps work for ECH and ECE (tweaking the first steps - you can use these).

The autoscaling UI has also since changed to a multiselect dropdown (both envs), so the screenshots should be removed. *(screenshots of the new dropdown attached)*

Not sure about the limit reached stuff. Assume it's right?

To increase the disk capacity of the data nodes in your cluster:

1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment.
3. If autoscaling is available but not enabled, enable it. You can do this by clicking the button `Enable autoscaling` on a banner like the one below:
2. On the **Hosted deployments** panel, select the gear under the **Manage deployment** column that corresponds to the name of your deployment.
3. If autoscaling is available but not enabled, enable it by clicking the **Enable autoscaling** button in a banner like the one below:

:::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png
:alt: Autoscaling banner
:screenshot:
:::

Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page.
Or you can go to **Actions > Edit deployment**, check the **Autoscale** checkbox, and click **Save** at the bottom of the page.

:::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png
:alt: Enabling autoscaling
@@ -43,18 +38,16 @@ In order to increase the disk capacity of the data nodes in your cluster:
:screenshot:
:::

or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below:
Alternatively, you can go to **Actions > Edit deployment** and look for the label `LIMIT REACHED` as shown below:

![Autoscaling limits](/troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png "Autoscaling limits reached")

:::{image} /troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png
:alt: Autoscaling limits reached
:screenshot:
:::

If you are seeing the banner click `Update autoscaling settings` to go to the `Edit` page. Otherwise, you are already in the `Edit` page, click `Edit settings` to increase the autoscaling limits. After you perform the change click `save` at the bottom of the page.
If you see the banner, click **Update autoscaling settings** to go to the **Edit** page. Otherwise, if you are already on the **Edit** page, click **Edit settings** to increase the autoscaling limits. After you make the change, click **Save** at the bottom of the page.
::::::

::::::{tab-item} Self-managed
In order to increase the data node capacity in your cluster, you will need to calculate the amount of extra disk space needed.
::::::{applies-item} { self: }
**Collaborator comment:**

Missing ECK steps.

Cursor tells me that the steps for ECK are different ... would be good to get @eedugon's confirmation here.

::::::{applies-item} { eck: }
To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes.

**Option 1: Add more data nodes**

Update the `count` field in your data node NodeSet to add more nodes:

```yaml subs=true
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: {{version.stack}}
  nodeSets:
  - name: data-nodes
    count: 5  # Increase from previous count
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
```

Apply the changes:

```sh
kubectl apply -f your-elasticsearch-manifest.yaml
```

ECK will automatically create the new nodes and {{es}} will relocate shards to balance the load. You can monitor the progress using:

```console
GET /_cat/shards?v&h=state,node&s=state
```

**Option 2: Increase storage size of existing nodes**

If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`:

```yaml subs=true
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: {{version.stack}}
  nodeSets:
  - name: data-nodes
    count: 3
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi  # Increased from previous size
```

Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem will be resized online without restarting {{es}}. Otherwise, you may need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem.

For more information, see [Updating deployments](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [Volume claim templates](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md).
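
Before choosing Option 2, it can help to confirm that the storage class behind your volume claims supports expansion at all. A minimal sketch, assuming the cluster is named `quickstart` and the StorageClass is named `standard` (both placeholders; substitute your own names):

```sh
# Check whether the StorageClass allows volume expansion.
# Prints "true" if PVCs using this class can be resized.
kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'

# List the PVCs that ECK created for the cluster, with their current sizes.
kubectl get pvc -l elasticsearch.k8s.elastic.co/cluster-name=quickstart
```

If the first command prints nothing or `false`, the storage class does not support expansion, and Option 1 (adding nodes) is the safer route.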
::::::

**Contributor comment:**

@shainaraskas, the previous looks good to me!

We could link to our official doc about volume claim templates for ECK and volume expansion: https://www.elastic.co/docs/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates#k8s-volume-claim-templates-update

To increase the data node capacity in your cluster, you need to calculate the amount of extra disk space needed.

1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark:
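
   That lookup can be sketched as follows; the `filter_path` pattern is an assumption and may need adjusting for your cluster:

   ```console
   GET _cluster/settings?include_defaults&flat_settings&filter_path=*.cluster.routing.allocation.disk.watermark.high*
   ```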

37 changes: 9 additions & 28 deletions troubleshoot/elasticsearch/increase-cluster-shard-limit.md
@@ -4,11 +4,6 @@ mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-cluster-shard-limit.html
applies_to:
stack:
deployment:
eck:
ess:
ece:
self:
products:
- id: elasticsearch
---
@@ -23,28 +18,14 @@ You might want to influence this data distribution by configuring the [`cluster.

To fix this issue, complete the following steps:

:::::::{tab-set}
:::::::{applies-switch}

::::::{tab-item} {{ech}}
In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node in the cluster. We’ll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value.
::::::{applies-item} { ess: }
**Collaborator comment:**

This setting is invalid in ESS. *(screenshot attached)*

Technically these steps work, but only because they're being set in an invalid way.

@eedugon would we expect people to ever work around non-whitelisted settings in this way?

Regardless, this is another case where the ECH and self-managed instructions are very similar. The difference between them raises a red flag for me - you can still add nodes in ECH, so checking the target tier and scaling up that tier should also be done before increasing the total number of shards per node. This is the same fix that is causing us grief over here.

**Contributor comment (@eedugon, Dec 31, 2025):**

@shainaraskas: `cluster.routing.allocation.total_shards_per_node` is a dynamic setting. When needed, it's recommended to set it with the cluster settings API rather than defining it statically in `elasticsearch.yml` (as that would require a rolling restart of all nodes).

So, in ECH, even if the setting is not whitelisted, I don't think it's set in an invalid way when set through the cluster settings API.

Anyway, keep this in mind, as I think it's related to the existence of this document:

In the past the default of that setting was 1000, and the most common reason to need it was as a temporary measure to allow an unexpected number of shards to be allocated. That's why this document was super useful. Currently the setting defaults to no limit, so it probably won't be needed anymore, except if a user wants to keep the number of shards under strict control and limits.

> would we expect people to ever work around non-whitelisted settings in this way?

IMO, if the setting is dynamic and there are legitimate use cases for it, I'd say yes, without needing to whitelist it at node config level. But it's just my opinion.

Anyway, this document probably won't be as useful as it was in the past, considering that today `cluster.routing.allocation.total_shards_per_node` has no limit by default.

**Contributor comment:**

Of course, if we still want to document this for ECH, we need to ensure the reader doesn't try to configure `cluster.routing.allocation.total_shards_per_node` as a user setting, because it's not whitelisted; they should do it with the cluster settings API.

**Contributor comment:**

Maybe (final thought here) we can rewrite the introduction a bit so users understand that there's a dynamic cluster setting (`cluster.routing.allocation.total_shards_per_node`) that sets the maximum number of shards a node can handle. In older versions that maximum defaulted to 1000, and that could cause the error `Total number of shards per node has been reached`.

If that setting (`cluster.routing.allocation.total_shards_per_node`) is set, the user might need to increase it if they have exceeded the number of shards on any of the nodes.

And the instructions to set it.... I'd say they are the same regardless of the deployment type (I'd only suggest the dynamic way in this troubleshooting document).
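
The dynamic approach described in this thread could be sketched as follows: first inspect per-node shard counts, then raise the limit through the cluster settings API (never in `elasticsearch.yml` on ECH). The `400` limit is illustrative only:

```console
GET _cat/allocation?v&h=node,shards&s=shards:desc

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.total_shards_per_node": 400
  }
}
```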

To get the shards assigned, you need to increase the number of shards that can be collocated on a node in the cluster. You achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings) and increasing the configured value.

**Use {{kib}}**
You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls.

1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. On the **Hosted deployments** panel, click the name of your deployment.

::::{note}
If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
::::

3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.

:::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
:alt: {{kib}} Console
:screenshot:
:::

4. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings):
1. Inspect the `cluster.routing.allocation.total_shards_per_node` [cluster setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-get-settings):

```console
GET /_cluster/settings?flat_settings
@@ -63,7 +44,7 @@ In order to get the shards assigned we’ll need to increase the number of shard

1. Represents the current configured value for the total number of shards that can reside on one node in the system.

5. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value:
1. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) the value for the total number of shards that can be assigned on one node to a higher value:

```console
PUT _cluster/settings
@@ -77,8 +58,8 @@ In order to get the shards assigned we’ll need to increase the number of shard
1. The new value for the system-wide `total_shards_per_node` configuration is increased from the previous value of `300` to `400`. The `total_shards_per_node` configuration can also be set to `null`, which represents no upper bound with regards to how many shards can be collocated on one node in the system.
::::::

::::::{tab-item} Self-managed
In order to get the shards assigned you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.
::::::{applies-item} { self: }
To get the shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.

To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting:
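
As a sketch, using the placeholder index name `my-index-000001`:

```console
GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
```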

@@ -109,7 +90,7 @@ Alternatively, if adding more nodes to the {{es}} cluster is not desired, inspec
GET /_cluster/settings?flat_settings
```

The response will look like this:
The response looks like this:

```console-result
{
35 changes: 8 additions & 27 deletions troubleshoot/elasticsearch/increase-shard-limit.md
**Collaborator comment:**

Same issue on this page.

@@ -4,11 +4,6 @@ mapped_pages:
- https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-shard-limit.html
applies_to:
stack:
deployment:
eck:
ess:
ece:
self:
products:
- id: elasticsearch
---
@@ -23,28 +18,14 @@ You might want to influence this data distribution by configuring the [index.rou

To fix this issue, complete the following steps:

:::::::{tab-set}
:::::::{applies-switch}

::::::{tab-item} {{ech}}
In order to get the shards assigned we’ll need to increase the number of shards that can be collocated on a node. We’ll achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned.
::::::{applies-item} { ess: }
To get the shards assigned, you need to increase the number of shards that can be collocated on a node. You achieve this by inspecting the configuration for the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) and increasing the configured value for the indices that have shards unassigned.

**Use {{kib}}**
You can run the following steps using either [API console](/explore-analyze/query-filter/tools/console.md) or direct [Elasticsearch API](elasticsearch://reference/elasticsearch/rest-apis/index.md) calls.

1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. On the **Hosted deployments** panel, click the name of your deployment.

::::{note}
If the name of your deployment is disabled your {{kib}} instances might be unhealthy, in which case contact [Elastic Support](https://support.elastic.co). If your deployment doesn’t include {{kib}}, all you need to do is [enable it first](../../deploy-manage/deploy/elastic-cloud/access-kibana.md).
::::

3. Open your deployment’s side navigation menu (placed under the Elastic logo in the upper left corner) and go to **Dev Tools > Console**.

:::{image} /troubleshoot/images/elasticsearch-reference-kibana-console.png
:alt: {{kib}} Console
:screenshot:
:::

4. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards:
1. Inspect the `index.routing.allocation.total_shards_per_node` [index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) for the index with unassigned shards:

```console
GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings
@@ -64,7 +45,7 @@ In order to get the shards assigned we’ll need to increase the number of shard

1. Represents the current configured value for the total number of shards that can reside on one node for the `my-index-000001` index.

5. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value:
1. [Increase](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-settings) the value for the total number of shards that can be assigned on one node to a higher value:

```console
PUT /my-index-000001/_settings
@@ -78,8 +59,8 @@ In order to get the shards assigned we’ll need to increase the number of shard
1. The new value for the `total_shards_per_node` configuration for the `my-index-000001` index is increased from the previous value of `1` to `2`. The `total_shards_per_node` configuration can also be set to `-1`, which represents no upper bound with regards to how many shards of the same index can reside on one node.
::::::

::::::{tab-item} Self-managed
In order to get the shards assigned you can add more nodes to your {{es}} cluster and assing the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.
::::::{applies-item} { self: }
To get the shards assigned, you can add more nodes to your {{es}} cluster and assign the index’s target tier [node role](../../manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md#assign-data-tier) to the new nodes.

To inspect which tier an index is targeting for assignment, use the [get index setting](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-settings) API to retrieve the configured value for the `index.routing.allocation.include._tier_preference` setting:
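
A sketch of that request, with `my-index-000001` as a placeholder index name:

```console
GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
```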
