As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process, you can verify the chart before installing it by running the following command.
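The verification step mentioned above is a keyless `cosign verify` against the release workflow's identity. A sketch of the invocation (the identity regexp, annotation flag, and `1.5.6` tag are taken from the project's release conventions and should be confirmed against the release you install):

```shell
cosign verify public.ecr.aws/karpenter/karpenter:1.5.6 \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-github-workflow-repository=aws/karpenter-provider-aws \
  --certificate-identity-regexp='https://github\.com/aws/karpenter-provider-aws/\.github/workflows/release\.yaml@.+' \
  --annotations version=1.5.6
```

A successful run prints the matched signature payload; a tampered or re-tagged chart fails verification.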
| serviceAccount.name | string | `""` | The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template. |
| serviceMonitor.additionalLabels | object | `{}` | Additional labels for the ServiceMonitor. |
| serviceMonitor.enabled | bool | `false` | Specifies whether a ServiceMonitor should be created. |
| serviceMonitor.endpointConfig | object | `{}` | Configuration on `http-metrics` endpoint for the ServiceMonitor. Not to be used to add additional endpoints. See the Prometheus operator documentation for configurable fields: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#endpoint |
| serviceMonitor.metricRelabelings | list | `[]` | Metric relabelings for the `http-metrics` endpoint on the ServiceMonitor. For more details on metric relabelings, see: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs |
| serviceMonitor.relabelings | list | `[]` | Relabelings for the `http-metrics` endpoint on the ServiceMonitor. For more details on relabelings, see: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config |
| settings | object | `{"batchIdleDuration":"1s","batchMaxDuration":"10s","clusterCABundle":"","clusterEndpoint":"","clusterName":"","eksControlPlane":false,"featureGates":{"nodeRepair":false,"reservedCapacity":false,"spotToSpotConsolidation":false},"interruptionQueue":"","isolatedVPC":false,"preferencePolicy":"Respect","reservedENIs":"0","vmMemoryOverheadPercent":0.075}` | Global settings to configure Karpenter |
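As an illustration, the ServiceMonitor-related values above combine into a values override like the following (a sketch: the `release: prometheus` label and the `interval` setting are examples, not chart defaults):

```yaml
serviceMonitor:
  enabled: true
  additionalLabels:
    release: prometheus   # example: match your Prometheus operator's selector
  endpointConfig:
    interval: 30s         # any Endpoint field from the Prometheus operator API
  metricRelabelings: []
  relabelings: []
```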
**website/content/en/docs/concepts/nodeclasses.md** (5 additions, 11 deletions)
```diff
@@ -141,6 +141,7 @@ spec:
         deleteOnTermination: true
         throughput: 125
         snapshotID: snap-0123456789
+        volumeInitializationRate: 100

   # Optional, use instance-store volumes for node ephemeral-storage
   instanceStorePolicy: RAID0
```
```diff
@@ -295,17 +296,6 @@ spec:
 Note that when using the `Custom` AMIFamily you will need to specify fields **both** in `spec.kubelet` and `spec.userData`.
 {{% /alert %}}

-{{% alert title="Warning" color="warning" %}}
-The Bottlerocket AMIFamily does not support the following fields:
-
-* `evictionSoft`
-* `evictionSoftGracePeriod`
-* `evictionMaxPodGracePeriod`
-
-If any of these fields are specified on a Bottlerocket EC2NodeClass, they will be omitted from generated UserData and ignored for scheduling purposes.
-Support for these fields can be tracked via GitHub issue [#3722](https://github.com/aws/karpenter-provider-aws/issues/3722).
-{{% /alert %}}
-
 #### Pods Per Core

 An alternative way to dynamically set the maximum density of pods on a node is to use the `.spec.kubelet.podsPerCore` value. Karpenter will calculate the pod density during scheduling by multiplying this value by the number of logical cores (vCPUs) on an instance type. This value will also be passed through to the `--pods-per-core` value on kubelet startup to configure the number of allocatable pods the kubelet can assign to the node instance.
```
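For example, the calculation described above works out as follows (a sketch; the value and instance size are illustrative):

```yaml
# With podsPerCore: 4, a 16-vCPU instance type gets a pod density of
# 4 × 16 = 64, and kubelet is started with --pods-per-core=4.
spec:
  kubelet:
    podsPerCore: 4
```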
```diff
@@ -890,6 +880,10 @@ A term can specify an ID or a set of tags to select against.
 When specifying tags, it will select all capacity reservations accessible from the account with matching tags.
 This can be further restricted by specifying an owner ID.

+{{% alert title="Note" color="primary" %}}
+The IAM role Karpenter assumes should have a permissions policy granting the [`ec2:DescribeCapacityReservations`](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html#amazonec2-DescribeCapacityReservations) action to discover capacity reservations and the [`ec2:RunInstances`](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonec2.html#amazonec2-RunInstances) action to launch instances into those capacity reservations.
+{{% /alert %}}
```
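A minimal policy statement covering those two actions might look like this (a sketch only; in practice you would scope `Resource` appropriately and include the rest of the permissions Karpenter requires):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CapacityReservationDiscoveryAndLaunch",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeCapacityReservations",
        "ec2:RunInstances"
      ],
      "Resource": "*"
    }
  ]
}
```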
**website/content/en/docs/faq.md** (2 additions, 2 deletions)
```diff
@@ -17,7 +17,7 @@ See [Configuring NodePools]({{< ref "./concepts/#configuring-nodepools" >}}) for
 AWS is the first cloud provider supported by Karpenter, although it is designed to be used with other cloud providers as well.

 ### Can I write my own cloud provider for Karpenter?
-Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-provider-aws/tree/v1.4.0/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.
+Yes, but there is no documentation yet for it. Start with Karpenter's GitHub [cloudprovider](https://github.com/aws/karpenter-provider-aws/tree/v1.5.6/pkg/cloudprovider) documentation to see how the AWS provider is built, but there are other sections of the code that will require changes too.

 ### What operating system nodes does Karpenter deploy?
 Karpenter uses the OS defined by the [AMI Family in your EC2NodeClass]({{< ref "./concepts/nodeclasses#specamifamily" >}}).
```
```diff
@@ -29,7 +29,7 @@ Karpenter has multiple mechanisms for configuring the [operating system]({{< ref
 Karpenter is flexible to multi-architecture configurations using [well known labels]({{< ref "./concepts/scheduling/#supported-labels">}}).

 ### What RBAC access is required?
-All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.4.0/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.4.0/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.4.0/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.4.0/charts/karpenter/templates/role.yaml) files for details.
+All the required RBAC rules can be found in the Helm chart template. See [clusterrole-core.yaml](https://github.com/aws/karpenter/blob/v1.5.6/charts/karpenter/templates/clusterrole-core.yaml), [clusterrole.yaml](https://github.com/aws/karpenter/blob/v1.5.6/charts/karpenter/templates/clusterrole.yaml), [rolebinding.yaml](https://github.com/aws/karpenter/blob/v1.5.6/charts/karpenter/templates/rolebinding.yaml), and [role.yaml](https://github.com/aws/karpenter/blob/v1.5.6/charts/karpenter/templates/role.yaml) files for details.

 ### Can I run Karpenter outside of a Kubernetes cluster?
 Yes, as long as the controller has network and IAM/RBAC access to the Kubernetes API and your provider API.
```
**website/content/en/docs/getting-started/getting-started-with-karpenter/_index.md** (4 additions, 4 deletions)
````diff
@@ -48,7 +48,7 @@ After setting up the tools, set the Karpenter and Kubernetes version:

 ```bash
 export KARPENTER_NAMESPACE="kube-system"
-export KARPENTER_VERSION="1.4.0"
+export KARPENTER_VERSION="1.5.6"
 export K8S_VERSION="1.32"
 ```

@@ -115,13 +115,13 @@ See [Enabling Windows support](https://docs.aws.amazon.com/eks/latest/userguide/
 As the OCI Helm chart is signed by [Cosign](https://github.com/sigstore/cosign) as part of the release process, you can verify the chart before installing it by running the following command.
````
**website/content/en/docs/getting-started/migrating-from-cas/_index.md** (2 additions, 2 deletions)
````diff
@@ -92,7 +92,7 @@ One for your Karpenter node role and one for your existing node group.
 First set the Karpenter release you want to deploy.

 ```bash
-export KARPENTER_VERSION="1.4.0"
+export KARPENTER_VERSION="1.5.6"
 ```

 We can now generate a full Karpenter deployment yaml from the Helm chart.
````
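The generation step referred to above is typically a `helm template` render of the OCI chart (a sketch: `CLUSTER_NAME` is an assumed variable you must set for your cluster, and the migration guide passes additional `--set` flags not shown here):

```shell
helm template karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE:-kube-system}" \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  > karpenter.yaml
```

Rendering to a file rather than installing directly lets you review and patch the manifests before applying them.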
```diff
@@ -132,7 +132,7 @@ Now that our deployment is ready we can create the karpenter namespace, create t

 ## Create default NodePool

-We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v1.4.0/examples/v1) for specific needs.
+We need to create a default NodePool so Karpenter knows what types of nodes we want for unscheduled workloads. You can refer to some of the [example NodePools](https://github.com/aws/karpenter/tree/v1.5.6/examples/v1) for specific needs.
```
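A minimal default NodePool along the lines of those examples might look like this (a sketch against the v1 API; the requirements, limits, and disruption settings are illustrative, and the referenced `default` EC2NodeClass must exist separately):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default   # must match an EC2NodeClass you have created
  limits:
    cpu: 1000           # cap total provisioned CPU across this pool
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
```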