Update version to v1.1.1
Signed-off-by: David Ko <dko@suse.com>
innobead authored and shuo-wu committed Apr 23, 2021
1 parent 299f4ab commit 93f250e
Showing 13 changed files with 304 additions and 55 deletions.
10 changes: 5 additions & 5 deletions charts/longhorn/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: longhorn
-version: 1.1.0
-appVersion: v1.1.0
+version: 1.1.1
+appVersion: v1.1.1
kubeVersion: ">=v1.16.0-r0"
description: Longhorn is a distributed block storage system for Kubernetes.
keywords:
@@ -11,6 +11,7 @@ keywords:
- block
- device
- iscsi
+- nfs
home: https://github.com/longhorn/longhorn
sources:
- https://github.com/longhorn/longhorn
@@ -20,9 +21,8 @@ sources:
- https://github.com/longhorn/longhorn-manager
- https://github.com/longhorn/longhorn-ui
- https://github.com/longhorn/longhorn-tests
+- https://github.com/longhorn/backing-image-manager
maintainers:
- name: Longhorn maintainers
email: maintainers@longhorn.io
-- name: Sheng Yang
-  email: sheng@yasker.org
-icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/longhorn/icon/color/longhorn-icon-color.svg?sanitize=true
+icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/longhorn/icon/color/longhorn-icon-color.png
4 changes: 2 additions & 2 deletions charts/longhorn/README.md
@@ -16,10 +16,10 @@ Longhorn is 100% open source software. Project source code is spread across a nu

## Prerequisites

-1. Docker v1.13+
+1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes v1.16+
3. Make sure `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
-4. Make sure `open-iscsi` has been installed in all nodes of the Kubernetes cluster. For GKE, recommended Ubuntu as guest OS image since it contains `open-iscsi` already.
+4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running, on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.

## Installation
1. Add Longhorn chart repository.
102 changes: 66 additions & 36 deletions charts/longhorn/questions.yml
@@ -17,7 +17,7 @@ questions:
label: Longhorn Manager Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.manager.tag
-default: v1.1.0
+default: v1.1.1
description: "Specify Longhorn Manager Image Tag"
type: string
label: Longhorn Manager Image Tag
@@ -29,7 +29,7 @@ questions:
label: Longhorn Engine Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.engine.tag
-default: v1.1.0
+default: v1.1.1
description: "Specify Longhorn Engine Image Tag"
type: string
label: Longhorn Engine Image Tag
@@ -41,7 +41,7 @@ questions:
label: Longhorn UI Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.ui.tag
-default: v1.1.0
+default: v1.1.1
description: "Specify Longhorn UI Image Tag"
type: string
label: Longhorn UI Image Tag
@@ -65,11 +65,23 @@ questions:
label: Longhorn Share Manager Image Repository
group: "Longhorn Images Settings"
- variable: image.longhorn.shareManager.tag
-default: v1_20201204
+default: v1_20210416
description: "Specify Longhorn Share Manager Image Tag"
type: string
label: Longhorn Share Manager Image Tag
group: "Longhorn Images Settings"
+- variable: image.longhorn.backingImageManager.repository
+default: longhornio/backing-image-manager
+description: "Specify Longhorn Backing Image Manager Image Repository"
+type: string
+label: Longhorn Backing Image Manager Image Repository
+group: "Longhorn Images Settings"
+- variable: image.longhorn.backingImageManager.tag
+default: v1_20210422
+description: "Specify Longhorn Backing Image Manager Image Tag"
+type: string
+label: Longhorn Backing Image Manager Image Tag
+group: "Longhorn Images Settings"
- variable: image.csi.attacher.repository
default: longhornio/csi-attacher
description: "Specify CSI attacher image repository. Leave blank to autodetect."
@@ -279,18 +291,6 @@ The available modes are:
min: 1
max: 20
default: 3
-- variable: defaultSettings.guaranteedEngineCPU
-label: Guaranteed Engine CPU
-description: "Allow Longhorn Instance Managers to have guaranteed CPU allocation. By default 0.25. The value is how many CPUs should be reserved for each Engine/Replica Instance Manager Pod created by Longhorn. For example, 0.1 means one-tenth of a CPU. This will help maintain engine stability during high node workload. It only applies to the Engine/Replica Instance Manager Pods created after the setting took effect.
-In order to prevent unexpected volume crash, you can use the following formula to calculate an appropriate value for this setting:
-'Guaranteed Engine CPU = The estimated max Longhorn volume/replica count on a node * 0.1'.
-The result of above calculation doesn't mean that's the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
-If it's hard to estimate the volume/replica count now, you can leave it with the default value, or allocate 1/8 of total CPU of a node. Then you can tune it when there is no running workload using Longhorn volumes.
-WARNING: After this setting is changed, all the instance managers on all the nodes will be automatically restarted
-WARNING: DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
-group: "Longhorn Default Settings"
-type: float
-default: 0.25
- variable: defaultSettings.defaultLonghornStaticStorageClass
label: Default Longhorn Static StorageClass Name
description: "The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'."
@@ -304,26 +304,6 @@ WARNING: DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
type: int
min: 0
default: 300
-- variable: defaultSettings.taintToleration
-label: Kubernetes Taint Toleration
-description: "To dedicate nodes to store Longhorn replicas and reject other general workloads, set tolerations for Longhorn and add taints for the storage nodes.
-All Longhorn volumes should be detached before modifying toleration settings.
-We recommend setting tolerations during Longhorn deployment because the Longhorn system cannot be operated during the update.
-Multiple tolerations can be set here, and these tolerations are separated by semicolon. For example:
-* `key1=value1:NoSchedule; key2:NoExecute`
-* `:` this toleration tolerates everything because an empty key with operator `Exists` matches all keys, values and effects
-* `key1=value1:` this toleration has empty effect. It matches all effects with key `key1`
-Because `kubernetes.io` is used as the key of all Kubernetes default tolerations, it should not be used in the toleration settings.
-WARNING: DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES!"
-group: "Longhorn Default Settings"
-type: string
-default: ""
-- variable: defaultSettings.priorityClass
-label: Priority Class
-description: "The name of the Priority Class to set on the Longhorn workloads. This can help prevent Longhorn workloads from being evicted under Node Pressure. WARNING: DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
-group: "Longhorn Default Settings"
-type: string
-default: ""
- variable: defaultSettings.autoSalvage
label: Automatic salvage
description: "If enabled, volumes will be automatically salvaged when all the replicas become faulty e.g. due to network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true."
@@ -429,6 +409,56 @@ Warning: This option works only when there is a failed replica in the volume. An
group: "Longhorn Default Settings"
type: boolean
default: "true"
+- variable: defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit
+label: Concurrent Automatic Engine Upgrade Per Node Limit
+description: "This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version."
+group: "Longhorn Default Settings"
+type: int
+min: 0
+default: 0
+- variable: defaultSettings.backingImageCleanupWaitInterval
+label: Backing Image Cleanup Wait Interval
+description: "This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when no replica on the disk is using it."
+group: "Longhorn Default Settings"
+type: int
+min: 0
+default: 60
+- variable: defaultSettings.guaranteedEngineManagerCPU
+label: Guaranteed Engine Manager CPU
+description: "This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each engine manager pod. For example, 10 means 10% of the total CPU on a node will be allocated to each engine manager pod on this node. This will help maintain engine stability during high node workload.
+To prevent unexpected volume engine crashes and guarantee relatively acceptable I/O performance, you can use the following formula to calculate a value for this setting:
+Guaranteed Engine Manager CPU = (the estimated max Longhorn volume engine count on a node * 0.1 / the total allocatable CPUs on the node) * 100.
+The result of the above calculation is not the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
+If it's hard to estimate the usage now, you can leave it at the default value of 12%, then tune it when there is no running workload using Longhorn volumes.
+WARNING:
+- A value of 0 means unsetting the CPU requests for engine manager pods.
+- To accommodate the possible new instance manager pods in a future system upgrade, this integer value ranges from 0 to 40, and its sum with the setting 'Guaranteed Replica Manager CPU' should not be greater than 40.
+- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If the currently available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. The new pods with the latest instance manager image will then be launched.
+- This global setting will be ignored for a node if the field \"EngineManagerCPURequest\" on the node is set.
+- After this setting is changed, all engine manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
+group: "Longhorn Default Settings"
+type: int
+min: 0
+max: 40
+default: 12
+- variable: defaultSettings.guaranteedReplicaManagerCPU
+label: Guaranteed Replica Manager CPU
+description: "This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each replica manager pod. For example, 10 means 10% of the total CPU on a node will be allocated to each replica manager pod on this node. This will help maintain replica stability during high node workload.
+To prevent unexpected volume replica crashes and guarantee relatively acceptable I/O performance, you can use the following formula to calculate a value for this setting:
+Guaranteed Replica Manager CPU = (the estimated max Longhorn volume replica count on a node * 0.1 / the total allocatable CPUs on the node) * 100.
+The result of the above calculation is not the maximum CPU resources the Longhorn workloads require. To fully exploit the Longhorn volume I/O performance, you can allocate/guarantee more CPU resources via this setting.
+If it's hard to estimate the usage now, you can leave it at the default value of 12%, then tune it when there is no running workload using Longhorn volumes.
+WARNING:
+- A value of 0 means unsetting the CPU requests for replica manager pods.
+- To accommodate the possible new instance manager pods in a future system upgrade, this integer value ranges from 0 to 40, and its sum with the setting 'Guaranteed Engine Manager CPU' should not be greater than 40.
+- One more set of instance manager pods may need to be deployed when the Longhorn system is upgraded. If the currently available CPUs of the nodes are not enough for the new instance manager pods, you need to detach the volumes using the oldest instance manager pods so that Longhorn can clean up the old pods automatically and release the CPU resources. The new pods with the latest instance manager image will then be launched.
+- This global setting will be ignored for a node if the field \"ReplicaManagerCPURequest\" on the node is set.
+- After this setting is changed, all replica manager pods using this global setting on all the nodes will be automatically restarted. In other words, DO NOT CHANGE THIS SETTING WITH ATTACHED VOLUMES."
+group: "Longhorn Default Settings"
+type: int
+min: 0
+max: 40
+default: 12
- variable: persistence.defaultClass
default: "true"
description: "Set as default StorageClass for Longhorn"
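For orientation, a minimal values.yaml sketch exercising the options introduced above; the node sizing in the comments is hypothetical, and the numbers simply instantiate the formula from the setting descriptions:

image:
  longhorn:
    backingImageManager:
      repository: longhornio/backing-image-manager
      tag: v1_20210422
defaultSettings:
  # Allow up to 3 engines per node to upgrade to the default engine image at once.
  concurrentAutomaticEngineUpgradePerNodeLimit: 3
  # Wait 60 minutes before cleaning up an unused backing image file on a disk.
  backingImageCleanupWaitInterval: 60
  # Worked example of the CPU formula, assuming a node with 4 allocatable CPUs
  # hosting at most 8 volume engines/replicas: (8 * 0.1 / 4) * 100 = 20 percent;
  # 20 + 20 = 40 stays within the documented limit of 40.
  guaranteedEngineManagerCPU: 20
  guaranteedReplicaManagerCPU: 20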
2 changes: 1 addition & 1 deletion charts/longhorn/templates/clusterrole.yaml
@@ -37,7 +37,7 @@ rules:
- apiGroups: ["longhorn.io"]
resources: ["volumes", "volumes/status", "engines", "engines/status", "replicas", "replicas/status", "settings",
"engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status",
"sharemanagers", "sharemanagers/status"]
"sharemanagers", "sharemanagers/status", "backingimages", "backingimages/status", "backingimagemanagers", "backingimagemanagers/status"]
verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
94 changes: 94 additions & 0 deletions charts/longhorn/templates/crds.yaml
@@ -378,3 +378,97 @@ spec:
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
labels: {{- include "longhorn.labels" . | nindent 4 }}
longhorn-manager: BackingImage
name: backingimages.longhorn.io
spec:
group: longhorn.io
names:
kind: BackingImage
listKind: BackingImageList
plural: backingimages
shortNames:
- lhbi
singular: backingimage
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
x-kubernetes-preserve-unknown-fields: true
status:
x-kubernetes-preserve-unknown-fields: true
subresources:
status: {}
additionalPrinterColumns:
- name: Image
type: string
description: The backing image name
jsonPath: .spec.image
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
labels: {{- include "longhorn.labels" . | nindent 4 }}
longhorn-manager: BackingImageManager
name: backingimagemanagers.longhorn.io
spec:
group: longhorn.io
names:
kind: BackingImageManager
listKind: BackingImageManagerList
plural: backingimagemanagers
shortNames:
- lhbim
singular: backingimagemanager
scope: Namespaced
versions:
- name: v1beta1
served: true
storage: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
x-kubernetes-preserve-unknown-fields: true
status:
x-kubernetes-preserve-unknown-fields: true
subresources:
status: {}
additionalPrinterColumns:
- name: State
type: string
description: The current state of the manager
jsonPath: .status.currentState
- name: Image
type: string
description: The image the manager pod will use
jsonPath: .spec.image
- name: Node
type: string
description: The node the manager is on
jsonPath: .spec.nodeID
- name: DiskUUID
type: string
description: The disk the manager is responsible for
jsonPath: .spec.diskUUID
- name: DiskPath
type: string
description: The disk path the manager is using
jsonPath: .spec.diskPath
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
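To illustrate the printer columns above, a BackingImageManager object surfaced by `kubectl get lhbim` might look like the following. This is a sketch only: the CRD preserves unknown fields, so the exact spec/status layout is an assumption, as are the name, namespace, node, and disk values.

apiVersion: longhorn.io/v1beta1
kind: BackingImageManager
metadata:
  name: backing-image-manager-example        # hypothetical object name
  namespace: longhorn-system                 # assumed install namespace
spec:
  image: longhornio/backing-image-manager:v1_20210422  # default image from questions.yml above
  nodeID: worker-1                           # hypothetical node (Node column)
  diskUUID: c9ef015c-0000-4a7b-9d0d-000000000000       # hypothetical disk (DiskUUID column)
  diskPath: /var/lib/longhorn/               # DiskPath column
status:
  currentState: running                      # assumed value for the State column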
23 changes: 17 additions & 6 deletions charts/longhorn/templates/daemonset-sa.yaml
@@ -13,6 +13,10 @@ spec:
metadata:
labels: {{- include "longhorn.labels" . | nindent 8 }}
app: longhorn-manager
+{{- with .Values.annotations }}
+annotations:
+{{- toYaml . | nindent 8 }}
+{{- end }}
spec:
containers:
- name: longhorn-manager
@@ -30,6 +34,8 @@ spec:
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.instanceManager.repository }}:{{ .Values.image.longhorn.instanceManager.tag }}"
- --share-manager-image
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.shareManager.repository }}:{{ .Values.image.longhorn.shareManager.tag }}"
+- --backing-image-manager-image
+- "{{ template "registry_url" . }}{{ .Values.image.longhorn.backingImageManager.repository }}:{{ .Values.image.longhorn.backingImageManager.tag }}"
- --manager-image
- "{{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}"
- --service-account
@@ -45,9 +51,6 @@ spec:
mountPath: /host/dev/
- name: proc
mountPath: /host/proc/
-- name: varrun
-mountPath: /var/run/
-mountPropagation: Bidirectional
- name: longhorn
mountPath: /var/lib/longhorn/
mountPropagation: Bidirectional
@@ -75,9 +78,6 @@ spec:
- name: proc
hostPath:
path: /proc/
-- name: varrun
-hostPath:
-path: /var/run/
- name: longhorn
hostPath:
path: /var/lib/longhorn/
@@ -88,7 +88,18 @@ spec:
imagePullSecrets:
- name: {{ .Values.privateRegistry.registrySecret }}
{{- end }}
+{{- if .Values.longhornManager.priorityClass }}
+priorityClassName: {{ .Values.longhornManager.priorityClass | quote}}
+{{- end }}
serviceAccountName: longhorn-service-account
+{{- if .Values.longhornManager.tolerations }}
+tolerations:
+{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
+{{- end }}
+{{- if .Values.longhornManager.nodeSelector }}
+nodeSelector:
+{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
+{{- end }}
updateStrategy:
rollingUpdate:
maxUnavailable: "100%"
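A minimal values.yaml sketch driving the new manager pod fields above; the annotation key, PriorityClass name, toleration, and node label are illustrative values, not chart defaults:

annotations:
  example.com/managed-by: longhorn           # hypothetical pod annotation
longhornManager:
  priorityClass: system-cluster-critical     # assumes this PriorityClass exists in the cluster
  tolerations:
  - key: storage
    operator: Equal
    value: longhorn
    effect: NoSchedule
  nodeSelector:
    node.example.com/storage: "true"         # hypothetical node label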