options:
volume_name:
type: string
default: test
description: |
The name of the Gluster volume to create. This will also serve as the name
of the mount point. Example: mount -t glusterfs server1:/test
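# A minimal usage sketch (the application name "gluster" is an assumption;
# substitute whatever name you deployed under):
#   juju config gluster volume_name=data
#   mount -t glusterfs server1:/data /mnt/data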
brick_devices:
type: string
default:
description: |
Space-separated list of devices to format and set up as brick volumes.
These devices will be checked for and used across all service units,
in addition to any volumes attached via the --storage flag during
deployment.
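# A hedged example of supplying a device list (the device paths are
# placeholders; adjust them to your hardware):
#   juju config gluster brick_devices="/dev/sdb /dev/sdc /dev/sdd"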
raid_stripe_width:
type: int
description: |
If a raid array is being used as the block device please enter the
stripe width here so that the filesystem can be aligned properly
at creation time.
For xfs this is generally # of data disks (don't count parity disks).
Note: if not using a raid array this should be left blank.
This setting has no effect for Btrfs or ZFS.
Both raid_stripe_width and raid_stripe_unit must be specified together.
raid_stripe_unit:
type: int
description: |
If a raid array is being used as the block device please enter the
stripe unit here so that the filesystem can be aligned properly at
creation time. Note: if not using a raid array this should be left blank.
For ext4 this corresponds to stride.
Also this should be a power of 2. Otherwise mkfs will fail.
Note: This setting has no effect for Btrfs or ZFS.
Both raid_stripe_width and raid_stripe_unit must be specified together.
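# A worked example, assuming a RAID6 array with a 256 KiB chunk size and
# 8 data disks (10 disks total, 2 parity). The kilobyte units here are an
# assumption; verify against the charm source. The alignment matches what
# you would otherwise pass to mkfs.xfs by hand:
#   mkfs.xfs -d su=256k,sw=8 /dev/md0   # su = stripe unit, sw = data disks
# which corresponds to raid_stripe_unit=256 and raid_stripe_width=8.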
inode_size:
type: int
default: 512
description: |
Inode size can be set at brick filesystem creation time. A larger inode
size is generally helpful where metadata would otherwise spill into
additional I/O operations.
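# For reference, this maps to the inode size given to mkfs at brick
# creation time, e.g. for the default xfs filesystem:
#   mkfs.xfs -i size=512 /dev/sdb   # /dev/sdb is a placeholder brick device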
disk_elevator:
type: string
default: deadline
description: |
The disk elevator or I/O scheduler is used to determine how I/O operations
are handled by the kernel on a per disk level. If you don't know what
this means or is used for, then leaving the default is a safe choice. I/O
intensive applications like Gluster usually benefit from using the deadline
scheduler over CFQ. If you have a hardware raid card or a solid state
drive then setting noop here could improve your performance.
The quick high level summary is: Deadline is primarily concerned with
latency. Noop is primarily concerned with throughput.
Options include:
cfq
deadline
noop
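# To inspect or switch the scheduler manually on a single disk (sdb is a
# placeholder), the kernel exposes it via sysfs:
#   cat /sys/block/sdb/queue/scheduler
#   echo deadline | sudo tee /sys/block/sdb/queue/scheduler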
defragmentation_interval:
type: string
default: "@weekly"
description: |
XFS and other filesystems fragment over time and this can lead to
degraded performance for your cluster. This setting, which takes any
valid crontab period, will set up a defrag schedule. Be aware that this
can generate significant I/O on the cluster, so choose a low-activity
period. ZFS does not have an online defrag option, so this option
mainly concerns Btrfs, ext4, and XFS.
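# Any valid crontab period works; for example, to defragment at 03:00
# every Sunday instead of the "@weekly" default:
#   juju config gluster defragmentation_interval="0 3 * * 0"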
ephemeral_unmount:
type: string
default:
description: |
Cloud instances provide ephemeral storage which is normally mounted
on /mnt.
Setting this option to the path of the ephemeral mountpoint will force
an unmount of the corresponding device so that it can be used as a brick
storage device. This is useful for testing purposes (cloud deployment
is not a typical use case).
cluster_type:
type: string
default: DistributedAndReplicate
description: |
The type of volume to set up. DistributedAndReplicate is sufficient
for most use cases.
Other options include: Distribute,
Arbiter,
Stripe,
StripedAndReplicate,
Disperse,
DistributedAndStripe,
DistributedAndReplicate,
DistributedAndStripedAndReplicate,
DistributedAndDisperse.
For more information about these cluster types please see here:
https://gluster.readthedocs.io/en/latest/Quick-Start-Guide/Architecture/#types-of-volumes
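# Illustrative example of choosing a different layout at deploy time
# (the application name is an assumption):
#   juju deploy gluster --config cluster_type=Disperse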
replication_level:
type: int
default: 3
description: |
This sets how many replicas of the data should be stored in the cluster.
Generally 2 or 3 will be fine for almost all use cases. Greater than 3
could be useful for read-heavy use cases.
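# A quick worked example of how replica count interacts with brick count:
# with replication_level=3 and 6 bricks, Gluster forms 2 replica sets of
# 3 bricks each, so usable capacity is roughly 1/3 of raw capacity.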
filesystem_type:
type: string
default: xfs
description: |
The filesystem type to use for each one of the bricks. Can be either
zfs, xfs, btrfs, or ext4. Note that ZFS only works with Ubuntu 16.04 or
newer. General testing has shown that xfs is the most performant
filesystem.
splitbrain_policy:
type: string
default: size
description: |
Split brain means that the cluster cannot come to consensus on which
version of a file to serve to a client.
This option sets the automatic resolution policy for split-brain files
in replica volumes. Options include: ctime|mtime|size|majority. Set
this to none to disable.
Example: Setting this to "size" will pick the largest size automatically
and delete the smaller size file. "majority" picks a file with identical
mtime and size in more than half the number of bricks in the replica.
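# This appears to map onto Gluster's cluster.favorite-child-policy volume
# option (an assumption; verify against the charm source), e.g.:
#   gluster volume set test cluster.favorite-child-policy size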
source:
type: string
default: ppa:gluster/glusterfs-3.10
description: |
Optional configuration to support use of additional sources such as:
- ppa:myteam/ppa
- cloud:trusty-proposed/kilo
- http://my.archive.com/ubuntu main
The last option should be used in conjunction with the key configuration
option. NOTE: Changing this configuration value after your cluster is
deployed will initiate a rolling upgrade of the servers one by one.
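# Example of switching to a newer PPA after deployment (the version shown
# is illustrative); note this triggers the rolling upgrade described above:
#   juju config gluster source=ppa:gluster/glusterfs-3.12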
key:
type: string
default:
description: |
Key ID to import to the apt keyring to support use with arbitrary source
configuration from outside of Launchpad archives or PPAs.
virtual_ip_addresses:
type: string
description: |
By default NFSv3 is installed and started on all servers. If the server
a client is connected to dies, NFSv3 has no failover mechanism.
Set this option to a space-separated list of IP addresses in CIDR
notation to use as virtual IP addresses on the cluster, such as:
- 10.0.0.6/24 10.0.0.7/24 10.0.0.8/24 10.0.0.9/24
With a cluster of 2 machines this would assign 2 virtual addresses
to each host.
- 2001:0db8:85a3:0000:0000:8a2e:0370:7334/24 2001:cdba:0000:0000:0000:0000:3257:9652/24
With a cluster of 2 machines this would assign 1 virtual address to each
host.
Each server will be assigned (# of IP addresses) / (# of servers) IP
addresses to use for quick failover. The administrator is expected
to create a DNS A record with the virtual IP addresses before setting
this. When clients connect, the charm will attempt to resolve its own
virtual IP and hand out the DNS name instead of the IP address if
possible. Upon failure of a server, CTDB will issue gratuitous ARP
requests and migrate its virtual IP addresses over to a machine that
is still up. Note, however, that this has load-balancing implications:
moving an additional IP address onto a server could increase its load.
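# Hedged example: with 2 servers and the 4 addresses below (illustrative),
# each server receives 2 virtual IPs:
#   juju config gluster virtual_ip_addresses="10.0.0.6/24 10.0.0.7/24 10.0.0.8/24 10.0.0.9/24"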
cifs:
type: boolean
description: |
Enable CIFS exporting of the volume. This option allows Windows
clients to access the volume.
sysctl:
type: string
default: '{ vm.vfs_cache_pressure: 100, vm.swappiness: 1 }'
description: |
YAML-formatted associative array of sysctl key/value pairs to be set
persistently. The default keeps vm.vfs_cache_pressure at 100 and sets
vm.swappiness to 1 to discourage swapping on storage nodes. Example
settings for random and small-file workloads:
'{ vm.dirty_ratio: 5, vm.dirty_background_ratio: 2 }'
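# Example of applying the small-file tuning shown above (the application
# name is an assumption):
#   juju config gluster sysctl="{ vm.dirty_ratio: 5, vm.dirty_background_ratio: 2 }"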