				Block IO Controller
				===================
Overview
========
The cgroup subsystem "blkio" implements the block IO controller. There seems
to be a need for various kinds of IO control policies (like proportional BW,
max BW) both at leaf nodes as well as at intermediate nodes in a storage
hierarchy. The plan is to use the same cgroup based management interface for
the blkio controller and switch IO policies in the background based on user
options.

Currently two IO control policies are implemented. The first is a
proportional weight, time based division of disk policy. It is implemented
in CFQ, hence this policy takes effect only on leaf nodes when CFQ is being
used. The second is a throttling policy which can be used to specify upper
IO rate limits on devices. This policy is implemented in the generic block
layer and can be used on leaf nodes as well as on higher level logical
devices like device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test by running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
	CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into kernel and mount IO controller (blkio); see
  cgroups.txt, Why are cgroups needed?.

	mount -t tmpfs cgroup_root /sys/fs/cgroup
	mkdir /sys/fs/cgroup/blkio
	mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
	mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
	echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
	echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk (file1,
  file2) and launch two dd threads in different cgroups to read those files.

	sync
	echo 3 > /proc/sys/vm/drop_caches

	dd if=/mnt/sdb/zerofile1 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test1/tasks
	cat /sys/fs/cgroup/blkio/test1/tasks

	dd if=/mnt/sdb/zerofile2 of=/dev/null &
	echo $! > /sys/fs/cgroup/blkio/test2/tasks
	cat /sys/fs/cgroup/blkio/test2/tasks

- At a macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script) at the blkio.disk_time and
  blkio.disk_sectors files of both the test1 and test2 groups. This will tell
  how much disk time (in milliseconds) each group got and how many sectors
  each group dispatched to the disk. We provide fairness in terms of disk
  time, so ideally blkio.disk_time of the cgroups should be in proportion to
  the weight.
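
The comparison above can be scripted. The helper below is only an
illustrative sketch: time_of is a made-up name and the blkio mount point is
passed in explicitly; the only assumption is the "<major>:<minor> <time>"
line format of the group's blkio.time file documented later in this file.

```shell
#!/bin/sh
# time_of GROUP ROOT - sum the per-device disk times from GROUP's blkio.time
# file under the blkio mount point ROOT. Each line of blkio.time looks like
# "<major>:<minor> <milliseconds>".
time_of() {
	awk '{ sum += $2 } END { print sum + 0 }' "$2/$1/blkio.time"
}

# Usage against a mounted blkio hierarchy (assumed mount point):
#   time_of test1 /sys/fs/cgroup/blkio
#   time_of test2 /sys/fs/cgroup/blkio
```

With weights 1000 and 500 as above, the first sum should settle at roughly
twice the second while both dd threads are running.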

Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
	CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
	CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
        mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor>  <bytes_per_second>".

        echo "8:16  1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if rate is throttled to 1MB/s or not.

        # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
        1024+0 records in
        1024+0 records out
        4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

Limits for writes can be set using the blkio.throttle.write_bps_device file.
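
As a sketch, a write limit can be installed the same way as the read limit
above. set_write_limit is a hypothetical helper, not part of the kernel
interface; the cgroup directory is passed in so the only assumption is the
rule format.

```shell
#!/bin/sh
# set_write_limit CGRP MAJ:MIN BPS - write a "<major>:<minor> <bytes_per_second>"
# rule into CGRP/blkio.throttle.write_bps_device.
set_write_limit() {
	echo "$2  $3" > "$1/blkio.throttle.write_bps_device"
}

# Example: cap writes to 8:16 at 2MB/s for the root group (assumed mount point):
#   set_write_limit /sys/fs/cgroup/blkio 8:16 2097152
```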

Hierarchical Cgroups
====================
- Currently none of the IO control policies supports hierarchical groups. The
  cgroup interface does allow creation of hierarchical cgroups, but
  internally the IO policies treat them as a flat hierarchy.

  So creation of a cgroup hierarchy is allowed, but at the backend everything
  is treated as flat. So if somebody creates a hierarchy like the following.

			root
			/  \
		     test1 test2
			|
		     test3

  CFQ and throttling will practically treat all groups at the same level.

				pivot
			     /  /   \  \
			root  test1 test2  test3

  Down the line we can implement hierarchical accounting/control support
  and also introduce a new cgroup file "use_hierarchy" which will control
  whether the cgroup hierarchy is viewed as flat or hierarchical by the
  policy. This is how the memory controller has implemented it as well.

Various user visible config options
===================================
CONFIG_BLK_CGROUP
	- Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
	- Debug help. Right now some additional stats files show up in the
	  cgroup if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
	- Enables group scheduling in CFQ. Currently only 1 level of group
	  creation is allowed.

CONFIG_BLK_DEV_THROTTLING
	- Enable block device throttling support in block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
	- Specifies the per cgroup weight. This is the default weight of the
	  group on all devices unless overridden by a per device rule.
	  (See blkio.weight_device).
	  Currently the allowed range of weights is from 10 to 1000.

- blkio.weight_device
	- One can specify per cgroup per device rules using this interface.
	  These rules override the default value of group weight as specified
	  by blkio.weight.

	  Following is the format.

	  # echo dev_maj:dev_minor weight > blkio.weight_device
	  Configure weight=300 on /dev/sdb (8:16) in this cgroup
	  # echo 8:16 300 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

	  Configure weight=500 on /dev/sda (8:0) in this cgroup
	  # echo 8:0 500 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:0     500
	  8:16    300

	  Remove specific weight for /dev/sda in this cgroup
	  # echo 8:0 0 > blkio.weight_device
	  # cat blkio.weight_device
	  dev     weight
	  8:16    300

- blkio.time
	- disk time allocated to cgroup per device in milliseconds. First
	  two fields specify the major and minor number of the device and
	  third field specifies the disk time allocated to group in
	  milliseconds.

- blkio.sectors
	- number of sectors transferred to/from disk by the group. First
	  two fields specify the major and minor number of the device and
	  third field specifies the number of sectors transferred by the
	  group to/from the device.

- blkio.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

- blkio.io_serviced
	- Number of IOs completed to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of IOs.

- blkio.io_service_time
	- Total amount of time between request dispatch and request completion
	  for the IOs done by this cgroup. This is in nanoseconds to make it
	  meaningful for flash devices too. For devices with queue depth of 1,
	  this time represents the actual service time. When queue_depth > 1,
	  that is no longer true as requests may be served out of order. This
	  may cause the service time for a given IO to include the service time
	  of multiple IOs when served out of order which may result in total
	  io_service_time > actual time elapsed. This time is further divided by
	  the type of operation - read or write, sync or async. First two fields
	  specify the major and minor number of the device, third field
	  specifies the operation type and the fourth field specifies the
	  io_service_time in ns.

- blkio.io_wait_time
	- Total amount of time the IOs for this cgroup spent waiting in the
	  scheduler queues for service. This can be greater than the total time
	  elapsed since it is cumulative io_wait_time for all IOs. It is not a
	  measure of total time the cgroup spent waiting but rather a measure of
	  the wait_time for its individual IOs. For devices with queue_depth > 1
	  this metric does not include the time spent waiting for service once
	  the IO is dispatched to the device but till it actually gets serviced
	  (there might be a time lag here due to re-ordering of requests by the
	  device). This is in nanoseconds to make it meaningful for flash
	  devices too. This time is further divided by the type of operation -
	  read or write, sync or async. First two fields specify the major and
	  minor number of the device, third field specifies the operation type
	  and the fourth field specifies the io_wait_time in ns.
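
The stat files above share a "<major>:<minor> <operation> <value>" line
format. The filter below is a sketch (sum_op is a made-up helper, not part
of the kernel interface) showing how a given operation type can be totaled
across devices.

```shell
#!/bin/sh
# sum_op OP - read blkio.io_service_bytes style lines
# ("<major>:<minor> <op> <value>") on stdin and total the values whose
# operation field matches OP (Read, Write, Sync or Async). The trailing
# "Total <value>" line is skipped because its second field is not an op.
sum_op() {
	awk -v op="$1" '$2 == op { sum += $3 } END { print sum + 0 }'
}

# Example: total bytes read by the group (assumed cgroup path):
#   sum_op Read < /sys/fs/cgroup/blkio/test1/blkio.io_service_bytes
```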

- blkio.io_merged
	- Total number of bios/requests merged into requests belonging to this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.io_queued
	- Total number of requests queued up at any given instant for this
	  cgroup. This is further divided by the type of operation - read or
	  write, sync or async.

- blkio.avg_queue_size
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  The average queue size for this cgroup over the entire time of this
	  cgroup's existence. Queue size samples are taken each time one of the
	  queues of this cgroup gets a timeslice.

- blkio.group_wait_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time the cgroup had to wait since it became busy
	  (i.e., went from 0 to 1 request queued) to get a timeslice for one of
	  its queues. This is different from the io_wait_time which is the
	  cumulative total of the amount of time spent by each IO in that cgroup
	  waiting in the scheduler queue. This is in nanoseconds. If this is
	  read when the cgroup is in a waiting (for timeslice) state, the stat
	  will only report the group_wait_time accumulated till the last time it
	  got a timeslice and will not include the current delta.

- blkio.empty_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time a cgroup spends without any pending
	  requests when not being served, i.e., it does not include any time
	  spent idling for one of the queues of the cgroup. This is in
	  nanoseconds. If this is read when the cgroup is in an empty state,
	  the stat will only report the empty_time accumulated till the last
	  time it had a pending request and will not include the current delta.

- blkio.idle_time
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
	  This is the amount of time spent by the IO scheduler idling for a
	  given cgroup in anticipation of a better request than the existing ones
	  from other queues/cgroups. This is in nanoseconds. If this is read
	  when the cgroup is in an idling state, the stat will only report the
	  idle_time accumulated till the last idle period and will not include
	  the current delta.

- blkio.dequeue
	- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
	  gives statistics about how many times a group was dequeued
	  from the service tree of the device. First two fields specify the
	  major and minor number of the device and the third field specifies
	  the number of times a group was dequeued from a particular device.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

  echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in bytes per second. Rules are per device. Following is
	  the format.

  echo "<major>:<minor>  <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
	- Specifies upper limit on READ rate from the device. IO rate is
	  specified in IO per second. Rules are per device. Following is
	  the format.

  echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
	- Specifies upper limit on WRITE rate to the device. IO rate is
	  specified in IO per second. Rules are per device. Following is
	  the format.

  echo "<major>:<minor>  <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

Note: If both BW and IOPS rules are specified for a device, then IO is
      subjected to both the constraints.
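
A sketch of installing both kinds of rule on one device follows; the kernel
then throttles IO to whichever constraint is hit first. apply_read_limits
is a hypothetical helper and the values are illustrative.

```shell
#!/bin/sh
# apply_read_limits CGRP MAJ:MIN - install a bandwidth rule and an IOPS rule
# on the same device for the given cgroup directory.
apply_read_limits() {
	echo "$2  1048576" > "$1/blkio.throttle.read_bps_device"   # 1MB/s
	echo "$2  100" > "$1/blkio.throttle.read_iops_device"      # 100 IO/s
}

# Example (assumed mount point):
#   apply_read_limits /sys/fs/cgroup/blkio/test1 8:16
```

With 4K reads, the IOPS rule (100 * 4KB = 400KB/s) is the tighter of the two
limits; with 1MB reads, the bandwidth rule dominates.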

- blkio.throttle.io_serviced
	- Number of IOs (bio) completed to/from the disk by the group (as
	  seen by throttling policy). These are further divided by the type
	  of operation - read or write, sync or async. First two fields specify
	  the major and minor number of the device, third field specifies the
	  operation type and the fourth field specifies the number of IOs.

	  blkio.io_serviced does accounting as seen by CFQ and counts are in
	  number of requests (struct request). On the other hand,
	  blkio.throttle.io_serviced counts the number of IOs in terms of the
	  number of bios as seen by the throttling policy. These bios can
	  later be merged by the elevator and the total number of requests
	  completed can be smaller.

- blkio.throttle.io_service_bytes
	- Number of bytes transferred to/from the disk by the group. These
	  are further divided by the type of operation - read or write, sync
	  or async. First two fields specify the major and minor number of the
	  device, third field specifies the operation type and the fourth field
	  specifies the number of bytes.

	  These numbers should be roughly the same as blkio.io_service_bytes
	  as updated by CFQ. The difference between the two is that
	  blkio.io_service_bytes will not be updated if CFQ is not operating
	  on the request queue.

Common files among various policies
-----------------------------------
- blkio.reset_stats
	- Writing an int to this file will result in resetting all the stats
	  for that cgroup.

CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with a sequential workload.
This happens because CFQ idles on a single queue, and a single queue might
not drive deep enough request queue depths to keep the storage busy. In such
scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence be
able to drive higher queue depths and achieve better throughput. That also
means that CFQ provides fairness among groups in terms of IOPS and not in
terms of disk time.

/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything if
slice_idle is enabled.

You can experience an overall throughput drop if you have created multiple
groups and put applications in those groups which are not driving enough
IO to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.
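
The two tunables above can be set with a small helper. iops_mode is a
made-up name, and the iosched directory is passed in explicitly; the
slice_idle and group_idle file names are the ones documented above.

```shell
#!/bin/sh
# iops_mode DIR - switch CFQ to IOPS mode for the disk whose iosched
# directory is DIR (e.g. /sys/block/sdb/queue/iosched). Disables per-queue
# idling; group_idle is left at its default so fairness among groups is
# still preserved.
iops_mode() {
	echo 0 > "$1/slice_idle"
}

# To additionally stop idling on groups (trading group fairness for
# throughput when groups do not drive enough IO):
#   echo 0 > /sys/block/<disk>/queue/iosched/group_idle
```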

What works
==========
- Currently only sync IO queues are supported. All buffered writes are
  still system wide and not per group. Hence we will not see service
  differentiation for buffered writes between groups.