CFQ ioscheduler tunables
========================

slice_idle
----------
This specifies how long CFQ should idle for the next request on certain cfq
queues (for sequential workloads) and service trees (for random workloads)
before the queue is expired and CFQ selects the next queue to dispatch from.

By default slice_idle is a non-zero value. That means by default we idle on
queues/service trees. This can be very helpful on highly seeky media such as
single-spindle SATA/SAS disks, where we can cut down on the overall number of
seeks and see improved throughput.

Setting slice_idle to 0 will remove all idling at the queue/service tree
level, and one should see an overall improved throughput on faster storage
devices such as multiple SATA/SAS disks in a hardware RAID configuration. The
downside is that the isolation provided from WRITES also goes down and the
notion of IO priority becomes weaker.

So depending on storage and workload, it might be useful to set slice_idle=0.
In general, I think that for SATA/SAS disks and software RAID of SATA/SAS
disks, keeping slice_idle enabled should be useful. For any configuration
where there are multiple spindles behind a single LUN (host-based hardware
RAID controller or a storage array), setting slice_idle=0 might result in
better throughput and acceptable latencies.

CFQ IOPS Mode for group scheduling
==================================
The basic CFQ design is to provide priority-based time slices. A higher
priority process gets a bigger time slice and a lower priority process gets a
smaller time slice. Measuring time becomes harder if the storage is fast and
supports NCQ; in that case it is better to dispatch multiple requests from
multiple cfq queues in the request queue at a time. In such a scenario, it is
not possible to accurately measure the time consumed by a single queue.

What is possible, though, is to measure the number of requests dispatched from
a single queue and also to allow dispatch from multiple cfq queues at the same
time. This effectively becomes fairness in terms of IOPS (IO operations per
second).

If one sets slice_idle=0 and the storage supports NCQ, CFQ internally switches
to IOPS mode and starts providing fairness in terms of the number of requests
dispatched. Note that this mode switching takes effect only for group
scheduling. For non-cgroup users nothing should change.
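
The slice_idle tunable described above is exposed through sysfs. Below is a
minimal sketch, in Python and purely for illustration, of how one might
inspect and change it at run time. The device name "sda" is just an example,
the iosched/slice_idle attribute is only present while the device is using
the cfq scheduler, and writing it requires root.

  #!/usr/bin/env python3
  # Minimal sketch: inspect and tune CFQ's slice_idle through sysfs.
  # Assumptions: the disk is "sda" and is currently using the cfq I/O
  # scheduler; adjust DEVICE for your system.  Writing requires root.

  DEVICE = "sda"  # example device name, not taken from the text above
  SCHED_PATH = f"/sys/block/{DEVICE}/queue/scheduler"
  SLICE_IDLE_PATH = f"/sys/block/{DEVICE}/queue/iosched/slice_idle"

  def read(path):
      with open(path) as f:
          return f.read().strip()

  # The active scheduler is shown in brackets, e.g. "noop deadline [cfq]".
  print("scheduler :", read(SCHED_PATH))
  print("slice_idle:", read(SLICE_IDLE_PATH))

  # Disable idling, e.g. for a multi-spindle hardware RAID LUN.
  with open(SLICE_IDLE_PATH, "w") as f:
      f.write("0")

  print("slice_idle:", read(SLICE_IDLE_PATH))

The same write can of course be done from a shell; the new value applies to
subsequent scheduling decisions without a reboot.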
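
Whether group scheduling fairness ends up time based or IOPS based can be
estimated from the same sysfs attributes. The sketch below assumes a
SCSI-backed device and uses its queue_depth attribute as a rough stand-in for
"supports NCQ"; both the device name and that proxy are illustrative
assumptions, since CFQ does not export the mode directly.

  #!/usr/bin/env python3
  # Minimal sketch: report whether CFQ group scheduling on a device would
  # be in IOPS mode, i.e. slice_idle == 0 on NCQ-capable storage.
  # Assumptions: device "sda" is SCSI-backed, and queue_depth > 1 is used
  # as a rough proxy for NCQ support.

  DEVICE = "sda"  # example device name

  def read_int(path):
      with open(path) as f:
          return int(f.read().strip())

  queue_depth = read_int(f"/sys/block/{DEVICE}/device/queue_depth")
  slice_idle = read_int(f"/sys/block/{DEVICE}/queue/iosched/slice_idle")

  iops_mode = queue_depth > 1 and slice_idle == 0
  print(f"queue_depth={queue_depth} slice_idle={slice_idle}")
  print("group fairness:", "IOPS based" if iops_mode else "time slice based")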