Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Each io queue has a set of io scheduler tunables associated with it. These
tunables control how the io scheduler works. You can find these entries
in:

/sys/block/<device>/queue/iosched

assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted,
you can do so by typing:

# mount none /sys -t sysfs
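
With sysfs mounted, the tunables show up as plain files that can be read and
written. For example, assuming a drive called hda (substitute your own device
name, and treat the value written below as purely illustrative), you could
list the tunables and inspect or change read_expire like so:

# ls /sys/block/hda/queue/iosched
# cat /sys/block/hda/queue/iosched/read_expire
# echo 250 > /sys/block/hda/queue/iosched/read_expire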


********************************************************************************


read_expire (in ms)
-----------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. As we focus mainly on read latencies, this is
tunable. When a read request first enters the io scheduler, it is assigned
a deadline that is the current time + the read_expire value in units of
milliseconds.
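
For example (the number here is only illustrative), with read_expire set to
500 a read entering the scheduler at time T is assigned a deadline of
T + 500 ms; once that deadline has passed, the request is considered expired
and the scheduler moves it, along with a batch of others, to the dispatch
queue (see fifo_batch below).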


write_expire (in ms)
------------

Similar to read_expire mentioned above, but for writes.


fifo_batch
----------

When a read request expires its deadline, we must move some requests from
the sorted io scheduler list to the block device dispatch queue. fifo_batch
controls how many requests we move, based on the cost of each request. A
request is either qualified as a seek or a stream. The io scheduler knows
the last request that was serviced by the drive (or will be serviced right
before this one). See seek_cost and stream_unit.
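
One way to read this (an interpretation, not a description of the exact
accounting): fifo_batch acts as a rough cost budget for a single batch, with
a seeky request counting for more of that budget than a contiguous, streamed
one, so a batch can hold many streamed requests but only a few seeks.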


writes_starved (number of dispatches)
--------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give a preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
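
For example, with writes_starved set to, say, 2 (an illustrative value), the
scheduler may prefer reads for two dispatch rounds while writes are queued;
the next time it has to fill the dispatch queue, the waiting writes get their
turn.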


front_merges (bool)
------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits in the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some work loads, you
may even know that it is a waste of time to spend any time attempting to
front merge requests. Setting front_merges to 0 disables this functionality.
Front merges may still occur due to the cached last_merge hint, but since
that comes at basically 0 cost we leave that on. We simply disable the
rbtree front sector lookup when the io scheduler merge function is called.
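
If you know your workload falls in that category, you can set front_merges
to 0 through the same sysfs directory (hda is again just an example device
name):

# echo 0 > /sys/block/hda/queue/iosched/front_merges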


Nov 11 2002, Jens Axboe <axboe@suse.de>