// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Budget Fair Queueing (BFQ) I/O scheduler.
 *
 * Based on ideas and code from CFQ:
 * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
 *
 * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
 *		      Paolo Valente <paolo.valente@unimore.it>
 *
 * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
 *                    Arianna Avanzini <avanzini@google.com>
 *
 * Copyright (C) 2017 Paolo Valente <paolo.valente@linaro.org>
 *
 * BFQ is a proportional-share I/O scheduler, with some extra
 * low-latency capabilities. BFQ also supports full hierarchical
 * scheduling through cgroups. The next paragraphs provide an
 * introduction to BFQ's inner workings. Details on BFQ benefits,
 * usage and limitations can be found in
 * Documentation/block/bfq-iosched.rst.
 *
 * BFQ is a proportional-share storage-I/O scheduling algorithm based
 * on the slice-by-slice service scheme of CFQ. But BFQ assigns
 * budgets, measured in number of sectors, to processes instead of
 * time slices. The device is not granted to the in-service process
 * for a given time slice, but until it has exhausted its assigned
 * budget. This change from the time to the service domain enables BFQ
 * to distribute the device throughput among processes as desired,
 * without any distortion due to throughput fluctuations, or to device
 * internal queueing. BFQ uses an ad hoc internal scheduler, called
 * B-WF2Q+, to schedule processes according to their budgets. More
 * precisely, BFQ schedules queues associated with processes. Each
 * process/queue is assigned a user-configurable weight, and B-WF2Q+
 * guarantees that each queue receives a fraction of the throughput
 * proportional to its weight. Thanks to the accurate policy of
 * B-WF2Q+, BFQ can afford to assign high budgets to I/O-bound
 * processes issuing sequential requests (to boost the throughput),
 * and yet guarantee a low latency to interactive and soft real-time
 * applications.
 *
 * In particular, to provide these low-latency guarantees, BFQ
 * explicitly privileges the I/O of two classes of time-sensitive
 * applications: interactive and soft real-time. In more detail, BFQ
 * behaves this way if the low_latency parameter is set (default
 * configuration). This feature enables BFQ to provide applications in
 * these classes with a very low latency.
 *
 * To implement this feature, BFQ constantly tries to detect whether
 * the I/O requests in a bfq_queue come from an interactive or a soft
 * real-time application. For brevity, in these cases, the queue is
 * said to be interactive or soft real-time. In both cases, BFQ
 * privileges the service of the queue, over that of non-interactive
 * and non-soft-real-time queues. This privileging is performed,
 * mainly, by raising the weight of the queue. So, for brevity, we
 * call weight-raising periods the time periods during which a queue
 * is privileged, because deemed interactive or soft real-time.
 *
 * The detection of soft real-time queues/applications is described in
 * detail in the comments on the function
 * bfq_bfqq_softrt_next_start. On the other hand, the detection of an
 * interactive queue works as follows: a queue is deemed interactive
 * if it is constantly non-empty only for a limited time interval,
 * after which it does become empty. The queue may be deemed
 * interactive again (for a limited time), if it restarts being
 * constantly non-empty, provided that this happens only after the
 * queue has remained empty for a given minimum idle time.
 *
 * By default, BFQ computes automatically the above maximum time
 * interval, i.e., the time interval after which a constantly
 * non-empty queue stops being deemed interactive. Since a queue is
 * weight-raised while it is deemed interactive, this maximum time
 * interval happens to coincide with the (maximum) duration of the
 * weight-raising for interactive queues.
 *
 * Finally, BFQ also features additional heuristics for
 * preserving both a low latency and a high throughput on NCQ-capable,
 * rotational or flash-based devices, and to get the job done quickly
 * for applications consisting of many I/O-bound processes.
 *
 * NOTE: if the main or only goal, with a given device, is to achieve
 * the maximum-possible throughput at all times, then do switch off
 * all low-latency heuristics for that device, by setting low_latency
 * to 0.
 *
 * BFQ is described in [1], where also a reference to the initial,
 * more theoretical paper on BFQ can be found. The interested reader
 * can find in the latter paper full details on the main algorithm, as
 * well as formulas of the guarantees and formal proofs of all the
 * properties. With respect to the version of BFQ presented in these
 * papers, this implementation adds a few more heuristics, such as the
 * ones that guarantee a low latency to interactive and soft real-time
 * applications, and a hierarchical extension based on H-WF2Q+.
 *
 * B-WF2Q+ is based on WF2Q+, which is described in [2], together with
 * H-WF2Q+, while the augmented tree used here to implement B-WF2Q+
 * with O(log N) complexity derives from the one introduced with EEVDF
 * in [3].
 *
 * [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O
 *     Scheduler", Proceedings of the First Workshop on Mobile System
 *     Technologies (MST-2015), May 2015.
 *     http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf
 *
 * [2] Jon C.R. Bennett and H. Zhang, "Hierarchical Packet Fair Queueing
 *     Algorithms", IEEE/ACM Transactions on Networking, 5(5):675-689,
 *     Oct 1997.
 *     http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
 *
 * [3] I. Stoica and H. Abdel-Wahab, "Earliest Eligible Virtual Deadline
 *     First: A Flexible and Accurate Mechanism for Proportional Share
 *     Resource Allocation", technical report.
 *     http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/blkdev.h>
#include <linux/cgroup.h>
#include <linux/ktime.h>
#include <linux/rbtree.h>
#include <linux/ioprio.h>
#include <linux/sbitmap.h>
#include <linux/delay.h>
#include <linux/backing-dev.h>
/* Expiration time of async (0) and sync (1) requests, in ns. */
static const u64 bfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 };
/* Maximum backwards seek (magic number lifted from CFQ), in KiB. */
static const int bfq_back_max = 16 * 1024;
/* Penalty of a backwards seek, in number of sectors. */
static const int bfq_back_penalty = 2;
/* Idling period duration, in ns. */
static u64 bfq_slice_idle = NSEC_PER_SEC / 125;
/* Minimum number of assigned budgets for which stats are safe to compute. */
static const int bfq_stats_min_budgets = 194;
/* Default maximum budget values, in sectors and number of requests. */
static const int bfq_default_max_budget = 16 * 1024;
/*
 * When a sync request is dispatched, the queue that contains that
 * request, and all the ancestor entities of that queue, are charged
 * with the number of sectors of the request. In contrast, if the
 * request is async, then the queue and its ancestor entities are
 * charged with the number of sectors of the request, multiplied by
 * the factor below. This throttles the bandwidth for async I/O,
 * w.r.t. sync I/O, and it is done to counter the tendency of async
 * writes to steal I/O throughput from reads.
 *
 * The current value of this parameter is the result of a tuning with
 * several hardware and software configurations. We tried to find the
 * lowest value for which writes do not cause noticeable problems to
 * reads. In fact, the lower this parameter, the more stable the I/O
 * control, in the following respect. The lower this parameter is, the
 * less the bandwidth enjoyed by a group decreases
 * - when the group does writes, w.r.t. when it does reads;
 * - when other groups do reads, w.r.t. when they do writes.
 */
static const int bfq_async_charge_factor = 3;
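The charging rule above can be sketched as a tiny standalone model. This is not the kernel code (the real charging happens in bfq_serv_to_charge() on struct request); the helper name and its sync/async flag are illustrative only.

```c
#include <assert.h>

/* Illustrative constant, mirroring bfq_async_charge_factor above. */
static const int async_charge_factor = 3;

/*
 * Hypothetical helper: sync requests are charged their own size in
 * sectors; async requests are charged size * async_charge_factor,
 * which throttles async bandwidth relative to sync I/O.
 */
static unsigned long charge_for_request(unsigned long sectors, int is_sync)
{
	if (is_sync)
		return sectors;			/* charged at face value */
	return sectors * async_charge_factor;	/* async pays the factor */
}
```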
/*
 * Time limit for merging (see comments in bfq_setup_cooperator). Set
 * to the slowest value that, in our tests, proved to be effective in
 * removing false positives, while not causing true positives to miss
 * queue merging.
 *
 * As can be deduced from the low time limit below, queue merging, if
 * successful, happens at the very beginning of the I/O of the involved
 * cooperating processes, as a consequence of the arrival of the very
 * first requests from each cooperator. After that, there is very
 * little chance to find cooperators.
 */
static const unsigned long bfq_merge_time_limit = HZ/10;
static struct kmem_cache *bfq_pool;
/* Below this threshold (in ns), we consider thinktime immediate. */
#define BFQ_MIN_TT		(2 * NSEC_PER_MSEC)
/* hw_tag detection: parallel requests threshold and min samples needed. */
#define BFQ_HW_QUEUE_THRESHOLD	3
#define BFQ_HW_QUEUE_SAMPLES	32
#define BFQQ_SEEK_THR		(sector_t)(8 * 100)
#define BFQQ_SECT_THR_NONROT	(sector_t)(2 * 32)
#define BFQ_RQ_SEEKY(bfqd, last_pos, rq) \
	(get_sdist(last_pos, rq) >		\
	 BFQQ_SEEK_THR &&			\
	 (!blk_queue_nonrot(bfqd->queue) ||	\
	  blk_rq_sectors(rq) < BFQQ_SECT_THR_NONROT))
#define BFQQ_CLOSE_THR		(sector_t)(8 * 1024)
#define BFQQ_SEEKY(bfqq)	(hweight32(bfqq->seek_history) > 19)
/*
 * Sync random I/O is likely to be confused with soft real-time I/O,
 * because it is characterized by limited throughput and apparently
 * isochronous arrival pattern. To avoid false positives, queues
 * containing only random (seeky) I/O are prevented from being tagged
 * as soft real-time.
 */
#define BFQQ_TOTALLY_SEEKY(bfqq)	(bfqq->seek_history == -1)
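The seek-history scheme behind BFQQ_SEEKY and BFQQ_TOTALLY_SEEKY can be modeled in isolation: each dispatched request shifts one bit (1 = seeky, 0 = sequential) into a 32-bit window, and popcount decides the verdict. A minimal sketch under that assumption (the helper names are illustrative; the kernel uses hweight32() on bfqq->seek_history):

```c
#include <assert.h>
#include <stdint.h>

/* Portable popcount standing in for the kernel's hweight32(). */
static int popcount32(uint32_t v)
{
	int n = 0;

	while (v) {
		n += v & 1;
		v >>= 1;
	}
	return n;
}

/* Shift one observation into the 32-bit seek history. */
static void record_request(uint32_t *seek_history, int is_seeky)
{
	*seek_history = (*seek_history << 1) | (is_seeky ? 1u : 0u);
}

/* Seeky: more than 19 of the last 32 requests were seeky. */
static int queue_seeky(uint32_t seek_history)
{
	return popcount32(seek_history) > 19;
}

/* Totally seeky: every one of the last 32 requests was seeky. */
static int queue_totally_seeky(uint32_t seek_history)
{
	return seek_history == UINT32_MAX; /* all bits set, i.e. == -1 */
}
```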
/* Min number of samples required to perform peak-rate update */
#define BFQ_RATE_MIN_SAMPLES	32
/* Min observation time interval required to perform a peak-rate update (ns) */
#define BFQ_RATE_MIN_INTERVAL	(300*NSEC_PER_MSEC)
/* Target observation time interval for a peak-rate update (ns) */
#define BFQ_RATE_REF_INTERVAL	NSEC_PER_SEC
/*
 * Shift used for peak-rate fixed precision calculations.
 * With
 * - the current shift: 16 positions
 * - the current type used to store rate: u32
 * - the current unit of measure for rate: [sectors/usec], or, more precisely,
 *   [(sectors/usec) / 2^BFQ_RATE_SHIFT] to take into account the shift,
 * the range of rates that can be stored is
 * [1 / 2^BFQ_RATE_SHIFT, 2^(32 - BFQ_RATE_SHIFT)] sectors/usec =
 * [1 / 2^16, 2^16] sectors/usec = [15e-6, 65536] sectors/usec =
 * [15, 65G] sectors/sec
 * Which, assuming a sector size of 512B, corresponds to a range of
 * [7.5K, 33T] B/sec
 */
#define BFQ_RATE_SHIFT		16
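The fixed-point convention above can be demonstrated with two small conversion helpers. This is a sketch of the representation only, not the kernel's peak-rate estimator; the function names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

#define RATE_SHIFT 16	/* same shift as BFQ_RATE_SHIFT above */

/*
 * Store a rate measured as sectors transferred over elapsed usecs,
 * left-shifted by RATE_SHIFT so that fractional sectors/usec fit in
 * an integer (down to 2^-16 sectors/usec).
 */
static uint32_t rate_to_fixed(uint64_t sectors, uint64_t usecs)
{
	return (uint32_t)((sectors << RATE_SHIFT) / usecs);
}

/* Convert a fixed-point rate back to whole sectors per second. */
static uint64_t fixed_to_sectors_per_sec(uint32_t rate)
{
	/* multiply by 10^6 usec/sec, then drop the fixed-point shift */
	return ((uint64_t)rate * 1000000) >> RATE_SHIFT;
}
```

For example, a rate of exactly 1 sector/usec is stored as 1 << 16, and converts back to 10^6 sectors/sec, matching the range analysis in the comment.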
/*
 * When configured for computing the duration of the weight-raising
 * for interactive queues automatically (see the comments at the
 * beginning of this file), BFQ does it using the following formula:
 * duration = (ref_rate / r) * ref_wr_duration,
 * where r is the peak rate of the device, and ref_rate and
 * ref_wr_duration are two reference parameters. In particular,
 * ref_rate is the peak rate of the reference storage device (see
 * below), and ref_wr_duration is about the maximum time needed, with
 * BFQ and while reading two files in parallel, to load typical large
 * applications on the reference device (see the comments on
 * max_service_from_wr below, for more details on how ref_wr_duration
 * is obtained). In practice, the slower/faster the device at hand
 * is, the more/less it takes to load applications with respect to the
 * reference device. Accordingly, the longer/shorter BFQ grants
 * weight raising to interactive applications.
 *
 * BFQ uses two different reference pairs (ref_rate, ref_wr_duration),
 * depending on whether the device is rotational or non-rotational.
 *
 * In the following definitions, ref_rate[0] and ref_wr_duration[0]
 * are the reference values for a rotational device, whereas
 * ref_rate[1] and ref_wr_duration[1] are the reference values for a
 * non-rotational device. The reference rates are not the actual peak
 * rates of the devices used as a reference, but slightly lower
 * values. The reason for using slightly lower values is that the
 * peak-rate estimator tends to yield slightly lower values than the
 * actual peak rate (it can yield the actual peak rate only if there
 * is only one process doing I/O, and the process does sequential
 * I/O).
 *
 * The reference peak rates are measured in sectors/usec, left-shifted
 * by BFQ_RATE_SHIFT.
 */
static int ref_rate[2] = {14000, 33000};
/*
 * To improve readability, a conversion function is used to initialize
 * the following array, which entails that the array can be
 * initialized only in a function.
 */
static int ref_wr_duration[2];
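The duration formula above, combined with the [3 s, 25 s] clamp applied later in this file, can be sketched as a standalone helper. This is a simplified model in milliseconds, not the kernel implementation (which works in jiffies on the estimated peak rate); the function and parameter names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/*
 * duration = (ref_rate / r) * ref_wr_duration, clamped to [3 s, 25 s]:
 * a device slower than the reference (r < ref_rate) gets a longer
 * weight-raising period, a faster one gets a shorter period.
 */
static uint64_t wr_duration_ms(uint64_t r, uint64_t ref_rate,
			       uint64_t ref_wr_duration_ms)
{
	uint64_t dur = ref_rate * ref_wr_duration_ms / r;

	if (dur < 3000)		/* lower bound: 3 seconds */
		dur = 3000;
	if (dur > 25000)	/* upper bound: 25 seconds */
		dur = 25000;
	return dur;
}
```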
/*
 * BFQ uses the above-detailed, time-based weight-raising mechanism to
 * privilege interactive tasks. This mechanism is vulnerable to the
 * following false positives: I/O-bound applications that will go on
 * doing I/O for much longer than the duration of weight
 * raising. These applications have basically no benefit from being
 * weight-raised at the beginning of their I/O. On the opposite end,
 * while being weight-raised, these applications
 * a) unjustly steal throughput from applications that may actually need
 *    low latency;
 * b) make BFQ uselessly perform device idling; device idling results
 *    in loss of device throughput with most flash-based storage, and may
 *    increase latencies when used purposelessly.
 *
 * BFQ tries to reduce these problems, by adopting the following
 * countermeasure. To introduce this countermeasure, we need first to
 * finish explaining how the duration of weight-raising for
 * interactive tasks is computed.
 *
 * For a bfq_queue deemed as interactive, the duration of weight
 * raising is dynamically adjusted, as a function of the estimated
 * peak rate of the device, so as to be equal to the time needed to
 * execute the 'largest' interactive task we benchmarked so far. By
 * largest task, we mean the task for which each involved process has
 * to do more I/O than for any of the other tasks we benchmarked. This
 * reference interactive task is the start-up of LibreOffice Writer,
 * and in this task each process/bfq_queue needs to have at most ~110K
 * sectors transferred.
 *
 * This last piece of information enables BFQ to reduce the actual
 * duration of weight-raising for at least one class of I/O-bound
 * applications: those doing sequential or quasi-sequential I/O. An
 * example is file copy. In fact, once started, the main I/O-bound
 * processes of these applications usually consume the above 110K
 * sectors in much less time than the processes of an application that
 * is starting, because these I/O-bound processes will greedily devote
 * almost all their CPU cycles only to their target,
 * throughput-friendly I/O operations. This is even more true if BFQ
 * happens to be underestimating the device peak rate, and thus
 * overestimating the duration of weight raising. But, according to
 * our measurements, once transferred 110K sectors, these processes
 * have no right to be weight-raised any longer.
 *
 * Based on the last consideration, BFQ ends weight-raising for a
 * bfq_queue if the latter happens to have received an amount of
 * service at least equal to the following constant. The constant is
 * set to slightly more than 110K, to have a minimum safety margin.
 *
 * This early ending of weight-raising reduces the amount of time
 * during which interactive false positives cause the two problems
 * described at the beginning of these comments.
 */
static const unsigned long max_service_from_wr = 120000;
/*
 * Maximum time between the creation of two queues, for stable merge
 * to be activated (in ms)
 */
static const unsigned long bfq_activation_stable_merging = 600;
/*
 * Minimum time to be waited before evaluating delayed stable merge (in ms)
 */
static const unsigned long bfq_late_stable_merging = 600;
	struct bfq_iocq_bfqq_data *bfqq_data = &bic->bfqq_data[actuator_idx];

	/* Clear bic pointer if bfqq is detached from this bic */
	if (old_bfqq && old_bfqq->bic == bic)
		old_bfqq->bic = NULL;

	if (is_sync)
		bic->bfqq[1][actuator_idx] = bfqq;
	else
		bic->bfqq[0][actuator_idx] = bfqq;

	/*
	 * If bfqq != NULL, then a non-stable queue merge between
	 * bic->bfqq and bfqq is happening here. This causes troubles
	 * in the following case: bic->bfqq has also been scheduled
	 * for a possible stable merge with bic->stable_merge_bfqq,
	 * and bic->stable_merge_bfqq == bfqq happens to
	 * hold. Troubles occur because bfqq may then undergo a split,
	 * thereby becoming eligible for a stable merge. Yet, if
	 * bic->stable_merge_bfqq points exactly to bfqq, then bfqq
	 * would be stably merged with itself. To avoid this anomaly,
	 * we cancel the stable merge if
	 * bic->stable_merge_bfqq == bfqq.
	 */
	if (bfqq && bfqq_data->stable_merge_bfqq == bfqq) {
		/*
		 * Actually, these same instructions are executed also
		 * in bfq_setup_cooperator, in case of abort or actual
		 * execution of a stable merge. We could avoid
		 * repeating these instructions there too, but if we
		 * did so, we would nest even more complexity in this
		 * function.
		 */
		bfq_put_stable_ref(bfqq_data->stable_merge_bfqq);
/**
 * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
 * @icq: the iocontext queue.
 */
static struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
{
	/* bic->icq is the first member, %NULL will convert to %NULL */
	return container_of(icq, struct bfq_io_cq, icq);
}
/**
 * bfq_bic_lookup - search for the bic of the current task on @q.
 * @q: the request queue.
 */
static struct bfq_io_cq *bfq_bic_lookup(struct request_queue *q)
{
	if (!current->io_context)
		return NULL;

	return icq_to_bic(ioc_lookup_icq(q));
}
/*
 * Scheduler run of queue, if there are requests pending and no one in the
 * driver that will restart queueing.
 */
void bfq_schedule_dispatch(struct bfq_data *bfqd)
{
	lockdep_assert_held(&bfqd->lock);
/*
 * Lifted from AS - choose which of rq1 and rq2 that is best served now.
 * We choose the request that is closer to the head right now. Distance
 * behind the head is penalized and only allowed to a certain extent.
 */
static struct request *bfq_choose_req(struct bfq_data *bfqd,
				      struct request *rq1,
				      struct request *rq2,
				      sector_t last)
{
	sector_t s1, s2, d1 = 0, d2 = 0;
	unsigned long back_max;
#define BFQ_RQ1_WRAP	0x01 /* request 1 wraps */
#define BFQ_RQ2_WRAP	0x02 /* request 2 wraps */
	unsigned int wrap = 0; /* bit mask: requests behind the disk head? */

	if (!rq1 || rq1 == rq2)
		return rq2;
	if (!rq2)
		return rq1;

	s1 = blk_rq_pos(rq1);
	s2 = blk_rq_pos(rq2);

	/*
	 * By definition, 1KiB is 2 sectors.
	 */
	back_max = bfqd->bfq_back_max * 2;

	/*
	 * Strict one way elevator _except_ in the case where we allow
	 * short backward seeks which are biased as twice the cost of a
	 * similar forward seek.
	 */
	if (s1 >= last)
		d1 = s1 - last;
	else if (s1 + back_max >= last)
		d1 = (last - s1) * bfqd->bfq_back_penalty;
	else
		wrap |= BFQ_RQ1_WRAP;

	if (s2 >= last)
		d2 = s2 - last;
	else if (s2 + back_max >= last)
		d2 = (last - s2) * bfqd->bfq_back_penalty;
	else
		wrap |= BFQ_RQ2_WRAP;

	/*
	 * By doing switch() on the bit mask "wrap" we avoid having to
	 * check two variables for all permutations: --> faster!
	 */
	switch (wrap) {
	case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
		if (d1 < d2)
			return rq1;
		else if (d2 < d1)
			return rq2;

		/* equal distances: break the tie by sector */
		if (s1 >= s2)
			return rq1;
		else
			return rq2;

	case BFQ_RQ2_WRAP:
		return rq1;
	case BFQ_RQ1_WRAP:
		return rq2;
	case BFQ_RQ1_WRAP|BFQ_RQ2_WRAP: /* both rqs wrapped */
	default:
		/*
		 * Since both rqs are wrapped,
		 * start with the one that's further behind head
		 * (--> only *one* back seek required),
		 * since back seek takes more time than forward.
		 */
		if (s1 <= s2)
			return rq1;
		else
			return rq2;
	}
}
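The distance metric used by bfq_choose_req() can be exercised in a standalone model. This sketch simplifies the wrap case (a wrapped position is simply treated as worst-cost, whereas the kernel code breaks a double-wrap tie by preferring the position further behind the head); the function names and the default parameters (back_max of 16*1024 KiB converted to sectors, penalty 2) are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/*
 * Cost of seeking from 'last' (current head position) to 's':
 * forward seeks cost their distance, short backward seeks cost the
 * distance times the penalty, and far-behind positions (wrap) are
 * treated as worst-cost in this simplified model.
 */
static uint64_t seek_cost(sector_t s, sector_t last,
			  sector_t back_max, unsigned int penalty)
{
	if (s >= last)
		return s - last;		/* forward seek */
	if (s + back_max >= last)
		return (last - s) * penalty;	/* short backward seek */
	return UINT64_MAX;			/* wrap: effectively worst */
}

/* Return 0 if s1 is the better next position, 1 if s2 is. */
static int choose_closer(sector_t s1, sector_t s2, sector_t last)
{
	uint64_t d1 = seek_cost(s1, last, 16 * 1024 * 2, 2);
	uint64_t d2 = seek_cost(s2, last, 16 * 1024 * 2, 2);

	return d1 <= d2 ? 0 : 1;
}
```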
#define BFQ_LIMIT_INLINE_DEPTH 16
#ifdef CONFIG_BFQ_GROUP_IOSCHED
static bool bfqq_request_over_limit(struct bfq_data *bfqd,
				    struct bfq_io_cq *bic, blk_opf_t opf,
				    unsigned int act_idx, int limit)
{
	struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
	struct bfq_entity **entities = inline_entities;
	int alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
	struct bfq_sched_data *sched_data;
	struct bfq_entity *entity;
	struct bfq_queue *bfqq;
	unsigned long wsum;
	bool ret = false;
	int depth;
	int level;

retry:
	spin_lock_irq(&bfqd->lock);
	bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
	if (!bfqq)
		goto out;

	entity = &bfqq->entity;
	if (!entity->on_st_or_in_serv)
		goto out;
	/* +1 for bfqq entity, root cgroup not included */
	depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
	if (depth > alloc_depth) {
		spin_unlock_irq(&bfqd->lock);
		if (entities != inline_entities)
			kfree(entities);
		entities = kmalloc_array(depth, sizeof(*entities), GFP_NOIO);
		if (!entities)
			return false;
		alloc_depth = depth;
		goto retry;
	}
	sched_data = entity->sched_data;
	/* Gather our ancestors as we need to traverse them in reverse order */
	level = 0;
	for_each_entity(entity) {
		/*
		 * If at some level entity is not even active, allow request
		 * queueing so that BFQ knows there's work to do and activate
		 * entities.
		 */
		if (!entity->on_st_or_in_serv)
			goto out;
		/* Uh, more parents than cgroup subsystem thinks? */
		if (WARN_ON_ONCE(level >= depth))
			break;
		entities[level++] = entity;
	}
	WARN_ON_ONCE(level != depth);
	for (level--; level >= 0; level--) {
		entity = entities[level];
		if (level > 0) {
			wsum = bfq_entity_service_tree(entity)->wsum;
		} else {
			int i;

			/*
			 * For bfqq itself we take into account service trees
			 * of all higher priority classes and multiply their
			 * weights so that low prio queue from higher class
			 * gets more requests than high prio queue from lower
			 * class.
			 */
			wsum = 0;
			for (i = 0; i <= bfqq->ioprio_class - 1; i++) {
				wsum = wsum * IOPRIO_BE_NR +
					sched_data->service_tree[i].wsum;
			}
		}
		if (!wsum)
			continue;
		limit = DIV_ROUND_CLOSEST(limit * entity->weight, wsum);
		if (entity->allocated >= limit) {
			bfq_log_bfqq(bfqq->bfqd, bfqq,
				     "too many requests: allocated %d limit %d level %d",
				     entity->allocated, limit, level);
			ret = true;
			break;
		}
	}
out:
	spin_unlock_irq(&bfqd->lock);
	if (entities != inline_entities)
		kfree(entities);
	return ret;
}
#else
static bool bfqq_request_over_limit(struct bfq_data *bfqd,
				    struct bfq_io_cq *bic, blk_opf_t opf,
				    unsigned int act_idx, int limit)
{
	return false;
}
#endif
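The per-level scaling inside bfqq_request_over_limit() can be illustrated on its own: at each hierarchy level the remaining tag budget is split in proportion to the entity's weight within the level's total weight (wsum), rounding to nearest as DIV_ROUND_CLOSEST does. A minimal sketch (illustrative helper, not the kernel macro):

```c
#include <assert.h>

/*
 * Scale 'limit' by weight/wsum with round-to-nearest, mirroring
 * DIV_ROUND_CLOSEST(limit * weight, wsum) in the function above.
 */
static int scale_limit(int limit, int weight, int wsum)
{
	return (limit * weight + wsum / 2) / wsum;
}
```

For instance, with 64 tags available and an entity holding 100 out of a total weight of 400 at its level, the entity's share of the tag budget comes out to 16; an entity that already has 16 or more requests allocated would then be trimmed to a depth of 1.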
/*
 * Async I/O can easily starve sync I/O (both sync reads and sync
 * writes), by consuming all tags. Similarly, storms of sync writes,
 * such as those that sync(2) may trigger, can starve sync reads.
 * Limit depths of async I/O and sync writes so as to counter both
 * problems.
 *
 * Also if a bfq queue or its parent cgroup consume more tags than would be
 * appropriate for their weight, we trim the available tag depth to 1. This
 * avoids a situation where one cgroup can starve another cgroup from tags and
 * thus block service differentiation among cgroups. Note that because the
 * queue / cgroup already has many requests allocated and queued, this does not
 * significantly affect service guarantees coming from the BFQ scheduling
 * algorithm.
 */
static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
{
	struct bfq_data *bfqd = data->q->elevator->elevator_data;
	struct bfq_io_cq *bic = bfq_bic_lookup(data->q);
	unsigned int limit, act_idx;

	/* Sync reads have full depth available */
	if (op_is_sync(opf) && !op_is_write(opf))
		limit = data->q->nr_requests;
	else
		limit = bfqd->async_depths[!!bfqd->wr_busy_queues][op_is_sync(opf)];

	for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
		/* Fast path to check if bfqq is already allocated. */
		if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx))
			continue;

		/*
		 * Does queue (or any parent entity) exceed number of
		 * requests that should be available to it? Heavily
		 * limit depth so that it cannot consume more
		 * available requests and thus starve other entities.
		 */
		if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
			limit = 1;
			break;
		}
	}
		/*
		 * Sort strictly based on sector. Smallest to the left,
		 * largest to the right.
		 */
		if (sector > blk_rq_pos(bfqq->next_rq))
			n = &(*p)->rb_right;
		else if (sector < blk_rq_pos(bfqq->next_rq))
			n = &(*p)->rb_left;
		else
			break;
		p = n;
		bfqq = NULL;
	}
/*
 * The following function is marked as __cold not because it is
 * actually cold, but for the same performance goal described in the
 * comments on the likely() at the beginning of
 * bfq_setup_cooperator(). Unexpectedly, to reach an even lower
 * execution time for the case where this function is not invoked, we
 * had to add an unlikely() in each involved if().
 */
void __cold
bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	struct rb_node **p, *parent;
	struct bfq_queue *__bfqq;

	if (bfqq->pos_root) {
		rb_erase(&bfqq->pos_node, bfqq->pos_root);
		bfqq->pos_root = NULL;
	}
	/* oom_bfqq does not participate in queue merging */
	if (bfqq == &bfqd->oom_bfqq)
		return;

	/*
	 * bfqq cannot be merged any longer (see comments in
	 * bfq_setup_cooperator): no point in adding bfqq into the
	 * position tree.
	 */
	if (bfq_too_late_for_merging(bfqq))
		return;

	if (bfq_class_idle(bfqq))
		return;
	if (!bfqq->next_rq)
		return;
/*
 * The following function returns false either if every active queue
 * must receive the same share of the throughput (symmetric scenario),
 * or, as a special case, if bfqq must receive a share of the
 * throughput lower than or equal to the share that every other active
 * queue must receive. If bfqq does sync I/O, then these are the only
 * two cases where bfqq happens to be guaranteed its share of the
 * throughput even if I/O dispatching is not plugged when bfqq remains
 * temporarily empty (for more details, see the comments in the
 * function bfq_better_to_idle()). For this reason, the return value
 * of this function is used to check whether I/O-dispatch plugging can
 * be avoided.
 *
 * The above first case (symmetric scenario) occurs when:
 * 1) all active queues have the same weight,
 * 2) all active queues belong to the same I/O-priority class,
 * 3) all active groups at the same level in the groups tree have the same
 *    weight,
 * 4) all active groups at the same level in the groups tree have the same
 *    number of children.
 *
 * Unfortunately, keeping the necessary state for evaluating exactly
 * the last two symmetry sub-conditions above would be quite complex
 * and time consuming. Therefore this function evaluates, instead,
 * only the following stronger three sub-conditions, for which it is
 * much easier to maintain the needed state:
 * 1) all active queues have the same weight,
 * 2) all active queues belong to the same I/O-priority class,
 * 3) there is at most one active group.
 * In particular, the last condition is always true if hierarchical
 * support or the cgroups interface are not enabled, thus no state
 * needs to be maintained in this case.
 */
static bool bfq_asymmetric_scenario(struct bfq_data *bfqd,
				    struct bfq_queue *bfqq)
{
	bool smallest_weight = bfqq &&
		bfqq->weight_counter &&
		bfqq->weight_counter ==
		container_of(
			rb_first_cached(&bfqd->queue_weights_tree),
			struct bfq_weight_counter,
			weights_node);

	/*
	 * For queue weights to differ, queue_weights_tree must contain
	 * at least two nodes.
	 */
	bool varied_queue_weights = !smallest_weight &&
		!RB_EMPTY_ROOT(&bfqd->queue_weights_tree.rb_root) &&
		(bfqd->queue_weights_tree.rb_root.rb_node->rb_left ||
		 bfqd->queue_weights_tree.rb_root.rb_node->rb_right);
/*
 * If the weight-counter tree passed as input contains no counter for
 * the weight of the input queue, then add that counter; otherwise just
 * increment the existing counter.
 *
 * Note that weight-counter trees contain few nodes in mostly symmetric
 * scenarios. For example, if all queues have the same weight, then the
 * weight-counter tree for the queues may contain at most one node.
 * This holds even if low_latency is on, because weight-raised queues
 * are not inserted in the tree.
 * In most scenarios, the rate at which nodes are created/destroyed
 * should be low too.
 */
void bfq_weights_tree_add(struct bfq_queue *bfqq)
{
	struct rb_root_cached *root = &bfqq->bfqd->queue_weights_tree;
	struct bfq_entity *entity = &bfqq->entity;
	struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
	bool leftmost = true;

	/*
	 * Do not insert if the queue is already associated with a
	 * counter, which happens if:
	 * 1) a request arrival has caused the queue to become both
	 *    non-weight-raised, and hence change its weight, and
	 *    backlogged; in this respect, each of the two events
	 *    causes an invocation of this function,
	 * 2) this is the invocation of this function caused by the
	 *    second event. This second invocation is actually useless,
	 *    and we handle this fact by exiting immediately. More
	 *    efficient or clearer solutions might possibly be adopted.
	 */
	if (bfqq->weight_counter)
		return;

	/*
	 * In the unlucky event of an allocation failure, we just
	 * exit. This will cause the weight of queue to not be
	 * considered in bfq_asymmetric_scenario, which, in its turn,
	 * causes the scenario to be deemed wrongly symmetric in case
	 * bfqq's weight would have been the only weight making the
	 * scenario asymmetric. On the bright side, no unbalance will
	 * however occur when bfqq becomes inactive again (the
	 * invocation of this function is triggered by an activation
	 * of queue). In fact, bfq_weights_tree_remove does nothing
	 * if !bfqq->weight_counter.
	 */
	if (unlikely(!bfqq->weight_counter))
		return;
/*
 * Decrement the weight counter associated with the queue, and, if the
 * counter reaches 0, remove the counter from the tree.
 * See the comments to the function bfq_weights_tree_add() for
 * considerations about overhead.
 */
void bfq_weights_tree_remove(struct bfq_queue *bfqq)
{
	struct rb_root_cached *root;

	if (!bfqq->weight_counter)
		return;

	root = &bfqq->bfqd->queue_weights_tree;

	bfqq->weight_counter->num_active--;
	if (bfqq->weight_counter->num_active > 0)
		goto reset_entity_pointer;
	/* Follow expired path, else get first next available. */
	next = bfq_check_fifo(bfqq, last);
	if (next)
		return next;

	if (rbprev)
		prev = rb_entry_rq(rbprev);

	if (rbnext)
		next = rb_entry_rq(rbnext);
	else {
		rbnext = rb_first(&bfqq->sort_list);
		if (rbnext && rbnext != &last->rb_node)
			next = rb_entry_rq(rbnext);
	}
/**
 * bfq_updated_next_req - update the queue after a new next_rq selection.
 * @bfqd: the device data the queue belongs to.
 * @bfqq: the queue to update.
 *
 * If the first request of a queue changes we make sure that the queue
 * has enough budget to serve at least its first request (if the
 * request has grown). We do this because if the queue has not enough
 * budget for its first request, it has to go through two dispatch
 * rounds to actually get it dispatched.
 */
static void bfq_updated_next_req(struct bfq_data *bfqd,
				 struct bfq_queue *bfqq)
{
	struct bfq_entity *entity = &bfqq->entity;
	struct request *next_rq = bfqq->next_rq;
	unsigned long new_budget;

	if (!next_rq)
		return;

	if (bfqq == bfqd->in_service_queue)
		/*
		 * In order not to break guarantees, budgets cannot be
		 * changed after an entity has been selected.
		 */
		return;

	new_budget = max_t(unsigned long,
			   max_t(unsigned long, bfqq->max_budget,
				 bfq_serv_to_charge(next_rq, bfqq)),
			   entity->service);
	if (entity->budget != new_budget) {
		entity->budget = new_budget;
		bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
			     new_budget);
		bfq_requeue_bfqq(bfqd, bfqq, false);
	}
}
	/*
	 * Limit duration between 3 and 25 seconds. The upper limit
	 * has been conservatively set after the following worst case:
	 * on a QEMU/KVM virtual machine
	 * - running in a slow PC
	 * - with a virtual disk stacked on a slow low-end 5400rpm HDD
	 * - serving a heavy I/O workload, such as the sequential reading
	 *   of several files
	 * mplayer took 23 seconds to start, if constantly weight-raised.
	 *
	 * As for higher values than that accommodating the above bad
	 * scenario, tests show that higher values would often yield
	 * the opposite of the desired result, i.e., would worsen
	 * responsiveness by allowing non-interactive applications to
	 * preserve weight raising for too long.
	 *
	 * On the other end, lower values than 3 seconds make it
	 * difficult for most interactive tasks to complete their jobs
	 * before weight-raising finishes.
	 */
	return clamp_val(dur, msecs_to_jiffies(3000), msecs_to_jiffies(25000));
}
/* switch back from soft real-time to interactive weight raising */
static void switch_back_to_interactive_wr(struct bfq_queue *bfqq,
					  struct bfq_data *bfqd)
{
	bfqq->wr_coeff = bfqd->bfq_wr_coeff;
	bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
	bfqq->last_wr_start_finish = bfqq->wr_start_at_switch_to_srt;
}
/* Empty burst list and add just bfqq (see comments on bfq_handle_burst) */ staticvoid bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{ struct bfq_queue *item; struct hlist_node *n;
hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
hlist_del_init(&item->burst_list_node);
	/*
	 * Start the creation of a new burst list only if there is no
	 * active queue. See comments on the conditional invocation of
	 * bfq_handle_burst().
	 */
	if (bfq_tot_busy_queues(bfqd) == 0) {
hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
bfqd->burst_size = 1;
	} else
		bfqd->burst_size = 0;

	bfqd->burst_parent_entity = bfqq->entity.parent;
}

/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	/* Increment burst size to take into account also bfqq */
	bfqd->burst_size++;

	if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
		struct bfq_queue *pos, *bfqq_item;
		struct hlist_node *n;

		/*
		 * Enough queues have been activated shortly after each
		 * other to consider this burst as large.
		 */
		bfqd->large_burst = true;

		/*
		 * We can now mark all queues in the burst list as
		 * belonging to a large burst.
		 */
		hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
				     burst_list_node)
			bfq_mark_bfqq_in_large_burst(bfqq_item);
		bfq_mark_bfqq_in_large_burst(bfqq);

		/*
		 * From now on, and until the current burst finishes, any
		 * new queue being activated shortly after the last queue
		 * was inserted in the burst can be immediately marked as
		 * belonging to a large burst. So the burst list is not
		 * needed any more. Remove it.
		 */
		hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
					  burst_list_node)
			hlist_del_init(&pos->burst_list_node);
	} else /*
		* Burst not yet large: add bfqq to the burst list. Do
		* not increment the ref counter for bfqq, because bfqq
		* is removed from the burst list before freeing bfqq
		* in put_queue.
		*/
		hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
}
/*
 * If many queues belonging to the same group happen to be created
 * shortly after each other, then the processes associated with these
 * queues have typically a common goal. In particular, bursts of queue
 * creations are usually caused by services or applications that spawn
 * many parallel threads/processes. Examples are systemd during boot,
 * or git grep. To help these processes get their job done as soon as
 * possible, it is usually better to not grant either weight-raising
 * or device idling to their queues, unless these queues must be
 * protected from the I/O flowing through other active queues.
 *
 * In this comment we describe, firstly, the reasons why this fact
 * holds, and, secondly, the next function, which implements the main
 * steps needed to properly mark these queues so that they can then be
 * treated in a different way.
 *
 * The above services or applications benefit mostly from a high
 * throughput: the quicker the requests of the activated queues are
 * cumulatively served, the sooner the target job of these queues gets
 * completed. As a consequence, weight-raising any of these queues,
 * which also implies idling the device for it, is almost always
 * counterproductive, unless there are other active queues to isolate
 * these new queues from. If there are no other active queues, then
 * weight-raising these new queues just lowers throughput in most
 * cases.
 *
 * On the other hand, a burst of queue creations may be caused also by
 * the start of an application that does not consist of a lot of
 * parallel I/O-bound threads. In fact, with a complex application,
 * several short processes may need to be executed to start up the
 * application. In this respect, to start an application as quickly as
 * possible, the best thing to do is in any case to privilege the I/O
 * related to the application with respect to all other
 * I/O. Therefore, the best strategy to start as quickly as possible
 * an application that causes a burst of queue creations is to
 * weight-raise all the queues created during the burst. This is the
 * exact opposite of the best strategy for the other type of bursts.
 *
 * In the end, to take the best action for each of the two cases, the
 * two types of bursts need to be distinguished. Fortunately, this
 * seems relatively easy, by looking at the sizes of the bursts. In
 * particular, we found a threshold such that only bursts with a
 * larger size than that threshold are apparently caused by
 * services or commands such as systemd or git grep. For brevity,
 * hereafter we call just 'large' these bursts. BFQ *does not*
 * weight-raise queues whose creation occurs in a large burst. In
 * addition, for each of these queues BFQ performs or does not perform
 * idling depending on which choice boosts the throughput more. The
 * exact choice depends on the device and request pattern at
 * hand.
 *
 * Unfortunately, false positives may occur while an interactive task
 * is starting (e.g., an application is being started). The
 * consequence is that the queues associated with the task do not
 * enjoy weight raising as expected. Fortunately these false positives
 * are very rare. They typically occur if some service happens to
 * start doing I/O exactly when the interactive task starts.
 *
 * Turning back to the next function, it is invoked only if there are
 * no active queues (apart from active queues that would belong to the
 * same, possible burst bfqq would belong to), and it implements all
 * the steps needed to detect the occurrence of a large burst and to
 * properly mark all the queues belonging to it (so that they can then
 * be treated in a different way). This goal is achieved by
 * maintaining a "burst list" that holds, temporarily, the queues that
 * belong to the burst in progress. The list is then used to mark
 * these queues as belonging to a large burst if the burst does become
 * large. The main steps are the following.
 *
 * . when the very first queue is created, the queue is inserted into the
 *   list (as it could be the first queue in a possible burst)
 *
 * . if the current burst has not yet become large, and a queue Q that does
 *   not yet belong to the burst is activated shortly after the last time
 *   at which a new queue entered the burst list, then the function appends
 *   Q to the burst list
 *
 * . if, as a consequence of the previous step, the burst size reaches
 *   the large-burst threshold, then
 *
 *     . all the queues in the burst list are marked as belonging to a
 *       large burst
 *
 *     . the burst list is deleted; in fact, the burst list already served
 *       its purpose (keeping temporarily track of the queues in a burst,
 *       so as to be able to mark them as belonging to a large burst in the
 *       previous sub-step), and now is not needed any more
 *
 *     . the device enters a large-burst mode
 *
 * . if a queue Q that does not belong to the burst is created while
 *   the device is in large-burst mode and shortly after the last time
 *   at which a queue either entered the burst list or was marked as
 *   belonging to the current large burst, then Q is immediately marked
 *   as belonging to a large burst.
 *
 * . if a queue Q that does not belong to the burst is created a while
 *   later, i.e., not shortly after, the last time at which a queue
 *   either entered the burst list or was marked as belonging to the
 *   current large burst, then the current burst is deemed as finished and:
 *
 *     . the large-burst mode is reset if set
 *
 *     . the burst list is emptied
 *
 *     . Q is inserted in the burst list, as Q may be the first queue
 *       in a possible new burst (then the burst list contains just Q
 *       after this step).
 */
static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
{
	/*
	 * If bfqq is already in the burst list or is part of a large
	 * burst, or finally has just been split, then there is
	 * nothing else to do.
	 */
	if (!hlist_unhashed(&bfqq->burst_list_node) ||
	    bfq_bfqq_in_large_burst(bfqq) ||
	    time_is_after_eq_jiffies(bfqq->split_time +
				     msecs_to_jiffies(10)))
		return;
	/*
	 * If bfqq's creation happens late enough, or bfqq belongs to
	 * a different group than the burst group, then the current
	 * burst is finished, and related data structures must be
	 * reset.
	 *
	 * In this respect, consider the special case where bfqq is
	 * the very first queue created after BFQ is selected for this
	 * device. In this case, last_ins_in_burst and
	 * burst_parent_entity are not yet significant when we get
	 * here. But it is easy to verify that, whether or not the
	 * following condition is true, bfqq will end up being
	 * inserted into the burst list. In particular the list will
	 * happen to contain only bfqq. And this is exactly what has
	 * to happen, as bfqq may be the first queue of the first
	 * burst.
	 */
	if (time_is_before_jiffies(bfqd->last_ins_in_burst +
				   bfqd->bfq_burst_interval) ||
	    bfqq->entity.parent != bfqd->burst_parent_entity) {
		bfqd->large_burst = false;
		bfq_reset_burst_list(bfqd, bfqq);
		goto end;
	}
	/*
	 * If we get here, then bfqq is being activated shortly after the
	 * last queue. So, if the current burst is also large, we can mark
	 * bfqq as belonging to this large burst immediately.
	 */
	if (bfqd->large_burst) {
		bfq_mark_bfqq_in_large_burst(bfqq);
		goto end;
	}
	/*
	 * If we get here, then a large-burst state has not yet been
	 * reached, but bfqq is being activated shortly after the last
	 * queue. Then we add bfqq to the burst.
	 */
bfq_add_to_burst(bfqd, bfqq);
end:
	/*
	 * At this point, bfqq either has been added to the current
	 * burst or has caused the current burst to terminate and a
	 * possible new burst to start. In particular, in the second
	 * case, bfqq has become the first queue in the possible new
	 * burst. In both cases last_ins_in_burst needs to be moved
	 * forward.
	 */
bfqd->last_ins_in_burst = jiffies;
}
/*
 * If enough samples have been computed, return the current max budget
 * stored in bfqd, which is dynamically updated according to the
 * estimated disk peak rate; otherwise return the default max budget
 */
static int bfq_max_budget(struct bfq_data *bfqd)
{
	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
		return bfq_default_max_budget;
	else
		return bfqd->bfq_max_budget;
}
/*
 * Return min budget, which is a fraction of the current or default
 * max budget (trying with 1/32)
 */
static int bfq_min_budget(struct bfq_data *bfqd)
{
	if (bfqd->budgets_assigned < bfq_stats_min_budgets)
		return bfq_default_max_budget / 32;
	else
		return bfqd->bfq_max_budget / 32;
}
/*
 * The next function, invoked after the input queue bfqq switches from
 * idle to busy, updates the budget of bfqq. The function also tells
 * whether the in-service queue should be expired, by returning
 * true. The purpose of expiring the in-service queue is to give bfqq
 * the chance to possibly preempt the in-service queue, and the reason
 * for preempting the in-service queue is to achieve one of the two
 * goals below.
 *
 * 1. Guarantee to bfqq its reserved bandwidth even if bfqq has
 * expired because it has remained idle. In particular, bfqq may have
 * expired for one of the following two reasons:
 *
 * - BFQQE_NO_MORE_REQUESTS bfqq did not enjoy any device idling
 *   and did not make it to issue a new request before its last
 *   request was served;
 *
 * - BFQQE_TOO_IDLE bfqq did enjoy device idling, but did not issue
 *   a new request before the expiration of the idling-time.
 *
 * Even if bfqq has expired for one of the above reasons, the process
 * associated with the queue may be however issuing requests greedily,
 * and thus be sensitive to the bandwidth it receives (bfqq may have
 * remained idle for other reasons: CPU high load, bfqq not enjoying
 * idling, I/O throttling somewhere in the path from the process to
 * the I/O scheduler, ...). But if, after every expiration for one of
 * the above two reasons, bfqq has to wait for the service of at least
 * one full budget of another queue before being served again, then
 * bfqq is likely to get a much lower bandwidth or resource time than
 * its reserved ones. To address this issue, two countermeasures need
 * to be taken.
 *
 * First, the budget and the timestamps of bfqq need to be updated in
 * a special way on bfqq reactivation: they need to be updated as if
 * bfqq did not remain idle and did not expire. In fact, if they are
 * computed as if bfqq expired and remained idle until reactivation,
 * then the process associated with bfqq is treated as if, instead of
 * being greedy, it stopped issuing requests when bfqq remained idle,
 * and restarts issuing requests only on this reactivation. In other
 * words, the scheduler does not help the process recover the "service
 * hole" between bfqq expiration and reactivation. As a consequence,
 * the process receives a lower bandwidth than its reserved one. In
 * contrast, to recover this hole, the budget must be updated as if
 * bfqq was not expired at all before this reactivation, i.e., it must
 * be set to the value of the remaining budget when bfqq was
 * expired. Along the same line, timestamps need to be assigned the
 * value they had the last time bfqq was selected for service, i.e.,
 * before last expiration. Thus timestamps need to be back-shifted
 * with respect to their normal computation (see [1] for more details
 * on this tricky aspect).
 *
 * Secondly, to allow the process to recover the hole, the in-service
 * queue must be expired too, to give bfqq the chance to preempt it
 * immediately. In fact, if bfqq has to wait for a full budget of the
 * in-service queue to be completed, then it may become impossible to
 * let the process recover the hole, even if the back-shifted
 * timestamps of bfqq are lower than those of the in-service queue. If
 * this happens for most or all of the holes, then the process may not
 * receive its reserved bandwidth. In this respect, it is worth noting
 * that, since the service of outstanding requests is unpreemptible, a
 * little fraction of the holes may however be unrecoverable, thereby
 * causing a little loss of bandwidth.
 *
 * The last important point is detecting whether bfqq does need this
 * bandwidth recovery. In this respect, the next function deems the
 * process associated with bfqq greedy, and thus allows it to recover
 * the hole, if: 1) the process is waiting for the arrival of a new
 * request (which implies that bfqq expired for one of the above two
 * reasons), and 2) such a request has arrived soon. The first
 * condition is controlled through the flag non_blocking_wait_rq,
 * while the second through the flag arrived_in_time. If both
 * conditions hold, then the function computes the budget in the
 * above-described special way, and signals that the in-service queue
 * should be expired. Timestamp back-shifting is done later in
 * __bfq_activate_entity.
 *
 * 2. Reduce latency. Even if timestamps are not back-shifted to let
 * the process associated with bfqq recover a service hole, bfqq may
 * however happen to have, after being (re)activated, a lower finish
 * timestamp than the in-service queue. That is, the next budget of
 * bfqq may have to be completed before the one of the in-service
 * queue. If this is the case, then preempting the in-service queue
 * allows this goal to be achieved, apart from the unpreemptible,
 * outstanding requests mentioned above.
 *
 * Unfortunately, regardless of which of the above two goals one wants
 * to achieve, service trees need first to be updated to know whether
 * the in-service queue must be preempted. To have service trees
 * correctly updated, the in-service queue must be expired and
 * rescheduled, and bfqq must be scheduled too. This is one of the
 * most costly operations (in future versions, the scheduling
 * mechanism may be re-designed in such a way to make it possible to
 * know whether preemption is needed without needing to update service
 * trees). In addition, queue preemptions almost always cause random
 * I/O, which may in turn cause loss of throughput. Finally, there may
 * even be no in-service queue when the next function is invoked (so,
 * no queue to compare timestamps with). Because of these facts, the
 * next function adopts the following simple scheme to avoid costly
 * operations, too frequent preemptions and too many dependencies on
 * the state of the scheduler: it requests the expiration of the
 * in-service queue (unconditionally) only for queues that need to
 * recover a hole. Then it delegates to other parts of the code the
 * responsibility of handling the above case 2.
 */
static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd,
						struct bfq_queue *bfqq,
						bool arrived_in_time)
{
	struct bfq_entity *entity = &bfqq->entity;
	/*
	 * In the next compound condition, we check also whether there
	 * is some budget left, because otherwise there is no point in
	 * trying to go on serving bfqq with this same budget: bfqq
	 * would be expired immediately after being selected for
	 * service. This would only cause useless overhead.
	 */
	if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time &&
	    bfq_bfqq_budget_left(bfqq) > 0) {
		/*
		 * We do not clear the flag non_blocking_wait_rq here, as
		 * the latter is used in bfq_activate_bfqq to signal
		 * that timestamps need to be back-shifted (and is
		 * cleared right after).
		 */

		/*
		 * In next assignment we rely on that either
		 * entity->service or entity->budget are not updated
		 * on expiration if bfqq is empty (see
		 * __bfq_bfqq_recalc_budget). Thus both quantities
		 * remain unchanged after such an expiration, and the
		 * following statement therefore assigns to
		 * entity->budget the remaining budget on such an
		 * expiration.
		 */
		entity->budget = min_t(unsigned long,
				       bfq_bfqq_budget_left(bfqq),
				       bfqq->max_budget);

		/*
		 * At this point, we have used entity->service to get
		 * the budget left (needed for updating
		 * entity->budget). Thus we finally can, and have to,
		 * reset entity->service. The latter must be reset
		 * because bfqq would otherwise be charged again for
		 * the service it has received during its previous
		 * service slot(s).
		 */
		entity->service = 0;

		return true;
	}

	/*
	 * We can finally complete expiration, by setting service to 0.
	 */
	entity->service = 0;
	entity->budget = max_t(unsigned long, bfqq->max_budget,
			       bfq_serv_to_charge(bfqq->next_rq, bfqq));
	bfq_clear_bfqq_non_blocking_wait_rq(bfqq);
	return false;
}
/*
 * Return the farthest past time instant according to jiffies
 * macros.
 */
static unsigned long bfq_smallest_from_now(void)
{
	return jiffies - MAX_JIFFY_OFFSET;
}
static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd,
					     struct bfq_queue *bfqq,
					     unsigned int old_wr_coeff,
					     bool wr_or_deserves_wr,
					     bool interactive,
					     bool in_burst,
					     bool soft_rt)
{
	if (old_wr_coeff == 1 && wr_or_deserves_wr) {
		/* start a weight-raising period */
		if (interactive) {
bfqq->service_from_wr = 0;
bfqq->wr_coeff = bfqd->bfq_wr_coeff;
bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
		} else {
			/*
			 * No interactive weight raising in progress
			 * here: assign minus infinity to
			 * wr_start_at_switch_to_srt, to make sure
			 * that, at the end of the soft-real-time
			 * weight-raising period that is starting
			 * now, no interactive weight-raising period
			 * may be wrongly considered as still in
			 * progress (and thus actually started by
			 * mistake).
			 */
			bfqq->wr_start_at_switch_to_srt =
				bfq_smallest_from_now();
			bfqq->wr_coeff = bfqd->bfq_wr_coeff *
				BFQ_SOFTRT_WEIGHT_FACTOR;
			bfqq->wr_cur_max_time =
				bfqd->bfq_wr_rt_max_time;
		}
		/*
		 * If needed, further reduce budget to make sure it is
		 * close to bfqq's backlog, so as to reduce the
		 * scheduling-error component due to a too large
		 * budget. Do not care about throughput consequences,
		 * but only about latency. Finally, do not assign a
		 * too small budget either, to avoid increasing
		 * latency by causing too frequent expirations.
		 */
		bfqq->entity.budget = min_t(unsigned long,
					    bfqq->entity.budget,
					    2 * bfq_min_budget(bfqd));
	} else if (old_wr_coeff > 1) {
		if (interactive) { /* update wr coeff and duration */
			bfqq->wr_coeff = bfqd->bfq_wr_coeff;
			bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
		} else if (in_burst)
			bfqq->wr_coeff = 1;
		else if (soft_rt) {
			/*
			 * The application is now or still meeting the
			 * requirements for being deemed soft rt. We
			 * can then correctly and safely (re)charge
			 * the weight-raising duration for the
			 * application with the weight-raising
			 * duration for soft rt applications.
			 *
			 * In particular, doing this recharge now, i.e.,
			 * before the weight-raising period for the
			 * application finishes, reduces the probability
			 * of the following negative scenario:
			 * 1) the weight of a soft rt application is
			 *    raised at startup (as for any newly
			 *    created application),
			 * 2) since the application is not interactive,
			 *    at a certain time weight-raising is
			 *    stopped for the application,
			 * 3) at that time the application happens to
			 *    still have pending requests, and hence
			 *    is destined to not have a chance to be
			 *    deemed soft rt before these requests are
			 *    completed (see the comments to the
			 *    function bfq_bfqq_softrt_next_start()
			 *    for details on soft rt detection),
			 * 4) these pending requests experience a high
			 *    latency because the application is not
			 *    weight-raised while they are pending.
			 */
			if (bfqq->wr_cur_max_time !=
				bfqd->bfq_wr_rt_max_time) {
				bfqq->wr_start_at_switch_to_srt =
					bfqq->last_wr_start_finish;

				bfqq->wr_cur_max_time =
					bfqd->bfq_wr_rt_max_time;
				bfqq->wr_coeff = bfqd->bfq_wr_coeff *
					BFQ_SOFTRT_WEIGHT_FACTOR;
			}
			bfqq->last_wr_start_finish = jiffies;
		}
	}
}
/*
 * Return true if bfqq is in a higher priority class, or has a higher
 * weight than the in-service queue.
 */
static bool bfq_bfqq_higher_class_or_weight(struct bfq_queue *bfqq,
					    struct bfq_queue *in_serv_bfqq)
{
	int bfqq_weight, in_serv_weight;

	if (bfqq->ioprio_class < in_serv_bfqq->ioprio_class)
		return true;

	if (in_serv_bfqq->entity.parent == bfqq->entity.parent) {
		bfqq_weight = bfqq->entity.weight;
		in_serv_weight = in_serv_bfqq->entity.weight;
	} else {
		if (bfqq->entity.parent)
			bfqq_weight = bfqq->entity.parent->weight;
		else
			bfqq_weight = bfqq->entity.weight;
		if (in_serv_bfqq->entity.parent)
			in_serv_weight = in_serv_bfqq->entity.parent->weight;
		else
			in_serv_weight = in_serv_bfqq->entity.weight;
	}

	return bfqq_weight > in_serv_weight;
}
/*
 * Get the index of the actuator that will serve bio.
 */
static unsigned int bfq_actuator_index(struct bfq_data *bfqd, struct bio *bio)
{
	unsigned int i;
	sector_t end;

	/* no search needed if one or zero ranges present */
	if (bfqd->num_actuators == 1)
		return 0;

	/* bio_end_sector(bio) gives the sector after the last one */
	end = bio_end_sector(bio) - 1;

	for (i = 0; i < bfqd->num_actuators; i++) {
		if (end >= bfqd->sector[i] &&
		    end < bfqd->sector[i] + bfqd->nr_sectors[i])
			return i;
	}

	WARN_ONCE(true,
		  "bfq_actuator_index: bio sector out of ranges: end=%llu\n",
		  end);
	return 0;
}