/**
 * DOC: Logical Rings, Logical Ring Contexts and Execlists
 *
 * Motivation:
 * GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts".
 * These expanded contexts enable a number of new abilities, especially
 * "Execlists" (also implemented in this file).
 *
 * One of the main differences with the legacy HW contexts is that logical
 * ring contexts incorporate many more things to the context's state, like
 * PDPs or ringbuffer control registers:
 *
 * The reason why PDPs are included in the context is straightforward: as
 * PPGTTs (per-process GTTs) are actually per-context, having the PDPs
 * contained there means you don't need to do a ppgtt->switch_mm yourself,
 * instead, the GPU will do it for you on the context switch.
 *
 * But, what about the ringbuffer control registers (head, tail, etc..)?
 * shouldn't we just need a set of those per engine command streamer? This is
 * where the name "Logical Rings" starts to make sense: by virtualizing the
 * rings, the engine cs shifts to a new "ring buffer" with every context
 * switch. When you want to submit a workload to the GPU you: A) choose your
 * context, B) find its appropriate virtualized ring, C) write commands to it
 * and then, finally, D) tell the GPU to switch to that context.
 *
 * Instead of the legacy MI_SET_CONTEXT, the way you tell the GPU to switch
 * to a context is via a context execution list, ergo "Execlists".
 *
 * LRC implementation:
 * Regarding the creation of contexts, we have:
 *
 * - One global default context.
 * - One local default context for each opened fd.
 * - One local extra context for each context create ioctl call.
 *
 * Now that ringbuffers belong per-context (and not per-engine, like before)
 * and that contexts are uniquely tied to a given engine (and not reusable,
 * like before) we need:
 *
 * - One ringbuffer per-engine inside each context.
 * - One backing object per-engine inside each context.
 *
 * The global default context starts its life with these new objects fully
 * allocated and populated. The local default context for each opened fd is
 * more complex, because we don't know at creation time which engine is going
 * to use them. To handle this, we have implemented a deferred creation of LR
 * contexts:
 *
 * The local context starts its life as a hollow or blank holder, that only
 * gets populated for a given engine once we receive an execbuffer. If later
 * on we receive another execbuffer ioctl for the same context but a different
 * engine, we allocate/populate a new ringbuffer and context backing object and
 * so on.
 *
 * Finally, regarding local contexts created using the ioctl call: as they are
 * only allowed with the render ring, we can allocate & populate them right
 * away (no need to defer anything, at least for now).
 *
 * Execlists implementation:
 * Execlists are the new method by which, on gen8+ hardware, workloads are
 * submitted for execution (as opposed to the legacy, ringbuffer-based, method).
 * This method works as follows:
 *
 * When a request is committed, its commands (the BB start and any leading or
 * trailing commands, like the seqno breadcrumbs) are placed in the ringbuffer
 * for the appropriate context. The tail pointer in the hardware context is not
 * updated at this time, but instead, kept by the driver in the ringbuffer
 * structure. A structure representing this request is added to a request queue
 * for the appropriate engine: this structure contains a copy of the context's
 * tail after the request was written to the ring buffer and a pointer to the
 * context itself.
 *
 * If the engine's request queue was empty before the request was added, the
 * queue is processed immediately. Otherwise the queue will be processed during
 * a context switch interrupt. In any case, elements on the queue will get sent
 * (in pairs) to the GPU's ExecLists Submit Port (ELSP, for short) with a
 * globally unique 20-bit submission ID.
 *
 * When execution of a request completes, the GPU updates the context status
 * buffer with a context complete event and generates a context switch interrupt.
 * During the interrupt handling, the driver examines the events in the buffer:
 * for each context complete event, if the announced ID matches that on the head
 * of the request queue, then that request is retired and removed from the queue.
 *
 * After processing, if any requests were retired and the queue is not empty
 * then a new execution list can be submitted. The two requests at the front of
 * the queue are next to be submitted but since a context may not occur twice in
 * an execution list, if subsequent requests have the same ID as the first then
 * the two requests must be combined. This is done simply by discarding requests
 * at the head of the queue until either only one request is left (in which case
 * we use a NULL second context) or the first two requests have unique IDs.
 *
 * By always executing the first two requests in the queue the driver ensures
 * that the GPU is kept as busy as possible. In the case where a single context
 * completes but a second context is still executing, the request for this second
 * context will be at the head of the queue when we remove the first one. This
 * request will then be resubmitted along with a new request for a different
 * context, which will cause the hardware to continue executing the second
 * request and queue the new request (the GPU detects the condition of a context
 * getting preempted with the same context and optimizes the context switch flow
 * by not doing preemption, but just sampling the new tail pointer).
 */

#include <linux/interrupt.h>
#include <linux/string_helpers.h>
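The pairing rule described in the DOC comment above (two unique-context requests per execution list, a NULL second context otherwise) can be sketched in plain C. All names below are illustrative stand-ins, not the driver's API; a second index of -1 models the NULL second context.

```c
/*
 * Illustrative sketch (not the driver's API) of the ELSP pairing rule:
 * take the request at the head of the queue, then scan forward past any
 * requests sharing its context ID, since a context may not occur twice
 * in one execution list. If no request with a different context exists,
 * *second is left as -1, standing in for the NULL second context.
 */
struct sketch_request {
	int ctx_id;
};

static void pick_elsp_pair(const struct sketch_request *q, int n,
			   int *first, int *second)
{
	int i;

	*first = n > 0 ? 0 : -1;
	*second = -1;
	for (i = 1; i < n; i++) {
		if (q[i].ctx_id != q[0].ctx_id) {
			*second = i; /* first request with a unique context */
			break;
		}
	}
}
```

In the real driver the duplicate-context requests are combined into a single RING_TAIL update rather than skipped, but the slot selection follows the same shape.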
	/*
	 * We allow only a single request through the virtual engine at a time
	 * (each request in the timeline waits for the completion fence of
	 * the previous before being submitted). By restricting ourselves to
	 * only submitting a single request, each request is placed on to a
	 * physical engine to maximise load spreading (by virtue of the late
	 * greedy scheduling -- each real engine takes the next available
	 * request upon idling).
	 */
	struct i915_request *request;

	/*
	 * We keep a rbtree of available virtual engines inside each physical
	 * engine, sorted by priority. Here we preallocate the nodes we need
	 * for the virtual engine, indexed by physical_engine->id.
	 */
	struct ve_node {
		struct rb_node rb;
		int prio;
	} nodes[I915_NUM_ENGINES];

	/* And finally, which physical engines this virtual engine maps onto. */
	unsigned int num_siblings;
	struct intel_engine_cs *siblings[];
};
static void ring_set_paused(const struct intel_engine_cs *engine, int state)
{
	/*
	 * We inspect HWS_PREEMPT with a semaphore inside
	 * engine->emit_fini_breadcrumb. If the dword is true,
	 * the ring is paused as the semaphore will busywait
	 * until the dword is false.
	 */
	engine->status_page.addr[I915_GEM_HWS_PREEMPT] = state;
	if (state)
		wmb();
}
static int effective_prio(const struct i915_request *rq)
{
	int prio = rq_prio(rq);

	/*
	 * If this request is special and must not be interrupted at any
	 * cost, so be it. Note we are only checking the most recent request
	 * in the context and so may be masking an earlier vip request. It
	 * is hoped that under the conditions where nopreempt is used, this
	 * will not matter (i.e. all requests to that context will be
	 * nopreempt for as long as desired).
	 */
	if (i915_request_has_nopreempt(rq))
		prio = I915_PRIORITY_UNPREEMPTABLE;

	return prio;
}
static bool need_preempt(const struct intel_engine_cs *engine,
			 const struct i915_request *rq)
{
	int last_prio;

	if (!intel_engine_has_semaphores(engine))
		return false;

	/*
	 * Check if the current priority hint merits a preemption attempt.
	 *
	 * We record the highest value priority we saw during rescheduling
	 * prior to this dequeue, therefore we know that if it is strictly
	 * less than the current tail of ELSP[0], we do not need to force
	 * a preempt-to-idle cycle.
	 *
	 * However, the priority hint is a mere hint that we may need to
	 * preempt. If that hint is stale or we may be trying to preempt
	 * ourselves, ignore the request.
	 *
	 * More naturally we would write
	 *	prio >= max(0, last);
	 * except that we wish to prevent triggering preemption at the same
	 * priority level: the task that is running should remain running
	 * to preserve FIFO ordering of dependencies.
	 */
	last_prio = max(effective_prio(rq), I915_PRIORITY_NORMAL - 1);
	if (engine->sched_engine->queue_priority_hint <= last_prio)
		return false;

	/*
	 * Check against the first request in ELSP[1], it will, thanks to the
	 * power of PI, be the highest priority of that context.
	 */
	if (!list_is_last(&rq->sched.link, &engine->sched_engine->requests) &&
	    rq_prio(list_next_entry(rq, sched.link)) > last_prio)
		return true;

	/*
	 * If the inflight context did not trigger the preemption, then maybe
	 * it was the set of queued requests? Pick the highest priority in
	 * the queue (the first active priolist) and see if it deserves to be
	 * running instead of ELSP[0].
	 *
	 * The highest priority request in the queue cannot be either
	 * ELSP[0] or ELSP[1] as, thanks again to PI, if it was the same
	 * context, its priority would not exceed ELSP[0] aka last_prio.
	 */
	return max(virtual_prio(&engine->execlists),
		   queue_prio(engine->sched_engine)) > last_prio;
}
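The strict inequality in need_preempt() above is the whole trick: preemption fires only for strictly greater priority, so equal-priority work keeps its FIFO order. A minimal standalone sketch (hypothetical names, with the normal priority taken as 0 for illustration):

```c
#include <stdbool.h>

#define SKETCH_PRIO_NORMAL 0

/*
 * Sketch of the comparison used by need_preempt(): the running request's
 * priority is floored at SKETCH_PRIO_NORMAL - 1, and preemption is only
 * requested when the queue's priority hint strictly exceeds it, so a
 * same-priority hint never disturbs the running context.
 */
static bool sketch_need_preempt(int hint, int running_prio)
{
	int last = running_prio > SKETCH_PRIO_NORMAL - 1 ?
		   running_prio : SKETCH_PRIO_NORMAL - 1;

	return hint > last;
}
```

With `>=` instead of `>`, two tasks at the same priority could endlessly preempt one another; the strict form leaves the incumbent running.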
__maybe_unused static bool
assert_priority_queue(const struct i915_request *prev,
		      const struct i915_request *next)
{
	/*
	 * Without preemption, the prev may refer to the still active element
	 * which we refuse to let go.
	 *
	 * Even with preemption, there are times when we think it is better
	 * not to preempt and leave an ostensibly lower priority request in
	 * flight.
	 */
	if (i915_request_is_active(prev))
		return true;

	return rq_prio(prev) >= rq_prio(next);
}
		/* Check in case we rollback so far we wrap [size/2] */
		if (intel_ring_direction(rq->ring,
					 rq->tail,
					 rq->ring->tail + 8) > 0)
			rq->context->lrc.desc |= CTX_DESC_FORCE_RESTORE;

		active = rq;
	}

	return active;
}
static void
execlists_context_status_change(struct i915_request *rq, unsigned long status)
{
	/*
	 * Only used when GVT-g is enabled now. When GVT-g is disabled,
	 * the compiler should eliminate this function as dead-code.
	 */
	if (!IS_ENABLED(CONFIG_DRM_I915_GVT))
		return;
	/*
	 * The executing context has been cancelled. We want to prevent
	 * further execution along this context and propagate the error on
	 * to anything depending on its results.
	 *
	 * In __i915_request_submit(), we apply the -EIO and remove the
	 * requests' payloads for any banned requests. But first, we must
	 * rewind the context back to the start of the incomplete request so
	 * that we do not jump back into the middle of the batch.
	 *
	 * We preserve the breadcrumbs and semaphores of the incomplete
	 * requests so that inter-timeline dependencies (i.e. other timelines)
	 * remain correctly ordered. And we defer to __i915_request_submit()
	 * so that all asynchronous waits are correctly handled.
	 */
	ENGINE_TRACE(engine, "{ reset rq=%llx:%lld }\n",
		     rq->fence.context, rq->fence.seqno);

	/* On resubmission of the active request, payload will be scrubbed */
	if (__i915_request_is_complete(rq))
		head = rq->tail;
	else
		head = __active_request(ce->timeline, rq, -EIO)->head;
	head = intel_ring_wrap(ce->ring, head);

	/* Scrub the context image to prevent replaying the previous batch */
	lrc_init_regs(ce, engine, true);

	/* We've switched away, so this should be a no-op, but intent matters */
	ce->lrc.lrca = lrc_update_regs(ce, engine, head);
}
	if (unlikely(intel_context_is_closed(ce) &&
		     !intel_engine_has_heartbeat(engine)))
		intel_context_set_exiting(ce);

	if (unlikely(!intel_context_is_schedulable(ce) || bad_request(rq)))
		reset_active(rq, engine);

	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
		lrc_check_regs(ce, engine, "before");

	if (ce->tag) {
		/* Use a fixed tag for OA and friends */
		GEM_BUG_ON(ce->tag <= BITS_PER_LONG);
		ce->lrc.ccid = ce->tag;
	} else if (GRAPHICS_VER_FULL(engine->i915) >= IP_VER(12, 55)) {
		/* We don't need a strict matching tag, just different values */
		unsigned int tag = ffs(READ_ONCE(engine->context_tag));

		GEM_BUG_ON(tag == 0 || tag >= BITS_PER_LONG);
		clear_bit(tag - 1, &engine->context_tag);

		ce->lrc.ccid = tag << (XEHP_SW_CTX_ID_SHIFT - 32);
	/*
	 * After this point, the rq may be transferred to a new sibling, so
	 * before we clear ce->inflight make sure that the context has been
	 * removed from the b->signalers and furthermore we need to make sure
	 * that the concurrent iterator in signal_irq_work is no longer
	 * following ce->signal_link.
	 */
	if (!list_empty(&ce->signals))
		intel_context_remove_breadcrumbs(ce, engine->breadcrumbs);

	/*
	 * This engine is now too busy to run this virtual request, so
	 * see if we can find an alternative engine for it to execute on.
	 * Once a request has become bonded to this engine, we treat it the
	 * same as any other native request.
	 */
	if (i915_request_in_priority_queue(rq) &&
	    rq->execution_mask != engine->mask)
		resubmit_virtual_request(rq, ve);

	if (READ_ONCE(ve->request))
		tasklet_hi_schedule(&ve->base.sched_engine->tasklet);
}
/*
 * NB process_csb() is not under the engine->sched_engine->lock and hence
 * schedule_out can race with schedule_in meaning that we should
 * refrain from doing non-trivial work here.
 */

	if (IS_ENABLED(CONFIG_DRM_I915_DEBUG_GEM))
		lrc_check_regs(ce, engine, "after");

	/*
	 * If we have just completed this context, the engine may now be
	 * idle and we want to re-enter powersaving.
	 */
	if (intel_timeline_is_last(ce->timeline, rq) &&
	    __i915_request_is_complete(rq))
		intel_engine_add_retire(engine, ce->timeline);

	/*
	 * If this is part of a virtual engine, its next request may
	 * have been blocked waiting for access to the active context.
	 * We have to kick all the siblings again in case we need to
	 * switch (e.g. the next request is not runnable on this
	 * engine). Hopefully, we will already have submitted the next
	 * request before the tasklet runs and do not need to rebuild
	 * each virtual tree and kick everyone again.
	 */
	if (ce->engine != engine)
		kick_siblings(rq, ce);

	desc = ce->lrc.desc;
	if (rq->engine->flags & I915_ENGINE_HAS_EU_PRIORITY)
		desc |= map_i915_prio_to_lrc_desc_prio(rq_prio(rq));
	/*
	 * WaIdleLiteRestore:bdw,skl
	 *
	 * We should never submit the context with the same RING_TAIL twice
	 * just in case we submit an empty ring, which confuses the HW.
	 *
	 * We append a couple of NOOPs (gen8_emit_wa_tail) after the end of
	 * the normal request to be able to always advance the RING_TAIL on
	 * subsequent resubmissions (for lite restore). Should that fail us,
	 * and we try and submit the same tail again, force the context
	 * reload.
	 *
	 * If we need to return to a preempted context, we need to skip the
	 * lite-restore and force it to reload the RING_TAIL. Otherwise, the
	 * HW has a tendency to ignore us rewinding the TAIL to the end of
	 * an earlier request.
	 */
	GEM_BUG_ON(ce->lrc_reg_state[CTX_RING_TAIL] != rq->ring->tail);
	prev = rq->ring->tail;
	tail = intel_ring_set_tail(rq->ring, rq->tail);
	if (unlikely(intel_ring_direction(rq->ring, tail, prev) <= 0))
		desc |= CTX_DESC_FORCE_RESTORE;
	ce->lrc_reg_state[CTX_RING_TAIL] = tail;
	rq->tail = rq->wa_tail;

	/*
	 * Make sure the context image is complete before we submit it to HW.
	 *
	 * Ostensibly, writes (including the WCB) should be flushed prior to
	 * an uncached write such as our mmio register access, the empirical
	 * evidence (esp. on Braswell) suggests that the WC write into memory
	 * may not be visible to the HW prior to the completion of the UC
	 * register write and that we may begin execution from the context
	 * before its image is complete leading to invalid PD chasing.
	 */
	wmb();
		if (ce == rq->context) {
			GEM_TRACE_ERR("%s: Dup context:%llx in pending[%zd]\n",
				      engine->name,
				      ce->timeline->fence_context,
				      port - execlists->pending);
			return false;
		}
		ce = rq->context;

		if (ccid == ce->lrc.ccid) {
			GEM_TRACE_ERR("%s: Dup ccid:%x context:%llx in pending[%zd]\n",
				      engine->name,
				      ccid, ce->timeline->fence_context,
				      port - execlists->pending);
			return false;
		}
		ccid = ce->lrc.ccid;

		/*
		 * Sentinels are supposed to be the last request so they flush
		 * the current execution off the HW. Check that they are the
		 * only request in the pending submission.
		 *
		 * NB: Due to the async nature of preempt-to-busy and request
		 * cancellation we need to handle the case where request
		 * becomes a sentinel in parallel to CSB processing.
		 */
		if (prev && i915_request_has_sentinel(prev) &&
		    !READ_ONCE(prev->fence.error)) {
			GEM_TRACE_ERR("%s: context:%llx after sentinel in pending[%zd]\n",
				      engine->name,
				      ce->timeline->fence_context,
				      port - execlists->pending);
			return false;
		}
		prev = rq;

		/*
		 * We want virtual requests to only be in the first slot so
		 * that they are never stuck behind a hog and can be
		 * immediately transferred onto the next idle engine.
		 */
		if (rq->execution_mask != engine->mask &&
		    port != execlists->pending) {
			GEM_TRACE_ERR("%s: virtual engine:%llx not in prime position[%zd]\n",
				      engine->name,
				      ce->timeline->fence_context,
				      port - execlists->pending);
			return false;
		}
		/* Hold tightly onto the lock to prevent concurrent retires! */
		if (!spin_trylock_irqsave(&rq->lock, flags))
			continue;

		if (__i915_request_is_complete(rq))
			goto unlock;

		if (i915_active_is_idle(&ce->active) &&
		    !intel_context_is_barrier(ce)) {
			GEM_TRACE_ERR("%s: Inactive context:%llx in pending[%zd]\n",
				      engine->name,
				      ce->timeline->fence_context,
				      port - execlists->pending);
			ok = false;
			goto unlock;
		}

		if (!i915_vma_is_pinned(ce->state)) {
			GEM_TRACE_ERR("%s: Unpinned context:%llx in pending[%zd]\n",
				      engine->name,
				      ce->timeline->fence_context,
				      port - execlists->pending);
			ok = false;
			goto unlock;
		}

		if (!i915_vma_is_pinned(ce->ring->vma)) {
			GEM_TRACE_ERR("%s: Unpinned ring:%llx in pending[%zd]\n",
				      engine->name,
				      ce->timeline->fence_context,
				      port - execlists->pending);
			ok = false;
			goto unlock;
		}

unlock:
		spin_unlock_irqrestore(&rq->lock, flags);
		if (!ok)
			return false;
	}
	/*
	 * We can skip acquiring intel_runtime_pm_get() here as it was taken
	 * on our behalf by the request (see i915_gem_mark_busy()) and it will
	 * not be relinquished until the device is idle (see
	 * i915_gem_idle_work_handler()). As a precaution, we make sure
	 * that all ELSP are drained i.e. we have processed the CSB,
	 * before allowing ourselves to idle and calling intel_runtime_pm_put().
	 */
	GEM_BUG_ON(!intel_engine_pm_is_awake(engine));

	/*
	 * ELSQ note: the submit queue is not cleared after being submitted
	 * to the HW so we need to make sure we always clean it up. This is
	 * currently ensured by the fact that we always write the same number
	 * of elsq entries, keep this in mind before changing the loop below.
	 */
	for (n = execlists_num_ports(execlists); n--; ) {
		struct i915_request *rq = execlists->pending[n];

	/*
	 * We do not submit known completed requests. Therefore if the next
	 * request is already completed, we can pretend to merge it in
	 * with the previous context (and we will skip updating the ELSP
	 * and tracking). Thus hopefully keeping the ELSP full with active
	 * contexts, despite the best efforts of preempt-to-busy to confuse
	 * us.
	 */
	if (__i915_request_is_complete(next))
		return true;

	if (unlikely((i915_request_flags(prev) | i915_request_flags(next)) &
		     (BIT(I915_FENCE_FLAG_NOPREEMPT) |
		      BIT(I915_FENCE_FLAG_SENTINEL))))
		return false;

	if (!can_merge_ctx(prev->context, next->context))
		return false;

	if (!(rq->execution_mask & engine->mask))
		return false; /* We peeked too soon! */

	/*
	 * We track when the HW has completed saving the context image
	 * (i.e. when we have seen the final CS event switching out of
	 * the context) and must not overwrite the context image before
	 * then. This restricts us to only using the active engine
	 * while the previous virtualized request is inflight (so
	 * we reuse the register offsets). This is a very small
	 * hysteresis on the greedy selection algorithm.
	 */
	inflight = intel_context_inflight(&ve->context);
	if (inflight && inflight != engine)
		return false;

	GEM_BUG_ON(READ_ONCE(ve->context.inflight));

	if (!intel_engine_has_relative_mmio(engine))
		lrc_update_offsets(&ve->context, engine);

	/*
	 * Move the bound engine to the top of the list for
	 * future execution. We then kick this tasklet first
	 * before checking others, so that we preferentially
	 * reuse this set of bound registers.
	 */
	for (n = 1; n < ve->num_siblings; n++) {
		if (ve->siblings[n] == engine) {
			swap(ve->siblings[n], ve->siblings[0]);
			break;
		}
	}
}
	/*
	 * We want to move the interrupted request to the back of
	 * the round-robin list (i.e. its priority level), but
	 * in doing so, we must then move all requests that were in
	 * flight and were waiting for the interrupted request to
	 * be run after it again.
	 */
	do {
		struct i915_dependency *p;

		/* Leave semaphores spinning on the other engines */
		if (w->engine != rq->engine)
			continue;

		/* No waiter should start before its signaler */
		GEM_BUG_ON(i915_request_has_initial_breadcrumb(w) &&
			   __i915_request_has_started(w) &&
			   !__i915_request_is_complete(rq));
static bool
timeslice_yield(const struct intel_engine_execlists *el,
		const struct i915_request *rq)
{
	/*
	 * Once bitten, forever smitten!
	 *
	 * If the active context ever busy-waited on a semaphore,
	 * it will be treated as a hog until the end of its timeslice (i.e.
	 * until it is scheduled out and replaced by a new submission,
	 * possibly even its own lite-restore). The HW only sends an interrupt
	 * on the first miss, and we do not know whether that semaphore has
	 * been signaled, or even if it is now stuck on another semaphore.
	 * Play safe, yield if it might be stuck -- it will be given a fresh
	 * timeslice in the near future.
	 */
	return rq->context->lrc.ccid == READ_ONCE(el->yield);
}
	/* If not currently active, or about to switch, wait for next event */
	if (!rq || __i915_request_is_complete(rq))
		return false;

	/* We do not need to start the timeslice until after the ACK */
	if (READ_ONCE(engine->execlists.pending[0]))
		return false;

	/* If ELSP[1] is occupied, always check to see if worth slicing */
	if (!list_is_last_rcu(&rq->sched.link,
			      &engine->sched_engine->requests)) {
		ENGINE_TRACE(engine, "timeslice required for second inflight context\n");
		return true;
	}

	/* Otherwise, ELSP[0] is by itself, but may be waiting in the queue */
	if (!i915_sched_engine_is_empty(engine->sched_engine)) {
		ENGINE_TRACE(engine, "timeslice required for queue\n");
		return true;
	}

	if (!RB_EMPTY_ROOT(&engine->execlists.virtual.rb_root)) {
		ENGINE_TRACE(engine, "timeslice required for virtual\n");
		return true;
	}
	/* Disable the timer if there is nothing to switch to */
	duration = 0;
	if (needs_timeslice(engine, *el->active)) {
		/* Avoid continually prolonging an active timeslice */
		if (timer_active(&el->timer)) {
			/*
			 * If we just submitted a new ELSP after an old
			 * context, that context may have already consumed
			 * its timeslice, so recheck.
			 */
			if (!timer_pending(&el->timer))
				tasklet_hi_schedule(&engine->sched_engine->tasklet);
			return;
		}

	/* Only allow ourselves to force reset the currently active context */
	engine->execlists.preempt_target = rq;

	/* Force a fast reset for terminated contexts (ignoring sysfs!) */
	if (unlikely(intel_context_is_banned(rq->context) || bad_request(rq)))
		return INTEL_CONTEXT_BANNED_PREEMPT_TIMEOUT_MS;
	/*
	 * Hardware submission is through 2 ports. Conceptually each port
	 * has a (RING_START, RING_HEAD, RING_TAIL) tuple. RING_START is
	 * static for a context, and unique to each, so we only execute
	 * requests belonging to a single context from each ring. RING_HEAD
	 * is maintained by the CS in the context image, it marks the place
	 * where it got up to last time, and through RING_TAIL we tell the CS
	 * where we want to execute up to this time.
	 *
	 * In this list the requests are in order of execution. Consecutive
	 * requests from the same context are adjacent in the ringbuffer. We
	 * can combine these requests into a single RING_TAIL update:
	 *
	 *              RING_HEAD...req1...req2
	 *                                    ^- RING_TAIL
	 * since to execute req2 the CS must first execute req1.
	 *
	 * Our goal then is to point each port to the end of a consecutive
	 * sequence of requests as being the most optimal (fewest wake ups
	 * and context switches) submission.
	 */
	spin_lock(&sched_engine->lock);

	/*
	 * If the queue is higher priority than the last
	 * request in the currently active context, submit afresh.
	 * We will resubmit again afterwards in case we need to split
	 * the active context to interject the preemption request,
	 * i.e. we will retrigger preemption following the ack in case
	 * of trouble.
	 */
	active = execlists->active;
	while ((last = *active) && completed(last))
		active++;
	if (last) {
		if (need_preempt(engine, last)) {
			ENGINE_TRACE(engine,
				     "preempting last=%llx:%lld, prio=%d, hint=%d\n",
				     last->fence.context,
				     last->fence.seqno,
				     last->sched.attr.priority,
				     sched_engine->queue_priority_hint);
			record_preemption(execlists);

			/*
			 * Don't let the RING_HEAD advance past the breadcrumb
			 * as we unwind (and until we resubmit) so that we do
			 * not accidentally tell it to go backwards.
			 */
			ring_set_paused(engine, 1);

			/*
			 * Note that we have not stopped the GPU at this point,
			 * so we are unwinding the incomplete requests as they
			 * remain inflight and so by the time we do complete
			 * the preemption, some of the unwound requests may
			 * complete!
			 */
			__unwind_incomplete_requests(engine);

			/*
			 * Consume this timeslice; ensure we start a new one.
			 *
			 * The timeslice expired, and we will unwind the
			 * running contexts and recompute the next ELSP.
			 * If that submit will be the same pair of contexts
			 * (due to dependency ordering), we will skip the
			 * submission. If we don't cancel the timer now,
			 * we will see that the timer has expired and
			 * reschedule the tasklet; continually until the
			 * next context switch or other preemption event.
			 *
			 * Since we have decided to reschedule based on
			 * consumption of this timeslice, if we submit the
			 * same context again, grant it a full timeslice.
			 */
			cancel_timer(&execlists->timer);
			ring_set_paused(engine, 1);
			defer_active(engine);

			/*
			 * Unlike for preemption, if we rewind and continue
			 * executing the same context as previously active,
			 * the order of execution will remain the same and
			 * the tail will only advance. We do not need to
			 * force a full context restore, as a lite-restore
			 * is sufficient to resample the monotonic TAIL.
			 *
			 * If we switch to any other context, similarly we
			 * will not rewind TAIL of current context, and
			 * normal save/restore will preserve state and allow
			 * us to later continue executing the same request.
			 */
			last = NULL;
		} else {
			/*
			 * Otherwise if we already have a request pending
			 * for execution after the current one, we can
			 * just wait until the next CS event before
			 * queuing more. In either case we will force a
			 * lite-restore preemption event, but if we wait
			 * we hopefully coalesce several updates into a single
			 * submission.
			 */
			if (active[1]) {
				/*
				 * Even if ELSP[1] is occupied and not worthy
				 * of timeslices, our queue might be.
				 */
				spin_unlock(&sched_engine->lock);
				return;
			}
		}
	}
	/* XXX virtual is always taking precedence */
	while ((ve = first_virtual_engine(engine))) {
		struct i915_request *rq;

		spin_lock(&ve->base.sched_engine->lock);

		rq = ve->request;
		if (unlikely(!virtual_matches(ve, rq, engine)))
			goto unlock; /* lost the race to a sibling */

		if (unlikely(rq_prio(rq) < queue_prio(sched_engine))) {
			spin_unlock(&ve->base.sched_engine->lock);
			break;
		}

		if (last && !can_merge_rq(last, rq)) {
			spin_unlock(&ve->base.sched_engine->lock);
			spin_unlock(&engine->sched_engine->lock);
			return; /* leave this for another sibling */
		}

		if (__i915_request_submit(rq)) {
			/*
			 * Only after we confirm that we will submit
			 * this request (i.e. it has not already
			 * completed), do we want to update the context.
			 *
			 * This serves two purposes. It avoids
			 * unnecessary work if we are resubmitting an
			 * already completed request after timeslicing.
			 * But more importantly, it prevents us altering
			 * ve->siblings[] on an idle context, where
			 * we may be using ve->siblings[] in
			 * virtual_context_enter / virtual_context_exit.
			 */
			virtual_xfer_context(ve, engine);
			GEM_BUG_ON(ve->siblings[0] != engine);

		/*
		 * Hmm, we have a bunch of virtual engine requests,
		 * but the first one was already completed (thanks
		 * preempt-to-busy!). Keep looking at the veng queue
		 * until we have no more relevant requests (i.e.
		 * the normal submit queue has higher priority).
		 */
		if (submit)
			break;
	}
		/*
		 * Can we combine this request with the current port?
		 * It has to be the same context/ringbuffer and not
		 * have any exceptions (e.g. GVT saying never to
		 * combine contexts).
		 *
		 * If we can combine the requests, we can execute both
		 * by updating the RING_TAIL to point to the end of the
		 * second request, and so we never need to tell the
		 * hardware about the first.
		 */
		if (last && !can_merge_rq(last, rq)) {
			/*
			 * If we are on the second port and cannot
			 * combine this request with the last, then we
			 * are done.
			 */
			if (port == last_port)
				goto done;

			/*
			 * We must not populate both ELSP[] with the
			 * same LRCA, i.e. we must submit 2 different
			 * contexts if we submit 2 ELSP.
			 */
			if (last->context == rq->context)
				goto done;

			if (i915_request_has_sentinel(last))
				goto done;

			/*
			 * We avoid submitting virtual requests into
			 * the secondary ports so that we can migrate
			 * the request immediately to another engine
			 * rather than wait for the primary request.
			 */
			if (rq->execution_mask != engine->mask)
				goto done;

			/*
			 * If GVT overrides us we only ever submit
			 * port[0], leaving port[1] empty. Note that we
			 * also have to be careful that we don't queue
			 * the same context (even though a different
			 * request) to the second port.
			 */
			if (ctx_single_port_submission(last->context) ||
			    ctx_single_port_submission(rq->context))
				goto done;

			merge = false;
		}

		if (__i915_request_submit(rq)) {
			if (!merge) {
				*port++ = i915_request_get(last);
				last = NULL;
			}
	/*
	 * Here be a bit of magic! Or sleight-of-hand, whichever you prefer.
	 *
	 * We choose the priority hint such that if we add a request of greater
	 * priority than this, we kick the submission tasklet to decide on
	 * the right order of submitting the requests to hardware. We must
	 * also be prepared to reorder requests as they are in-flight on the
	 * HW. We derive the priority hint then as the first "hole" in
	 * the HW submission ports and if there are no available slots,
	 * the priority of the lowest executing request, i.e. last.
	 *
	 * When we do receive a higher priority request ready to run from the
	 * user, see queue_request(), the priority hint is bumped to that
	 * request triggering preemption on the next dequeue (or subsequent
	 * interrupt for secondary ports).
	 */
	sched_engine->queue_priority_hint = queue_prio(sched_engine);
	i915_sched_engine_reset_on_empty(sched_engine);
	spin_unlock(&sched_engine->lock);

	/*
	 * We can skip poking the HW if we ended up with exactly the same set
	 * of requests as currently running, e.g. trying to timeslice a pair
	 * of ordered contexts.
	 */
	if (submit &&
	    memcmp(active,
		   execlists->pending,
		   (port - execlists->pending) * sizeof(*port))) {
		*port = NULL;
		while (port-- != execlists->pending)
			execlists_schedule_in(*port, port - execlists->pending);
static void
copy_ports(struct i915_request **dst, struct i915_request **src, int count)
{
	/* A memcpy_p() would be very useful here! */
	while (count--)
		WRITE_ONCE(*dst++, *src++); /* avoid write tearing */
}
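The point of the WRITE_ONCE() in copy_ports() is to guarantee one whole store per pointer, so a lockless reader never sees a torn value. A userspace sketch of the same idea, using a volatile-qualified lvalue as a stand-in for the kernel's WRITE_ONCE macro (names are illustrative):

```c
/*
 * Sketch of the idea behind copy_ports(): writing each element through
 * a volatile-qualified lvalue forces the compiler to emit exactly one
 * store per pointer, rather than, say, splitting or re-reading it, so a
 * concurrent reader can never observe a half-written value.
 */
static void sketch_copy_ports(void **dst, void **src, int count)
{
	while (count--)
		*(void * volatile *)dst++ = *src++;
}
```

A plain `memcpy()` would give no such per-element guarantee, which is why the comment above wishes for a pointer-sized `memcpy_p()`.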
	/* Mark the end of active before we overwrite *active */
	for (port = xchg(&execlists->active, execlists->pending); *port; port++)
		*inactive++ = *port;
	clear_ports(execlists->inflight, ARRAY_SIZE(execlists->inflight));

	smp_wmb(); /* complete the seqlock for execlists_active() */
	WRITE_ONCE(execlists->active, execlists->inflight);

	/* Having cancelled all outstanding process_csb(), stop their timers */
	GEM_BUG_ON(execlists->pending[0]);
	cancel_timer(&execlists->timer);
	cancel_timer(&execlists->preempt);

	return inactive;
}
/*
 * Starting with Gen12, the status has a new format:
 *
 *     bit  0:     switched to new queue
 *     bit  1:     reserved
 *     bit  2:     semaphore wait mode (poll or signal), only valid when
 *                 switch detail is set to "wait on semaphore"
 *     bits 3-5:   engine class
 *     bits 6-11:  engine instance
 *     bits 12-14: reserved
 *     bits 15-25: sw context id of the lrc the GT switched to
 *     bits 26-31: sw counter of the lrc the GT switched to
 *     bits 32-35: context switch detail
 *                  - 0: ctx complete
 *                  - 1: wait on sync flip
 *                  - 2: wait on vblank
 *                  - 3: wait on scanline
 *                  - 4: wait on semaphore
 *                  - 5: context preempted (not on SEMAPHORE_WAIT or
 *                       WAIT_FOR_EVENT)
 *     bit  36:    reserved
 *     bits 37-43: wait detail (for switch detail 1 to 4)
 *     bits 44-46: reserved
 *     bits 47-57: sw context id of the lrc the GT switched away from
 *     bits 58-63: sw counter of the lrc the GT switched away from
 *
 * Xe_HP csb shuffles things around compared to TGL:
 *
 *     bits 0-3:   context switch detail (same possible values as TGL)
 *     bits 4-9:   engine instance
 *     bits 10-25: sw context id of the lrc the GT switched to
 *     bits 26-31: sw counter of the lrc the GT switched to
 *     bit  32:    semaphore wait mode (poll or signal), only valid when
 *                 switch detail is set to "wait on semaphore"
 *     bit  33:    switched to new queue
 *     bits 34-41: wait detail (for switch detail 1 to 4)
 *     bits 42-57: sw context id of the lrc the GT switched away from
 *     bits 58-63: sw counter of the lrc the GT switched away from
 */
static inline bool
__gen12_csb_parse(bool ctx_to_valid, bool ctx_away_valid, bool new_queue,
u8 switch_detail)
{
	/*
	 * The context switch detail is not guaranteed to be 5 when a preemption
	 * occurs, so we can't just check for that. The check below works for
	 * all the cases we care about, including preemptions of WAIT
	 * instructions and lite-restore. Preempt-to-idle via the CTRL register
	 * would require some extra handling, but we don't support that.
	 */
	if (!ctx_away_valid || new_queue) {
		GEM_BUG_ON(!ctx_to_valid);
		return true;
	}
}
	/*
	 * switch detail = 5 is covered by the case above and we do not expect a
	 * context switch on an unsuccessful wait instruction since we always
	 * use polling mode.
	 */
	GEM_BUG_ON(switch_detail);
	return false;
}
	/*
	 * Reading from the HWSP has one particular advantage: we can detect
	 * a stale entry. Since the write into HWSP is broken, we have no reason
	 * to trust the HW at all, the mmio entry may equally be unordered, so
	 * we prefer the path that is self-checking and as a last resort,
	 * return the mmio value.
	 *
	 * tgl,dg1:HSDES#22011327657
	 */
	preempt_disable();
	if (wait_for_atomic_us((entry = READ_ONCE(*csb)) != -1, 10)) {
		int idx = csb - engine->execlists.csb_status;
		int status;

		status = GEN8_EXECLISTS_STATUS_BUF;
		if (idx >= 6) {
status = GEN11_EXECLISTS_STATUS_BUF2;
idx -= 6;
}
status += sizeof(u64) * idx;
	/*
	 * Unfortunately, the GPU does not always serialise its write
	 * of the CSB entries before its write of the CSB pointer, at least
	 * from the perspective of the CPU, using what is known as a Global
	 * Observation Point. We may read a new CSB tail pointer, but then
	 * read the stale CSB entries, causing us to misinterpret the
	 * context-switch events, and eventually declare the GPU hung.
	 *
	 * icl:HSDES#1806554093
	 * tgl:HSDES#22011248461
	 */
	if (unlikely(entry == -1))
		entry = wa_csb_read(engine, csb);
entry = wa_csb_read(engine, csb);
/* Consume this entry so that we can spot its future reuse. */
WRITE_ONCE(*csb, -1);
	/* ELSP is an implicit wmb() before the GPU wraps and overwrites csb */
	return entry;
}
static void new_timeslice(struct intel_engine_execlists *el)
{
	/* By cancelling, we will start afresh in start_timeslice() */
cancel_timer(&el->timer);
}
	/*
	 * As we modify our execlists state tracking we require exclusive
	 * access. Either we are inside the tasklet, or the tasklet is disabled
	 * and we assume that is only inside the reset paths and so serialised.
	 */
GEM_BUG_ON(!tasklet_is_locked(&engine->sched_engine->tasklet) &&
!reset_in_progress(engine));
	/*
	 * Note that csb_write, csb_status may be either in HWSP or mmio.
	 * When reading from the csb_write mmio register, we have to be
	 * careful to only use the GEN8_CSB_WRITE_PTR portion, which is
	 * the low 4 bits. As it happens we know the next 4 bits are always
	 * zero and so we can simply mask off the low u8 of the register
	 * and treat it identically to reading from the HWSP (without having
	 * to use explicit shifting and masking, and probably bifurcating
	 * the code to handle the legacy mmio read).
	 */
head = execlists->csb_head;
	tail = READ_ONCE(*execlists->csb_write);
	if (unlikely(head == tail))
		return inactive;
	/*
	 * We will consume all events from HW, or at least pretend to.
	 *
	 * The sequence of events from the HW is deterministic, and derived
	 * from our writes to the ELSP, with a smidgen of variability for
	 * the arrival of the asynchronous requests wrt the inflight
	 * execution. If the HW sends an event that does not correspond with
	 * the one we are expecting, we have to abandon all hope as we lose
	 * all tracking of what the engine is actually executing. We will
	 * only detect we are out of sequence with the HW when we get an
	 * 'impossible' event because we have already drained our own
	 * preemption/promotion queue. If this occurs, we know that we likely
	 * lost track of execution earlier and must unwind and restart; the
	 * simplest way is to stop processing the event queue and force the
	 * engine to reset.
	 */
execlists->csb_head = tail;
ENGINE_TRACE(engine, "cs-irq head=%d, tail=%d\n", head, tail);
	/*
	 * Hopefully paired with a wmb() in HW!
	 *
	 * We must complete the read of the write pointer before any reads
	 * from the CSB, so that we do not see stale values. Without an rmb
	 * (lfence) the HW may speculatively perform the CSB[] reads *before*
	 * we perform the READ_ONCE(*csb_write).
	 */
rmb();
/* Remember who was last running under the timer */
prev = inactive;
*prev = NULL;
	do {
		bool promote;
		u64 csb;
if (++head == num_entries)
head = 0;
		/*
		 * We are flying near dragons again.
		 *
		 * We hold a reference to the request in execlist_port[]
		 * but no more than that. We are operating in softirq
		 * context and so cannot hold any mutex or sleep. That
		 * prevents us stopping the requests we are processing
		 * in port[] from being retired simultaneously (the
		 * breadcrumb will be complete before we see the
		 * context-switch). As we only hold the reference to the
		 * request, any pointer chasing underneath the request
		 * is subject to a potential use-after-free. Thus we
		 * store all of the bookkeeping within port[] as
		 * required, and avoid using unguarded pointers beneath
		 * request itself. The same applies to the atomic
		 * status notifier.
		 */
/* port0 completed, advanced to port1 */
trace_ports(execlists, "completed", execlists->active);
		/*
		 * We rely on the hardware being strongly
		 * ordered, that the breadcrumb write is
		 * coherent (visible from the CPU) before the
		 * user interrupt is processed. One might assume
		 * that the breadcrumb write being before the
		 * user interrupt and the CS event for the context
		 * switch would therefore be before the CS event
		 * itself...
		 */
		if (GEM_SHOW_DEBUG() &&
		    !__i915_request_is_complete(*execlists->active)) {
			struct i915_request *rq = *execlists->active;
			const u32 *regs __maybe_unused =
				rq->context->lrc_reg_state;
		}
	} while (head != tail);

	/*
	 * Gen11 has proven to fail wrt global observation point between
	 * entry and tail update, failing on the ordering and thus
	 * we see an old entry in the context status buffer.
	 *
	 * Forcibly evict out entries for the next gpu csb update,
	 * to increase the odds that we get fresh entries with non
	 * working hardware. The cost for doing so comes out mostly in
	 * the wash as hardware, working or not, will need to do the
	 * invalidation before.
	 */
	drm_clflush_virt_range(&buf[0], num_entries * sizeof(buf[0]));
	/*
	 * We assume that any event reflects a change in context flow
	 * and merits a fresh timeslice. We reinstall the timer after
	 * inspecting the queue to see if we need to resubmit.
	 */
	if (*prev != *execlists->active) { /* elide lite-restores */
		struct intel_context *prev_ce = NULL, *active_ce = NULL;
		/*
		 * Note the inherent discrepancy between the HW runtime,
		 * recorded as part of the context switch, and the CPU
		 * adjustment for active contexts. We have to hope that
		 * the delay in processing the CS event is very small
		 * and consistent. It works to our advantage to have
		 * the CPU adjustment _undershoot_ (i.e. start later than)
		 * the CS timestamp so we never overreport the runtime
		 * and correct ourselves later when updating from HW.
		 */
		if (*prev)
			prev_ce = (*prev)->context;
		if (*execlists->active)
			active_ce = (*execlists->active)->context;

		if (prev_ce != active_ce) {
			if (prev_ce)
				lrc_runtime_stop(prev_ce);
			if (active_ce)
				lrc_runtime_start(active_ce);
}
new_timeslice(execlists);
}
return inactive;
}