Execution Queue

An execution queue is an interface to the HW execution context. The user creates an execution queue, submits GPU jobs through that queue, and finally destroys it.

Execution queues can also be created by XeKMD itself for driver-internal operations such as object migration.

An execution queue is associated with a specified HW engine or a group of engines (belonging to the same tile and engine class) and any GPU job submitted on the queue will be run on one of these engines.

An execution queue is tied to an address space (VM). It holds a reference to the associated VM and to the underlying Logical Ring Context(s) (LRCs) until the queue is destroyed.

The execution queue sits on top of the submission backend. It opaquely handles whichever backend the platform uses (GuC or Execlist), as well as the ring operations that the different engine classes support.
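
From userspace, execution queues are created and destroyed with the DRM_XE_EXEC_QUEUE_CREATE and DRM_XE_EXEC_QUEUE_DESTROY ioctls. Below is a minimal sketch, assuming the uAPI structures from include/uapi/drm/xe_drm.h, an open Xe device fd, and an already-created VM (vm_id); error handling is reduced to the essentials.

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/xe_drm.h>

/* Sketch: create an exec queue on copy engine 0 of GT 0, then destroy it. */
static int example_queue_lifecycle(int fd, uint32_t vm_id)
{
    struct drm_xe_engine_class_instance eci = {
        .engine_class = DRM_XE_ENGINE_CLASS_COPY,
        .engine_instance = 0,
        .gt_id = 0,
    };
    struct drm_xe_exec_queue_create create = {
        .width = 1,          /* one batch buffer per exec */
        .num_placements = 1, /* a single valid engine placement */
        .vm_id = vm_id,      /* the queue is tied to this address space */
        .instances = (uintptr_t)&eci,
    };
    struct drm_xe_exec_queue_destroy destroy = {};
    int err;

    err = ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, &create);
    if (err)
        return err;

    /* ... submit GPU jobs with DRM_IOCTL_XE_EXEC ... */

    destroy.exec_queue_id = create.exec_queue_id;
    return ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_DESTROY, &destroy);
}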

Internal API

struct xe_exec_queue

Execution queue

Definition:

struct xe_exec_queue {
    struct xe_file *xef;
    struct xe_gt *gt;
    struct xe_hw_engine *hwe;
    struct kref refcount;
    struct xe_vm *vm;
    enum xe_engine_class class;
    u32 logical_mask;
    char name[MAX_FENCE_NAME_LEN];
    u16 width;
    u16 msix_vec;
    struct xe_hw_fence_irq *fence_irq;
    struct dma_fence *last_fence;
#define EXEC_QUEUE_FLAG_KERNEL                  BIT(0)
#define EXEC_QUEUE_FLAG_PERMANENT               BIT(1)
#define EXEC_QUEUE_FLAG_VM                      BIT(2)
#define EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD       BIT(3)
#define EXEC_QUEUE_FLAG_HIGH_PRIORITY           BIT(4)
#define EXEC_QUEUE_FLAG_LOW_LATENCY             BIT(5)
#define EXEC_QUEUE_FLAG_MIGRATE                 BIT(6)
    unsigned long flags;
    union {
        struct list_head multi_gt_list;
        struct list_head multi_gt_link;
    };
    union {
        struct xe_execlist_exec_queue *execlist;
        struct xe_guc_exec_queue *guc;
    };
    struct {
        u32 timeslice_us;
        u32 preempt_timeout_us;
        u32 job_timeout_ms;
        enum xe_exec_queue_priority priority;
    } sched_props;
    struct {
        struct dma_fence *pfence;
        u64 context;
        u32 seqno;
        struct list_head link;
    } lr;
#define XE_EXEC_QUEUE_TLB_INVAL_PRIMARY_GT      0
#define XE_EXEC_QUEUE_TLB_INVAL_MEDIA_GT        1
#define XE_EXEC_QUEUE_TLB_INVAL_COUNT           (XE_EXEC_QUEUE_TLB_INVAL_MEDIA_GT + 1)
    struct {
        struct xe_dep_scheduler *dep_scheduler;
    } tlb_inval[XE_EXEC_QUEUE_TLB_INVAL_COUNT];
    struct {
        u8 type;
        struct list_head link;
    } pxp;
    const struct xe_exec_queue_ops *ops;
    const struct xe_ring_ops *ring_ops;
    struct drm_sched_entity *entity;
    u64 tlb_flush_seqno;
    struct list_head hw_engine_group_link;
    struct xe_lrc *lrc[];
};

Members

xef

Back pointer to the xe file if this is a user-created exec queue

gt

GT structure this exec queue can submit to

hwe

A hardware engine of the same class. May (physical engine) or may not (virtual engine) be the engine where jobs actually end up running. Should never really be used for submissions.

refcount

ref count of this exec queue

vm

VM (address space) for this exec queue

class

class of this exec queue

logical_mask

logical mask of where job submitted to exec queue can run

name

name of this exec queue

width

width (number of batch buffers submitted per exec) of this exec queue

msix_vec

MSI-X vector (for platforms that support it)

fence_irq

fence IRQ used to signal job completion

last_fence

last fence on the exec queue; protected by vm->lock in write mode for a bind exec queue, and by the dma-resv lock for a non-bind exec queue

flags

flags for this exec queue; should be statically set up, aside from the ban bit

{unnamed_union}

anonymous

multi_gt_list

list head for VM bind engines if multi-GT

multi_gt_link

link for VM bind engines if multi-GT

{unnamed_union}

anonymous

execlist

execlist backend specific state for exec queue

guc

GuC backend specific state for exec queue

sched_props

scheduling properties

lr

long-running exec queue state

tlb_inval

TLB invalidation exec queue state

tlb_inval.dep_scheduler

The TLB invalidation dependency scheduler

pxp

PXP info tracking

ops

submission backend exec queue operations

ring_ops

ring operations for this exec queue

entity

DRM sched entity for this exec queue (1 to 1 relationship)

tlb_flush_seqno

The seqno of the last rebind TLB flush performed. Protected by the vm's resv. Unused if vm == NULL.

hw_engine_group_link

link into exec queues in the same hw engine group

lrc

logical ring context for this exec queue

Description

Contains all state necessary for submissions. Can either be a user object or a kernel object.
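
The refcount member follows the usual kref pattern; a minimal in-kernel sketch, assuming the xe_exec_queue_get()/xe_exec_queue_put() helpers (treat the exact helper names as an assumption):

/* Sketch: hold a reference on an exec queue across some asynchronous use. */
static void example_use_queue(struct xe_exec_queue *q)
{
    xe_exec_queue_get(q);    /* grabs q->refcount */

    /* The flag macros carry no trailing semicolons, so they compose
     * inside expressions:
     */
    if (q->flags & EXEC_QUEUE_FLAG_KERNEL)
        pr_debug("kernel-internal exec queue\n");

    xe_exec_queue_put(q);    /* may free the queue on the last reference */
}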

struct xe_exec_queue_ops

Submission backend exec queue operations

Definition:

struct xe_exec_queue_ops {
    int (*init)(struct xe_exec_queue *q);
    void (*kill)(struct xe_exec_queue *q);
    void (*fini)(struct xe_exec_queue *q);
    void (*destroy)(struct xe_exec_queue *q);
    int (*set_priority)(struct xe_exec_queue *q, enum xe_exec_queue_priority priority);
    int (*set_timeslice)(struct xe_exec_queue *q, u32 timeslice_us);
    int (*set_preempt_timeout)(struct xe_exec_queue *q, u32 preempt_timeout_us);
    int (*suspend)(struct xe_exec_queue *q);
    int (*suspend_wait)(struct xe_exec_queue *q);
    void (*resume)(struct xe_exec_queue *q);
    bool (*reset_status)(struct xe_exec_queue *q);
};

Members

init

Initialize exec queue for submission backend

kill

Kill inflight submissions for backend

fini

Undoes the init() for submission backend

destroy

Destroy exec queue for submission backend. The backend function must call xe_exec_queue_fini() (which will in turn call the fini() backend function) to ensure the queue is properly cleaned up.

set_priority

Set priority for exec queue

set_timeslice

Set timeslice for exec queue

set_preempt_timeout

Set preemption timeout for exec queue

suspend

Suspend the exec queue from executing; may be called multiple times in a row before resume, with the caveat that suspend_wait must return before suspend is called again.

suspend_wait

Wait for an exec queue to suspend executing; should be called after suspend. This is in the dma-fencing path and thus must return within a reasonable amount of time. A -ETIME return indicates an error waiting for the suspend, resulting in the associated VM being killed.

resume

Resume exec queue execution; the exec queue must be in a suspended state, and the dma fence returned from the most recent suspend call must be signalled when this function is called.

reset_status

Check exec queue reset status
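
Putting the suspend hooks together, a sketch of the calling contract described above (the wrapper function and error paths are illustrative, not taken from the driver):

/* Sketch: suspend a queue, wait for the suspend to take effect, resume. */
static int example_suspend_resume(struct xe_exec_queue *q)
{
    int err;

    err = q->ops->suspend(q);
    if (err)
        return err;

    /* Must return before suspend() may be called again; runs in the
     * dma-fencing path, so the wait is bounded.
     */
    err = q->ops->suspend_wait(q);
    if (err)
        return err;    /* -ETIME: the associated VM gets killed */

    q->ops->resume(q);
    return 0;
}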

struct xe_exec_queue *xe_exec_queue_create_bind(struct xe_device *xe, struct xe_tile *tile, u32 flags, u64 extensions)

Create bind exec queue.

Parameters

struct xe_device *xe

Xe device.

struct xe_tile *tile

tile which bind exec queue belongs to.

u32 flags

exec queue creation flags

u64 extensions

exec queue creation extensions

Description

Normalize bind exec queue creation. A bind exec queue is tied to the migration VM for access to the physical memory required for page table programming. On faulting devices the reserved copy engine instance must be used to avoid deadlocking (user binds cannot get stuck behind faults, as kernel binds which resolve faults depend on user binds). On non-faulting devices any copy engine can be used.

Returns exec queue on success, ERR_PTR on failure
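
A hedged in-kernel usage sketch (the EXEC_QUEUE_FLAG_VM flag and the surrounding function are illustrative):

/* Sketch: create a bind exec queue for a tile and propagate errors. */
static struct xe_exec_queue *example_bind_queue(struct xe_device *xe,
                                                struct xe_tile *tile)
{
    struct xe_exec_queue *q;

    q = xe_exec_queue_create_bind(xe, tile, EXEC_QUEUE_FLAG_VM, 0);
    if (IS_ERR(q))
        return q;    /* ERR_PTR on failure, per the contract above */

    /* ... use q for page table programming ... */
    return q;
}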

struct xe_lrc *xe_exec_queue_lrc(struct xe_exec_queue *q)

Get the LRC from exec queue.

Parameters

struct xe_exec_queue *q

The exec_queue.

Description

Retrieves the primary LRC for the exec queue. Note that this function returns only the first LRC instance, even when multiple parallel LRCs are configured.

Return

Pointer to LRC on success, error on failure

bool xe_exec_queue_is_lr(struct xe_exec_queue *q)

Whether an exec_queue is long-running

Parameters

struct xe_exec_queue *q

The exec_queue

Return

True if the exec_queue is long-running, false otherwise.

bool xe_exec_queue_ring_full(struct xe_exec_queue *q)

Whether an exec_queue’s ring is full

Parameters

struct xe_exec_queue *q

The exec_queue

Return

True if the exec_queue’s ring is full, false otherwise.

bool xe_exec_queue_is_idle(struct xe_exec_queue *q)

Whether an exec_queue is idle.

Parameters

struct xe_exec_queue *q

The exec_queue

Description

FIXME: Need to determine what to use as the short-lived timeline lock for the exec_queues, so that the return value of this function becomes more than just an advisory snapshot in time. The timeline lock must protect the seqno from racing submissions on the same exec_queue. Typically vm->resv, but user-created timeline locks use the migrate vm and never grab the migrate vm->resv, so we have a race there.

Return

True if the exec_queue is idle, false otherwise.

void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)

Update run time in ticks for this exec queue from hw

Parameters

struct xe_exec_queue *q

The exec queue

Description

Update the timestamp saved by HW for this exec queue and save the run ticks calculated using the delta from the last update.

void xe_exec_queue_kill(struct xe_exec_queue *q)

permanently stop all execution from an exec queue

Parameters

struct xe_exec_queue *q

The exec queue

Description

This function permanently stops all activity on an exec queue. If the queue is actively executing on the HW, it will be kicked off the engine; any pending jobs are discarded and all future submissions are rejected. This function is safe to call multiple times.

void xe_exec_queue_last_fence_put(struct xe_exec_queue *q, struct xe_vm *vm)

Drop ref to last fence

Parameters

struct xe_exec_queue *q

The exec queue

struct xe_vm *vm

The VM the engine does a bind or exec for

void xe_exec_queue_last_fence_put_unlocked(struct xe_exec_queue *q)

Drop ref to last fence unlocked

Parameters

struct xe_exec_queue *q

The exec queue

Description

Only safe to be called from xe_exec_queue_destroy().

struct dma_fence *xe_exec_queue_last_fence_get(struct xe_exec_queue *q, struct xe_vm *vm)

Get last fence

Parameters

struct xe_exec_queue *q

The exec queue

struct xe_vm *vm

The VM the engine does a bind or exec for

Description

Get last fence, takes a ref

Return

last fence if not signaled, dma fence stub if signaled

struct dma_fence *xe_exec_queue_last_fence_get_for_resume(struct xe_exec_queue *q, struct xe_vm *vm)

Get last fence

Parameters

struct xe_exec_queue *q

The exec queue

struct xe_vm *vm

The VM the engine does a bind or exec for

Description

Get last fence, takes a ref. Only safe to be called in the context of resuming the hw engine group’s long-running exec queue, when the group semaphore is held.

Return

last fence if not signaled, dma fence stub if signaled

void xe_exec_queue_last_fence_set(struct xe_exec_queue *q, struct xe_vm *vm, struct dma_fence *fence)

Set last fence

Parameters

struct xe_exec_queue *q

The exec queue

struct xe_vm *vm

The VM the engine does a bind or exec for

struct dma_fence *fence

The fence

Description

Set the last fence for the engine. Takes a reference on the fence; xe_exec_queue_last_fence_put() should be called when closing the engine.
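
The last-fence helpers are meant to be used together: fetch the current last fence as a dependency for a new job, install the new job's fence, and drop the reference when the engine is closed. A minimal sketch, assuming the documented lock for last_fence (vm->lock or the dma-resv lock) is held and that the actual dependency tracking happens elsewhere:

/* Sketch: chain a new job behind the queue's current last fence. */
static void example_chain_job(struct xe_exec_queue *q, struct xe_vm *vm,
                              struct dma_fence *new_fence)
{
    struct dma_fence *prev;

    prev = xe_exec_queue_last_fence_get(q, vm);     /* takes a ref */
    /* ... make new_fence's job wait on prev ... */
    dma_fence_put(prev);

    xe_exec_queue_last_fence_set(q, vm, new_fence); /* takes a ref */
    /* on engine close: xe_exec_queue_last_fence_put(q, vm) */
}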

int xe_exec_queue_last_fence_test_dep(struct xe_exec_queue *q, struct xe_vm *vm)

Test last fence dependency of queue

Parameters

struct xe_exec_queue *q

The exec queue

struct xe_vm *vm

The VM the engine does a bind or exec for

Return

-ETIME if there exists an unsignalled last fence dependency, zero otherwise.

int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch)

Re-compute GGTT references within all LRCs of a queue.

Parameters

struct xe_exec_queue *q

the xe_exec_queue struct instance containing target LRCs

void *scratch

scratch buffer to be used as temporary storage

Return

zero on success, negative error code on failure