Userland interfaces

The DRM core exports several interfaces to applications, generally intended to be used through corresponding libdrm wrapper functions. In addition, drivers export device-specific interfaces for use by userspace drivers & device-aware applications through ioctls and sysfs files.

External interfaces include: memory mapping, context management, DMA operations, AGP management, vblank control, fence management, memory management, and output management.

Cover generic ioctls and sysfs layout here. We only need high-level info, since man pages should cover the rest.

libdrm Device Lookup

BEWARE THE DRAGONS! MIND THE TRAPDOORS!

In an attempt to warn anyone else who’s trying to figure out what’s going on here, I’ll try to summarize the story. First things first, let’s clear up the names, because the kernel internals, libdrm and the ioctls are all named differently:

  • The GET_UNIQUE ioctl, implemented by drm_getunique, is wrapped up in libdrm through the drmGetBusid function.
  • The libdrm drmSetBusid function is backed by the SET_UNIQUE ioctl. All that code has been stubbed out in the kernel with drm_invalid_op().
  • The internal set_busid kernel functions and driver callbacks are exclusively used by the SET_VERSION ioctl, because only drm 1.0 (which has been disabled) allowed userspace to set the busid through the above ioctl.
  • Other ioctls and functions involved are named consistently.

For anyone wondering what the difference between drm 1.1 and 1.4 is: correct handling of pci domains in the busid on ppc. Doing this correctly was only implemented in libdrm in 2010, hence this can't be removed yet. No one knows what's special about drm 1.2 and 1.3.

Now for the actual horror story of how device lookup in drm works. Broadly, there are two different ways: either by busid, or by device driver name.

Opening by busid is fairly simple (a sketch follows the steps):

  1. First call SET_VERSION to make sure pci domains are handled properly. As a side-effect this fills out the unique name in the master structure.
  2. Call GET_UNIQUE to read out the unique name from the master structure, which matches the busid thanks to step 1. If it doesn’t, proceed to try the next device node.
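
A minimal sketch of this sequence, using the libdrm wrappers named above (error handling trimmed; the node path and the busid string to match are assumptions of the example):

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <xf86drm.h>

  /* Sketch: open `path` and keep it only if its busid matches `wanted_busid`. */
  static int open_if_busid_matches(const char *path, const char *wanted_busid)
  {
          drmSetVersion sv;
          char *busid;
          int match = 0;
          int fd = open(path, O_RDWR);

          if (fd < 0)
                  return -1;

          /* Step 1: ask for interface 1.4 so pci domains are handled correctly;
           * as a side effect this fills out the unique name in the master. */
          sv.drm_di_major = 1;
          sv.drm_di_minor = 4;
          sv.drm_dd_major = -1;   /* don't care about the driver version */
          sv.drm_dd_minor = -1;
          drmSetInterfaceVersion(fd, &sv);

          /* Step 2: read the unique name back and compare against the busid. */
          busid = drmGetBusid(fd);
          if (busid) {
                  match = strcmp(busid, wanted_busid) == 0;
                  drmFreeBusid(busid);
          }
          if (match)
                  return fd;      /* match: keep this node open */

          close(fd);
          return -1;              /* no match: caller tries the next node */
  }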

Opening by name is slightly different (again, a sketch follows the steps):

  1. Directly call VERSION to get the version and to match against the driver name returned by that ioctl. Note that SET_VERSION is not called, which means the unique name for the master node just opened is _not_ filled out. This is despite the fact that with current drm, device nodes are always bound to one device and can't be runtime-assigned like with drm 1.0.
  2. Match driver name. If it mismatches, proceed to the next device node.
  3. Call GET_UNIQUE and check whether the unique name has length zero (by checking that the first byte in the string is 0). If that's not the case, libdrm skips this node and proceeds to the next device node. This is probably just copypasta from drm 1.0 times, where a set unique name meant that the driver was already in use, but that's just conjecture.
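
A sketch of the name-based probe, again with the libdrm wrappers (the node path and driver name are placeholders):

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <xf86drm.h>

  /* Sketch: open `path` and keep it only if the driver name matches. */
  static int open_if_name_matches(const char *path, const char *wanted_name)
  {
          drmVersionPtr ver;
          char *busid;
          int match = 0;
          int fd = open(path, O_RDWR);

          if (fd < 0)
                  return -1;

          /* Steps 1+2: VERSION only -- SET_VERSION is never called, so the
           * unique name of this master stays empty.  Compare driver names. */
          ver = drmGetVersion(fd);
          if (ver) {
                  match = strcmp(ver->name, wanted_name) == 0;
                  drmFreeVersion(ver);
          }

          /* Step 3: the unique name must have length zero, otherwise the node
           * is skipped (historical check inherited from drm 1.0 times). */
          if (match) {
                  busid = drmGetBusid(fd);
                  if (busid) {
                          if (busid[0] != '\0')
                                  match = 0;
                          drmFreeBusid(busid);
                  }
          }

          if (match)
                  return fd;
          close(fd);
          return -1;              /* mismatch: caller tries the next node */
  }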

Long story short: to keep the open-by-name logic working, GET_UNIQUE must _not_ return a unique string when SET_VERSION hasn't been called yet, otherwise libdrm breaks. This holds even though that unique string can't ever change and is totally irrelevant for actually opening the device, because runtime-assignable device instances were only supported in drm 1.0, which is long dead. But the libdrm code in drmOpenByName somehow survived, hence this can't be broken.

Primary Nodes, DRM Master and Authentication

struct drm_master is used to track groups of clients with open primary/legacy device nodes. For every struct drm_file which has at least once successfully become the device master (either through the SET_MASTER IOCTL, or implicitly by opening the primary device node when no one else was the current master at that time) there exists one drm_master. This is noted in the is_master member of drm_file. All other clients just have a pointer to the drm_master they are associated with.

In addition, only one drm_master can be the current master for a drm_device. It can be switched through the DROP_MASTER and SET_MASTER IOCTLs, or implicitly through closing/opening the primary device node. See also drm_is_current_master().

Clients can authenticate against the current master (if it matches their own) using the GETMAGIC and AUTHMAGIC IOCTLs. Together with exchanging masters, this allows controlled access to the device for an entire group of mutually trusted clients.
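
For illustration, the userspace side of this handshake looks roughly like the sketch below when done through the libdrm wrappers (both file descriptors are assumed to be open on the same primary node):

  #include <xf86drm.h>

  /* Sketch: `master_fd` belongs to the current master (e.g. the compositor),
   * `client_fd` to an unprivileged client that wants device access. */
  static int authenticate_client(int master_fd, int client_fd)
  {
          drm_magic_t magic;

          /* The client asks the kernel for a one-time token (GETMAGIC). */
          if (drmGetMagic(client_fd, &magic))
                  return -1;

          /* The token is handed to the master out of band (e.g. over a display
           * server socket), which then blesses it (AUTHMAGIC). */
          if (drmAuthMagic(master_fd, magic))
                  return -1;

          return 0;       /* client_fd is now authenticated */
  }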

bool drm_is_current_master(struct drm_file * fpriv)

checks whether fpriv is the current master

Parameters

struct drm_file * fpriv
DRM file private

Description

Checks whether fpriv is the current master on its device. This decides whether a client is allowed to run DRM_MASTER IOCTLs.

Most of the modern IOCTLs which require DRM_MASTER are for kernel modesetting - the current master is assumed to own the non-shareable display hardware.
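
As a hypothetical illustration (the ioctl and its names are made up for this sketch), a driver could gate an ioctl on masterhood by hand as shown below; most drivers instead set the DRM_MASTER flag in their ioctl table and let the core perform the check:

  /* Hypothetical driver ioctl, not a real entry point. */
  static int foo_set_scanout_ioctl(struct drm_device *dev, void *data,
                                   struct drm_file *file_priv)
  {
          if (!drm_is_current_master(file_priv))
                  return -EACCES;

          /* ... program the (non-shareable) display hardware ... */
          return 0;
  }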

struct drm_master * drm_master_get(struct drm_master * master)

reference a master pointer

Parameters

struct drm_master * master
struct drm_master

Description

Increments the reference count of master and returns a pointer to master.

void drm_master_put(struct drm_master ** master)

unreference and clear a master pointer

Parameters

struct drm_master ** master
pointer to a pointer of struct drm_master

Description

This decrements the drm_master behind master and sets it to NULL.
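
A short sketch of the reference-counting pattern these two helpers implement (the surrounding function is hypothetical; any locking around the master pointer is omitted):

  /* Hypothetical kernel-internal helper that temporarily holds a reference
   * on the client's master. */
  static void foo_use_master(struct drm_file *file_priv)
  {
          struct drm_master *master;

          master = drm_master_get(file_priv->master);     /* +1 reference */

          /* ... use the master, e.g. look at master->unique ... */

          drm_master_put(&master);        /* -1 reference, master is now NULL */
  }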

struct drm_master

drm master structure

Definition

struct drm_master {
  struct kref refcount;
  struct drm_device * dev;
  char * unique;
  int unique_len;
  struct idr magic_map;
  struct drm_lock_data lock;
  void * driver_priv;
};

Members

struct kref refcount
Refcount for this master object.
struct drm_device * dev
Link back to the DRM device
char * unique
Unique identifier: e.g. busid. Protected by drm_global_mutex.
int unique_len
Length of unique field. Protected by drm_global_mutex.
struct idr magic_map
Map of used authentication tokens. Protected by struct_mutex.
struct drm_lock_data lock
DRI lock information.
void * driver_priv
Pointer to driver-private information.

Description

Note that master structures are only relevant for the legacy/primary device nodes, hence there can only be one per device, not one per drm_minor.

Render nodes

DRM core provides multiple character devices for user-space to use. Depending on which device is opened, user-space can perform a different set of operations (mainly ioctls). The primary node is always created and called card<num>. Additionally, a currently unused control node, called controlD<num>, is also created. The primary node provides all legacy operations and historically was the only interface used by userspace. With KMS, the control node was introduced. However, the planned KMS control interface has never been written and so the control node stays unused to date.

With the increased use of offscreen renderers and GPGPU applications, clients no longer require running compositors or graphics servers to make use of a GPU. But the DRM API required unprivileged clients to authenticate to a DRM-Master prior to getting GPU access. To avoid this step and to grant clients GPU access without authenticating, render nodes were introduced. Render nodes solely serve render clients, that is, no modesetting or privileged ioctls can be issued on render nodes. Only non-global rendering commands are allowed. If a driver supports render nodes, it must advertise it via the DRIVER_RENDER DRM driver capability. If not supported, the primary node must be used for render clients together with the legacy drmAuth authentication procedure.

If a driver advertises render node support, DRM core will create a separate render node called renderD<num>. There will be one render node per device. No ioctls except PRIME-related ioctls will be allowed on this node. In particular, GEM_OPEN will be explicitly prohibited. Render nodes are designed to avoid the buffer leaks which occur if clients guess the flink names or mmap offsets on the legacy interface. In addition to this basic interface, drivers must mark their driver-dependent render-only ioctls as DRM_RENDER_ALLOW so render clients can use them. Driver authors must be careful not to allow any privileged ioctls on render nodes.
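
On the driver side, marking render-safe ioctls happens in the ioctl table; a sketch with made-up names for a hypothetical "foo" driver (the FOO_* ioctl numbers and the handler functions are assumed to exist in that driver's UAPI and code):

  static const struct drm_ioctl_desc foo_ioctls[] = {
          /* Pure rendering command: allowed on render nodes. */
          DRM_IOCTL_DEF_DRV(FOO_SUBMIT, foo_submit_ioctl,
                            DRM_AUTH | DRM_RENDER_ALLOW),
          /* Modeset ioctl: primary node only, master required. */
          DRM_IOCTL_DEF_DRV(FOO_SET_SCANOUT, foo_set_scanout_ioctl,
                            DRM_MASTER),
  };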

With render nodes, user-space can now control access to the render node via basic file-system access-modes. A running graphics server which authenticates clients on the privileged primary/legacy node is no longer required. Instead, a client can open the render node and is immediately granted GPU access. Communication between clients (or servers) is done via PRIME. FLINK from render node to legacy node is not supported. New clients must not use the insecure FLINK interface.
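
A sketch of the userspace flow (the render node fd and the GEM handle are assumed to come from open(2) on e.g. /dev/dri/renderD128 and a driver-specific allocation ioctl, respectively):

  #include <fcntl.h>
  #include <stdint.h>
  #include <xf86drm.h>

  /* Sketch: export a GEM handle that lives on the render-node fd `fd` as a
   * PRIME file descriptor, so another process or device can import it. */
  static int export_buffer(int fd, uint32_t handle)
  {
          int prime_fd;

          if (drmPrimeHandleToFD(fd, handle, DRM_CLOEXEC, &prime_fd))
                  return -1;

          /* prime_fd can be passed over a unix socket; the receiver turns it
           * back into a local handle with drmPrimeFDToHandle(). */
          return prime_fd;
  }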

Besides dropping all modeset/global ioctls, render nodes also drop the DRM-Master concept. There is no reason to associate render clients with a DRM-Master as they are independent of any graphics server. Besides, they must work without any running master, anyway. Drivers must be able to run without a master object if they support render nodes. If, on the other hand, a driver requires shared state between clients which is visible to user-space and accessible beyond open-file boundaries, they cannot support render nodes.

VBlank event handling

The DRM core exposes two vertical blank related ioctls (a usage sketch follows the list):

DRM_IOCTL_WAIT_VBLANK
This takes a struct drm_wait_vblank structure as its argument, and it is used to block or request a signal when a specified vblank event occurs.
DRM_IOCTL_MODESET_CTL
This was only used for user-mode-setting drivers around modesetting changes to allow the kernel to update the vblank interrupt after mode setting, since on many devices the vertical blank counter is reset to 0 at some point during modeset. Modern drivers should not call this any more, since with kernel mode setting it is a no-op.
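
A minimal sketch of waiting for the next vblank through the libdrm wrapper for DRM_IOCTL_WAIT_VBLANK (the file descriptor is assumed to be an open primary node):

  #include <string.h>
  #include <xf86drm.h>

  /* Sketch: block until one vertical blank has passed on the first CRTC. */
  static int wait_one_vblank(int fd)
  {
          drmVBlank vbl;

          memset(&vbl, 0, sizeof(vbl));
          vbl.request.type = DRM_VBLANK_RELATIVE; /* relative to current count */
          vbl.request.sequence = 1;               /* wait for one vblank */

          return drmWaitVBlank(fd, &vbl);         /* wraps DRM_IOCTL_WAIT_VBLANK */
  }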

This second part of the GPU Driver Developer's Guide documents driver code, implementation details and also all the driver-specific userspace interfaces. Especially since all hardware-acceleration interfaces to userspace are driver-specific for efficiency and other reasons, these interfaces can be rather substantial. Hence every driver has its own chapter.