authorCyril Chemparathy <cyril@ti.com>2012-01-08 15:46:41 -0500
committerCyril Chemparathy <cyril@ti.com>2012-09-21 10:44:11 -0400
commit4c96ea43f8d8c91c2f5bf6ddd6b934a6d4a62908 (patch)
treec097cc2bd6391931d6f655ed8259cfefe2f0378d
parent6721114d9f2ce161cb4e5b870f37a64b7b657772 (diff)
downloadlinux-keystone-4c96ea43f8d8c91c2f5bf6ddd6b934a6d4a62908.tar.gz
hwqueue: add hwqueue core infrastructure
This patch adds an hwqueue core framework roughly along the lines of the
hwspinlock architecture. This framework is hardware/platform independent and
only provides an API abstraction on top of a registration/unregistration
mechanism for real hardware hwqueue implementations.

The abstraction elements in here are as follows:

- "device" corresponds to a hardware queue manager. In the case of Appleton,
  there is only one instance of an hwqueue device in the system. However,
  future devices may have more than one physical instance; for example,
  Keystone 2 devices have 2 instances of QMSS 1.5 devices.

- "instance" corresponds to an actual hardware queue. Instances are numbered
  per-device as offsets from the device's base_id.

- "handle" corresponds to a particular user of a hardware queue instance.
  Hardware queues may be opened by multiple entities in the kernel, and each
  entity may open the queue with a different read/write mode.

Supported functionality:

- Registration: The hwqueue core framework depends only on statically
  initialized state, and is therefore active very early in the kernel boot
  process. This allows other drivers to use hwqueue APIs without worrying
  about actual probe order. The registration/unregistration APIs allow
  underlying hardware implementations to add (and remove) themselves on the
  fly.

- File semantics: The hwqueue core framework enforces traditional Unix
  open() mode semantics on top of hardware queues. These semantics include:
  * read/write modes: read-only, write-only, and read-write
  * creation mode: enforces that a queue not be already open
  * exclusive mode: enforces exclusive ownership on the queue

- Notification: Read-capable users of an hwqueue may register one or more
  notifiers on the queue. These notifiers are called out independent of the
  underlying notification mechanism (accumulation, qpend interrupt, timer
  poll). The hwqueue core framework provides a timer based poll
  implementation for hwqueue hardware that does not support interrupt driven
  notification.

- Debug visibility: The hwqueue core framework provides reasonably detailed
  debugfs visibility into the queues managed through this framework.

This framework is nowhere near complete, and is sure to be extended quite a
bit more as we add real hardware drivers under this framework, and real API
users above it. It would, however, be useful to get some early feedback on
the overall model, assumptions, etc. before we go too far down this road.

Signed-off-by: Prabhu Kuttiyam <pkuttiyam@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>

hwqueue: cleanups, improve debugfs dump

hwqueue: avoid divide on queue push/pop

This patch speeds up hwqueue push and pop operations by eliminating a
divide operation in the processing. Since the ARM CPU does not have a
divide instruction, this operation was spending measurable time in software
divide routines.

This divide was being done in mapping an hwqueue instance pointer to and
from a queue index. With this patch in place, we round up the size of the
hwqueue instance data to the nearest power of two, and use faster shift
operations instead.

hwqueue: add branch optimization hints

This patch mainly adds branch optimization hints (likely/unlikely) for the
compiler's benefit. In addition, we now enforce the presence of push and
pop handlers at device registration time instead of checking on every
push/pop. We've also removed a couple of impossible warning checks on
helper routines that convert back and forth between hwqueue handles and
internal types.
hwqueue: reorder fields to group frequently used data

This patch reorders the contents of the hwqueue_device data structure, to
group together data that is frequently used in the data path.

hwqueue: simplify data structures, inline data path ops

This patch simplifies the hwqueue core data structures, by eliminating the
need for a distinct hwqueue_handle structure. This further allows data path
operations such as push and pop to be inlined in hwqueue.h.

hwqueue: add scatter/gather support to hwqueue

Prior to this patch, hwqueue_push() assumed the "data" parameter was the
address of a buffer that needed to be mapped before being pushed. Likewise,
hwqueue_pop() assumed the value popped was the address of a buffer that
needed to be unmapped.

This patch breaks out these operations into separate functions. If the
value to be pushed is the address of a buffer, it must be mapped using
hwqueue_map(), and the DMA address returned is then passed to
hwqueue_push(). Similarly, if the value retrieved by hwqueue_pop() is the
address of a buffer, it must be unmapped using hwqueue_unmap() to get the
virtual address of the buffer.

These changes BREAK the old semantics of hwqueue_push() and hwqueue_pop().
Expect compile errors due to argument list changes. Do NOT just cast the
old arguments to the new types; fix the errors properly.

hwqueue fix

Signed-off-by: Sandeep Paulraj <s-paulraj@ti.com>

hwqueue: clean up statistics to separate out error counters

This patch cleans up hwqueue statistics. Specifically, it separates out
error counters from the original push/pop counters.

hwqueue: add device tree bindings documentation

Signed-off-by: Sandeep Paulraj <s-paulraj@ti.com>

hwqueue: move hwqueue doc

Signed-off-by: Sandeep Paulraj <s-paulraj@ti.com>
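As a quick illustration of the resulting API (a minimal sketch only: the
queue name "example", id 1000, and the error handling style are
illustrative, not part of this patch), a push/pop round trip against the
declarations in include/linux/hwqueue.h below looks roughly like this:

  #include <linux/err.h>
  #include <linux/hwqueue.h>

  static int example_roundtrip(void *buf, unsigned size)
  {
          struct hwqueue *qh;
          dma_addr_t dma;
          unsigned dma_sz, pop_sz;
          void *out;
          int ret;

          /* create queue 1000 for both push and pop access */
          qh = hwqueue_open("example", 1000, O_RDWR | O_CREAT);
          if (IS_ERR(qh))
                  return PTR_ERR(qh);

          /* map the buffer for DMA, then push the resulting DMA address */
          ret = hwqueue_map(qh, buf, size, &dma, &dma_sz);
          if (!ret)
                  ret = hwqueue_push(qh, dma, dma_sz);
          if (ret)
                  goto close;

          /* pop a DMA address (a NULL timeout blocks indefinitely),
           * then unmap it back to a virtual address */
          ret = -EIO;
          dma = hwqueue_pop(qh, &pop_sz, NULL);
          if (dma && !IS_ERR_VALUE(dma)) {
                  out = hwqueue_unmap(qh, dma, pop_sz);
                  ret = IS_ERR_OR_NULL(out) ? -EIO : 0;
          }
  close:
          hwqueue_close(qh);
          return ret;
  }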
-rw-r--r--Documentation/devicetree/bindings/hwqueue/keystone-hwqueue.txt75
-rw-r--r--drivers/Kconfig2
-rw-r--r--drivers/Makefile1
-rw-r--r--drivers/hwqueue/Kconfig11
-rw-r--r--drivers/hwqueue/Makefile5
-rw-r--r--drivers/hwqueue/hwqueue_core.c745
-rw-r--r--drivers/hwqueue/hwqueue_internal.h121
-rw-r--r--include/linux/hwqueue.h220
8 files changed, 1180 insertions, 0 deletions
diff --git a/Documentation/devicetree/bindings/hwqueue/keystone-hwqueue.txt b/Documentation/devicetree/bindings/hwqueue/keystone-hwqueue.txt
new file mode 100644
index 00000000000000..4b7e05c2e8ad5d
--- /dev/null
+++ b/Documentation/devicetree/bindings/hwqueue/keystone-hwqueue.txt
@@ -0,0 +1,75 @@
+* Texas Instruments Keystone hwqueue driver
+
+Required properties:
+- compatible : Should be "ti,keystone-hwqueue";
+- reg : Address and length of the register sets for the device (peek,
+ push/pop, etc.; see the example below)
+- range : <start number> total range of hwqueue numbers for the device
+- regions : <start number> of memory regions to use
+- linkram0 : <start number> of total link ram indices available
+- link-index : <start number> of link ram indices to use
+- queues : number of queues to use per queue range name (see example below)
+- descriptors : number and size of descriptors to use per hwqueue instance
+ name (see example below)
+
+Optional properties:
+- pdsps : PDSP configuration, if any.
+
+Example:
+
+hwqueue0: hwqueue@2a00000 {
+ compatible = "ti,keystone-hwqueue";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ reg = <0x2a00000 0x20000 /* 0 - peek */
+ 0x2a62000 0x6000 /* 1 - status */
+ 0x2a68000 0x2000 /* 2 - config */
+ 0x2a6a000 0x4000 /* 3 - region */
+ 0x2a20000 0x20000>; /* 4 - push/pop */
+ range = <0 0x1000>;
+ regions = <12 3>;
+ linkram0 = <0x80000 0x4000>;
+ link-index = <0x800 0x400>;
+ queues {
+ pacmd {
+ values = <640 1>;
+ reserved;
+ };
+ patx {
+ values = <648 1>;
+ reserved;
+ };
+ qpend-arm {
+ values = <650 8>;
+ irq-base = <41>;
+ };
+ infradma {
+ values = <800 2>;
+ reserved;
+ };
+ general {
+ values = <4000 16>;
+ };
+ };
+ descriptors {
+ pool-veth {
+ values = <256 128>; /* num_desc desc_size */
+ };
+ pool-net {
+ values = <512 128>; /* num_desc desc_size */
+ };
+ };
+ pdsps {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ pdsp0@2a60000 {
+ firmware = "keystone/qmss_pdsp_acc48_le_1_0_2_0.fw";
+ reg = <0x2a60000 0x1000 /* iram */
+ 0x2a6e000 0x1000 /* reg */
+ 0x2ab8000 0x4000>; /* cmd */
+ };
+ };
+};
+
diff --git a/drivers/Kconfig b/drivers/Kconfig
index ece958d3762e4a..30e1fe1db30287 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -130,6 +130,8 @@ source "drivers/clk/Kconfig"
source "drivers/hwspinlock/Kconfig"
+source "drivers/hwqueue/Kconfig"
+
source "drivers/clocksource/Kconfig"
source "drivers/iommu/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index 5b421840c48d28..3ce7fc05241294 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -125,6 +125,7 @@ obj-y += ieee802154/
obj-y += clk/
obj-$(CONFIG_HWSPINLOCK) += hwspinlock/
+obj-$(CONFIG_HWQUEUE) += hwqueue/
obj-$(CONFIG_NFC) += nfc/
obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
obj-$(CONFIG_REMOTEPROC) += remoteproc/
diff --git a/drivers/hwqueue/Kconfig b/drivers/hwqueue/Kconfig
new file mode 100644
index 00000000000000..fd6cf7f0ad4c60
--- /dev/null
+++ b/drivers/hwqueue/Kconfig
@@ -0,0 +1,11 @@
+#
+# Generic HWQUEUE framework
+#
+
+# HWQUEUE always gets selected by whoever wants it.
+config HWQUEUE
+ tristate
+
+menu "Hardware Queue drivers"
+
+endmenu
diff --git a/drivers/hwqueue/Makefile b/drivers/hwqueue/Makefile
new file mode 100644
index 00000000000000..ba3bfe2eaf7faa
--- /dev/null
+++ b/drivers/hwqueue/Makefile
@@ -0,0 +1,5 @@
+#
+# Generic Hardware Queue framework
+#
+
+obj-$(CONFIG_HWQUEUE) += hwqueue_core.o
diff --git a/drivers/hwqueue/hwqueue_core.c b/drivers/hwqueue/hwqueue_core.c
new file mode 100644
index 00000000000000..d7f874757e54c2
--- /dev/null
+++ b/drivers/hwqueue/hwqueue_core.c
@@ -0,0 +1,745 @@
+/*
+ * Hardware queue framework
+ *
+ * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Contact: Prabhu Kuttiyam <pkuttiyam@ti.com>
+ * Cyril Chemparathy <cyril@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+
+#include "hwqueue_internal.h"
+
+static LIST_HEAD(hwqueue_devices);
+static DEFINE_MUTEX(hwqueue_devices_lock);
+
+#define for_each_handle_rcu(qh, inst) \
+ list_for_each_entry_rcu(qh, &inst->handles, list)
+
+#define for_each_device(hdev) \
+ list_for_each_entry(hdev, &hwqueue_devices, list)
+
+#define for_each_instance(id, inst, hdev) \
+ for (id = 0, inst = hdev->instances; \
+ id < (hdev)->num_queues; \
+ id++, inst = hwqueue_id_to_inst(hdev, id))
+
+static void __hwqueue_poll(unsigned long data);
+
+static int hwqueue_poll_interval = 100;
+
+static inline bool hwqueue_is_busy(struct hwqueue_instance *inst)
+{
+ return !list_empty(&inst->handles);
+}
+
+static inline bool hwqueue_is_exclusive(struct hwqueue_instance *inst)
+{
+ struct hwqueue *tmp;
+
+ rcu_read_lock();
+
+ for_each_handle_rcu(tmp, inst) {
+ if (tmp->flags & O_EXCL) {
+ rcu_read_unlock();
+ return true;
+ }
+ }
+
+ rcu_read_unlock();
+
+ return false;
+}
+
+static inline bool hwqueue_is_writable(struct hwqueue *qh)
+{
+ unsigned acc = qh->flags & O_ACCMODE;
+ return (acc == O_RDWR || acc == O_WRONLY);
+}
+
+static inline bool hwqueue_is_readable(struct hwqueue *qh)
+{
+ unsigned acc = qh->flags & O_ACCMODE;
+ return (acc == O_RDWR || acc == O_RDONLY);
+}
+
+static inline struct hwqueue_instance *hwqueue_find_by_id(int id)
+{
+ struct hwqueue_device *hdev;
+
+ for_each_device(hdev) {
+ if (hdev->base_id <= id &&
+ hdev->base_id + hdev->num_queues > id) {
+ id -= hdev->base_id;
+ return hwqueue_id_to_inst(hdev, id);
+ }
+ }
+ return NULL;
+}
+
+int hwqueue_device_register(struct hwqueue_device *hdev)
+{
+ struct hwqueue_instance *inst;
+ int id, size, ret = -EEXIST;
+ struct hwqueue_device *b;
+
+ if (!hdev->ops || !hdev->dev || !hdev->ops->push || !hdev->ops->pop)
+ return -EINVAL;
+
+ mutex_lock(&hwqueue_devices_lock);
+
+ for_each_device(b) {
+ if (b->base_id + b->num_queues > hdev->base_id &&
+ hdev->base_id + hdev->num_queues > b->base_id) {
+ dev_err(hdev->dev, "id collision with %s\n",
+ dev_name(b->dev));
+ goto unlock_ret;
+ }
+ }
+ ret = -ENOMEM;
+
+ /* how much do we need for instance data? */
+ size = sizeof(struct hwqueue_instance) + hdev->priv_size;
+
+ /* round this up to a power of 2, keep the push/pop arithmetic fast */
+ hdev->inst_shift = order_base_2(size);
+ size = (1 << hdev->inst_shift) * hdev->num_queues;
+
+ hdev->instances = kzalloc(size, GFP_KERNEL);
+ if (!hdev->instances)
+ goto unlock_ret;
+
+ ret = 0;
+ for_each_instance(id, inst, hdev) {
+ inst->hdev = hdev;
+ INIT_LIST_HEAD(&inst->handles);
+ setup_timer(&inst->poll_timer, __hwqueue_poll,
+ (unsigned long)inst);
+ init_waitqueue_head(&inst->wait);
+ }
+
+ list_add(&hdev->list, &hwqueue_devices);
+
+ dev_info(hdev->dev, "registered queues %d-%d\n",
+ hdev->base_id, hdev->base_id + hdev->num_queues - 1);
+
+unlock_ret:
+ mutex_unlock(&hwqueue_devices_lock);
+ return ret;
+}
+EXPORT_SYMBOL(hwqueue_device_register);
+
+int hwqueue_device_unregister(struct hwqueue_device *hdev)
+{
+ struct hwqueue_instance *inst;
+ int id, ret = -EBUSY;
+
+ mutex_lock(&hwqueue_devices_lock);
+
+ for_each_instance(id, inst, hdev) {
+ if (hwqueue_is_busy(inst)) {
+ dev_err(hdev->dev, "cannot unregister busy dev\n");
+ goto unlock_ret;
+ }
+ }
+
+ list_del(&hdev->list);
+ dev_info(hdev->dev, "unregistered queues %d-%d\n",
+ hdev->base_id, hdev->base_id + hdev->num_queues - 1);
+ kfree(hdev->instances);
+ ret = 0;
+
+unlock_ret:
+ mutex_unlock(&hwqueue_devices_lock);
+ return ret;
+}
+EXPORT_SYMBOL(hwqueue_device_unregister);
+
+static struct hwqueue *__hwqueue_open(struct hwqueue_instance *inst,
+ const char *name, unsigned flags,
+ void *caller)
+{
+ struct hwqueue_device *hdev = inst->hdev;
+ struct hwqueue *qh;
+ int ret;
+
+ if (!try_module_get(hdev->dev->driver->owner))
+ return ERR_PTR(-ENODEV);
+
+ qh = kzalloc(sizeof(struct hwqueue), GFP_KERNEL);
+ if (!qh) {
+ module_put(hdev->dev->driver->owner);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ qh->flags = flags;
+ qh->inst = inst;
+
+ /* first opener? */
+ if (!hwqueue_is_busy(inst)) {
+ strncpy(inst->name, name, sizeof(inst->name));
+ inst->name[sizeof(inst->name) - 1] = '\0';
+ ret = hdev->ops->open(inst, flags);
+ if (ret) {
+ kfree(qh);
+ module_put(hdev->dev->driver->owner);
+ return ERR_PTR(ret);
+ }
+ }
+
+ if (hwqueue_is_readable(qh)) {
+ qh->get_count = hdev->ops->get_count;
+ qh->pop = hdev->ops->pop;
+ qh->unmap = hdev->ops->unmap;
+ }
+
+ if (hwqueue_is_writable(qh)) {
+ qh->flush = hdev->ops->flush;
+ qh->push = hdev->ops->push;
+ qh->map = hdev->ops->map;
+ }
+
+ list_add_tail_rcu(&qh->list, &inst->handles);
+
+ return qh;
+}
+
+static struct hwqueue *hwqueue_open_by_id(const char *name, unsigned id,
+ unsigned flags, void *caller)
+{
+ struct hwqueue_instance *inst;
+ struct hwqueue_device *hdev;
+ struct hwqueue *qh;
+ int match;
+
+ mutex_lock(&hwqueue_devices_lock);
+
+ qh = ERR_PTR(-ENODEV);
+ inst = hwqueue_find_by_id(id);
+ if (!inst)
+ goto unlock_ret;
+ hdev = inst->hdev;
+
+ qh = ERR_PTR(-EINVAL);
+ match = hdev->ops->match(inst, flags);
+
+ if (match < 0)
+ goto unlock_ret;
+
+ qh = ERR_PTR(-EBUSY);
+ if (hwqueue_is_exclusive(inst))
+ goto unlock_ret;
+
+ qh = ERR_PTR(-EEXIST);
+ if ((flags & O_CREAT) && hwqueue_is_busy(inst))
+ goto unlock_ret;
+
+ qh = __hwqueue_open(inst, name, flags, caller);
+
+unlock_ret:
+ mutex_unlock(&hwqueue_devices_lock);
+
+ return qh;
+}
+
+static struct hwqueue *hwqueue_open_any(const char *name, unsigned flags,
+ void *caller)
+{
+ struct hwqueue_device *hdev;
+ int match = INT_MAX, _match, id;
+ struct hwqueue_instance *inst = NULL, *_inst;
+ struct hwqueue *qh = ERR_PTR(-ENODEV);
+
+ mutex_lock(&hwqueue_devices_lock);
+
+ for_each_device(hdev) {
+ for_each_instance(id, _inst, hdev) {
+ _match = hdev->ops->match(_inst, flags);
+ if (_match < 0) /* match error */
+ continue;
+ if (_match >= match) /* match is no better */
+ continue;
+ if (hwqueue_is_exclusive(_inst))
+ continue;
+ if ((flags & O_CREAT) && hwqueue_is_busy(_inst))
+ continue;
+
+ match = _match;
+ inst = _inst;
+
+ if (!match) /* made for each other */
+ break;
+ }
+ }
+
+ if (inst)
+ qh = __hwqueue_open(inst, name, flags, caller);
+
+ mutex_unlock(&hwqueue_devices_lock);
+ return qh;
+}
+
+static struct hwqueue *hwqueue_open_by_name(const char *name, unsigned flags,
+ void *caller)
+{
+ struct hwqueue_device *hdev;
+ struct hwqueue_instance *inst;
+ struct hwqueue *qh = ERR_PTR(-EINVAL);
+ int id;
+
+ mutex_lock(&hwqueue_devices_lock);
+
+ for_each_device(hdev) {
+ for_each_instance(id, inst, hdev) {
+ int match = hdev->ops->match(inst, flags);
+ if (match < 0)
+ continue;
+ if (!hwqueue_is_busy(inst))
+ continue;
+ if (hwqueue_is_exclusive(inst))
+ continue;
+ if (strcmp(inst->name, name))
+ continue;
+ qh = __hwqueue_open(inst, name, flags, caller);
+ goto unlock_ret;
+ }
+ }
+
+unlock_ret:
+ mutex_unlock(&hwqueue_devices_lock);
+ return qh;
+}
+
+/**
+ * hwqueue_open() - open a hardware queue
+ * @name - name to give the queue handle
+ * @id - desired queue number if any
+ * HWQUEUE_ANY: allocate any free queue, implies O_CREAT
+ * HWQUEUE_BYNAME: open existing queue by name, implies !O_CREAT
+ * @flags - the following flags are applicable to queues:
+ * O_EXCL - insist on exclusive ownership - will fail if queue is
+ * already open. Subsequent attempts to open the same
+ * queue will also fail. O_EXCL => O_CREAT here.
+ * O_CREAT - insist that queue not be already open - will fail if
+ * queue is already open. Subsequent attempts to open
+ * the same queue may succeed. O_CREAT is implied if
+ * queue id == HWQUEUE_ANY.
+ * O_RDONLY - pop only access
+ * O_WRONLY - push only access
+ * O_RDWR - push and pop access
+ * O_NONBLOCK - never block on pushes and pops
+ * In addition, the following "hints" to the driver/hardware may be passed
+ * in at open:
+ * O_HIGHTHROUGHPUT - hint high throughput usage
+ * O_LOWLATENCY - hint low latency usage
+ *
+ * Returns a handle to the open hardware queue if successful. Use IS_ERR()
+ * to check the returned value for error codes.
+ */
+struct hwqueue *hwqueue_open(const char *name, unsigned id, unsigned flags)
+{
+ struct hwqueue *qh = ERR_PTR(-EINVAL);
+ void *caller = __builtin_return_address(0);
+
+ if (flags & O_EXCL)
+ flags |= O_CREAT;
+
+ switch (id) {
+ case HWQUEUE_ANY:
+ qh = hwqueue_open_any(name, flags, caller);
+ break;
+ case HWQUEUE_BYNAME:
+ if (WARN_ON(flags & (O_EXCL | O_CREAT)))
+ break;
+ qh = hwqueue_open_by_name(name, flags, caller);
+ break;
+ default:
+ qh = hwqueue_open_by_id(name, id, flags, caller);
+ break;
+ }
+
+ return qh;
+}
+EXPORT_SYMBOL(hwqueue_open);
+
+static void devm_hwqueue_release(struct device *dev, void *res)
+{
+ hwqueue_close(*(struct hwqueue **)res);
+}
+
+struct hwqueue *devm_hwqueue_open(struct device *dev, const char *name,
+ unsigned id, unsigned flags)
+{
+ struct hwqueue **ptr, *queue;
+
+ ptr = devres_alloc(devm_hwqueue_release, sizeof(*ptr), GFP_KERNEL);
+ if (!ptr)
+ return ERR_PTR(-ENOMEM);
+
+ queue = hwqueue_open(name, id, flags);
+ if (!IS_ERR(queue)) {
+ *ptr = queue;
+ devres_add(dev, ptr);
+ } else {
+ devres_free(ptr);
+ }
+
+ return queue;
+}
+EXPORT_SYMBOL(devm_hwqueue_open);
+
+/**
+ * hwqueue_close() - close a hardware queue handle
+ * @qh - handle to close
+ */
+void hwqueue_close(struct hwqueue *qh)
+{
+ struct hwqueue_instance *inst = qh->inst;
+ struct hwqueue_device *hdev = inst->hdev;
+
+ while (atomic_read(&qh->notifier_enabled) > 0)
+ hwqueue_disable_notifier(qh);
+
+ mutex_lock(&hwqueue_devices_lock);
+ list_del_rcu(&qh->list);
+ mutex_unlock(&hwqueue_devices_lock);
+
+ synchronize_rcu();
+
+ module_put(hdev->dev->driver->owner);
+
+ if (!hwqueue_is_busy(inst))
+ hdev->ops->close(inst);
+ kfree(qh);
+}
+EXPORT_SYMBOL(hwqueue_close);
+
+static int devm_hwqueue_match(struct device *dev, void *res, void *match_data)
+{
+ return *(void **)res == match_data;
+}
+
+void devm_hwqueue_close(struct device *dev, struct hwqueue *qh)
+{
+ WARN_ON(devres_destroy(dev, devm_hwqueue_release, devm_hwqueue_match,
+ (void *)qh));
+ hwqueue_close(qh);
+}
+EXPORT_SYMBOL(devm_hwqueue_close);
+
+/**
+ * hwqueue_get_id() - get an ID number for an open queue. This ID may be
+ * passed to another part of the kernel, which then opens the
+ * queue by ID.
+ * @qh - queue handle
+ *
+ * Returns queue id (>= 0) on success, negative return value is an error.
+ */
+int hwqueue_get_id(struct hwqueue *qh)
+{
+ struct hwqueue_instance *inst = qh->inst;
+ unsigned base_id = inst->hdev->base_id;
+
+ return base_id + hwqueue_inst_to_id(inst);
+}
+EXPORT_SYMBOL(hwqueue_get_id);
+
+/**
+ * hwqueue_get_hw_id() - get an ID number for an open queue. This ID may be
+ * passed to hardware modules as a part of
+ * descriptor/buffer content.
+ * @qh - queue handle
+ *
+ * Returns queue id (>= 0) on success, negative return value is an error.
+ */
+int hwqueue_get_hw_id(struct hwqueue *qh)
+{
+ struct hwqueue_instance *inst = qh->inst;
+ struct hwqueue_device *hdev = inst->hdev;
+
+ if (!hdev->ops->get_hw_id)
+ return -EINVAL;
+
+ return hdev->ops->get_hw_id(inst);
+}
+EXPORT_SYMBOL(hwqueue_get_hw_id);
+
+/**
+ * hwqueue_enable_notifier() - Enable notifier callback for a queue handle.
+ * @qh - hardware queue handle
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int hwqueue_enable_notifier(struct hwqueue *qh)
+{
+ struct hwqueue_instance *inst = qh->inst;
+ struct hwqueue_device *hdev = inst->hdev;
+ bool first;
+
+ if (!hwqueue_is_readable(qh))
+ return -EINVAL;
+
+ if (WARN_ON(!qh->notifier_fn))
+ return -EINVAL;
+
+ /* Adjust the per handle notifier count */
+ first = (atomic_inc_return(&qh->notifier_enabled) == 1);
+ if (!first)
+ return 0; /* nothing to do */
+
+ /* Now adjust the per instance notifier count */
+ first = (atomic_inc_return(&inst->num_notifiers) == 1);
+ if (first)
+ hdev->ops->set_notify(inst, true);
+
+ return 0;
+}
+EXPORT_SYMBOL(hwqueue_enable_notifier);
+
+/**
+ * hwqueue_disable_notifier() - Disable notifier callback for a queue handle.
+ * @qh - hardware queue handle
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int hwqueue_disable_notifier(struct hwqueue *qh)
+{
+ struct hwqueue_instance *inst = qh->inst;
+ struct hwqueue_device *hdev = inst->hdev;
+ bool last;
+
+ if (!hwqueue_is_readable(qh))
+ return -EINVAL;
+
+ last = (atomic_dec_return(&qh->notifier_enabled) == 0);
+ if (!last)
+ return 0; /* nothing to do */
+
+ last = (atomic_dec_return(&inst->num_notifiers) == 0);
+ if (last)
+ hdev->ops->set_notify(inst, false);
+
+ return 0;
+}
+EXPORT_SYMBOL(hwqueue_disable_notifier);
+
+/**
+ * hwqueue_set_notifier() - Set a notifier callback to a queue handle. This
+ * notifier is called whenever the queue has
+ * something to pop.
+ * @qh - hardware queue handle
+ * @fn - callback function
+ * @fn_arg - argument for the callback function
+ *
+ * Hardware queues can have multiple notifiers attached to them.
+ * The underlying notification mechanism may vary from queue to queue. For
+ * example, some queues may issue notify callbacks on timer expiry, and some
+ * may do so in interrupt context. Notifier callbacks may be called from an
+ * atomic context, and _must not_ block ever.
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int hwqueue_set_notifier(struct hwqueue *qh, hwqueue_notify_fn fn,
+ void *fn_arg)
+{
+ hwqueue_notify_fn old_fn = qh->notifier_fn;
+
+ if (!hwqueue_is_readable(qh))
+ return -EINVAL;
+
+ if (!fn && old_fn)
+ hwqueue_disable_notifier(qh);
+
+ qh->notifier_fn = fn;
+ qh->notifier_fn_arg = fn_arg;
+
+ if (fn && !old_fn)
+ hwqueue_enable_notifier(qh);
+
+ return 0;
+}
+EXPORT_SYMBOL(hwqueue_set_notifier);
+
+dma_addr_t __hwqueue_pop_slow(struct hwqueue_instance *inst, unsigned *size,
+ struct timeval *timeout)
+{
+ struct hwqueue_device *hdev = inst->hdev;
+ dma_addr_t dma_addr = 0;
+ int ret;
+
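+ /*
+ * wait_event_interruptible_timeout() returns the remaining jiffies
+ * (0 on timeout) or a negative error; the remainder is converted
+ * back into *timeout below so callers can tell how long the pop
+ * actually waited.
+ */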
+ if (timeout) {
+ unsigned long expires = timeval_to_jiffies(timeout);
+
+ ret = wait_event_interruptible_timeout(inst->wait,
+ (dma_addr = hdev->ops->pop(inst, size)),
+ expires);
+ if (ret < 0)
+ return 0;
+ if (!ret && !dma_addr)
+ return 0;
+ jiffies_to_timeval(ret, timeout);
+ } else {
+ ret = wait_event_interruptible(inst->wait,
+ (dma_addr = hdev->ops->pop(inst, size)));
+ if (ret < 0)
+ return 0;
+ if (WARN_ON(!ret && !dma_addr))
+ return 0;
+ }
+
+ return dma_addr;
+}
+EXPORT_SYMBOL(__hwqueue_pop_slow);
+
+/**
+ * hwqueue_notify() - notify users on data availability
+ * @inst - hardware queue instance
+ *
+ * Walk through the notifier list for a hardware queue instance and issue
+ * callbacks. This function is called by drivers when data is available on a
+ * hardware queue, either when notified via interrupt or on timer poll.
+ */
+void hwqueue_notify(struct hwqueue_instance *inst)
+{
+ struct hwqueue *qh;
+
+ rcu_read_lock();
+
+ for_each_handle_rcu(qh, inst) {
+ if (atomic_read(&qh->notifier_enabled) <= 0)
+ continue;
+ if (WARN_ON(!qh->notifier_fn))
+ continue;
+ atomic_inc(&qh->stats.notifies);
+ qh->notifier_fn(qh->notifier_fn_arg);
+ }
+
+ rcu_read_unlock();
+
+ wake_up_interruptible(&inst->wait);
+}
+EXPORT_SYMBOL(hwqueue_notify);
+
+static void __hwqueue_poll(unsigned long data)
+{
+ struct hwqueue_instance *inst = (struct hwqueue_instance *)data;
+ struct hwqueue *qh;
+
+ rcu_read_lock();
+
+ for_each_handle_rcu(qh, inst) {
+ if (hwqueue_get_count(qh) > 0) {
+ hwqueue_notify(inst);
+ break;
+ }
+ }
+
+ rcu_read_unlock();
+
+ mod_timer(&inst->poll_timer, jiffies +
+ msecs_to_jiffies(hwqueue_poll_interval));
+}
+
+void hwqueue_set_poll(struct hwqueue_instance *inst, bool enabled)
+{
+ unsigned long expires;
+
+ if (!enabled) {
+ del_timer(&inst->poll_timer);
+ return;
+ }
+
+ expires = jiffies + msecs_to_jiffies(hwqueue_poll_interval);
+ mod_timer(&inst->poll_timer, expires);
+}
+EXPORT_SYMBOL(hwqueue_set_poll);
+
+static void hwqueue_debug_show_instance(struct seq_file *s,
+ struct hwqueue_instance *inst)
+{
+ struct hwqueue_device *hdev = inst->hdev;
+ struct hwqueue *qh;
+
+ if (!hwqueue_is_busy(inst))
+ return;
+
+ seq_printf(s, "\tqueue id %d (%s)\n",
+ hdev->base_id + hwqueue_inst_to_id(inst), inst->name);
+
+ for_each_handle_rcu(qh, inst) {
+ seq_printf(s, "\t\thandle %p: ", qh);
+ seq_printf(s, "pushes %8d, ",
+ atomic_read(&qh->stats.pushes));
+ seq_printf(s, "pops %8d, ",
+ atomic_read(&qh->stats.pops));
+ seq_printf(s, "count %8d, ",
+ hwqueue_get_count(qh));
+ seq_printf(s, "notifies %8d, ",
+ atomic_read(&qh->stats.notifies));
+ seq_printf(s, "push errors %8d, ",
+ atomic_read(&qh->stats.push_errors));
+ seq_printf(s, "pop errors %8d\n",
+ atomic_read(&qh->stats.pop_errors));
+ }
+}
+
+static int hwqueue_debug_show(struct seq_file *s, void *v)
+{
+ struct hwqueue_device *hdev;
+ struct hwqueue_instance *inst;
+ int id;
+
+ mutex_lock(&hwqueue_devices_lock);
+
+ for_each_device(hdev) {
+ seq_printf(s, "hdev %s: %u-%u\n",
+ dev_name(hdev->dev), hdev->base_id,
+ hdev->base_id + hdev->num_queues - 1);
+
+ for_each_instance(id, inst, hdev)
+ hwqueue_debug_show_instance(s, inst);
+ }
+
+ mutex_unlock(&hwqueue_devices_lock);
+
+ return 0;
+}
+
+static int hwqueue_debug_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, hwqueue_debug_show, NULL);
+}
+
+static const struct file_operations hwqueue_debug_ops = {
+ .open = hwqueue_debug_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int __init hwqueue_debug_init(void)
+{
+ debugfs_create_file("hwqueues", S_IFREG | S_IRUGO, NULL, NULL,
+ &hwqueue_debug_ops);
+ return 0;
+}
+device_initcall(hwqueue_debug_init);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Hardware queue interface");
diff --git a/drivers/hwqueue/hwqueue_internal.h b/drivers/hwqueue/hwqueue_internal.h
new file mode 100644
index 00000000000000..c650340f4044a8
--- /dev/null
+++ b/drivers/hwqueue/hwqueue_internal.h
@@ -0,0 +1,121 @@
+/*
+ * Hardware queues handle header
+ *
+ * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Contact: Prabhu Kuttiyam <pkuttiyam@ti.com>
+ * Cyril Chemparathy <cyril@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published
+ * by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __HWQUEUE_HWQUEUE_H
+#define __HWQUEUE_HWQUEUE_H
+
+#include <linux/device.h>
+#include <linux/wait.h>
+#include <linux/hwqueue.h>
+
+
+struct hwqueue_instance {
+ struct list_head handles;
+ struct hwqueue_device *hdev;
+ struct timer_list poll_timer;
+ wait_queue_head_t wait;
+ void *priv;
+ char name[32];
+ atomic_t num_notifiers;
+};
+
+struct hwqueue_device_ops {
+ /*
+ * Return a match quotient (0 = best .. INT_MAX-1) for a set of
+ * option flags. Negative error values imply "do not allocate"
+ */
+ int (*match)(struct hwqueue_instance *inst, unsigned flags);
+
+ /* Initialize a queue inst when opened for the first time */
+ int (*open)(struct hwqueue_instance *inst, unsigned flags);
+
+ /* Close a queue inst when closed by the last user */
+ void (*close)(struct hwqueue_instance *inst);
+
+ /* Enable or disable notification */
+ void (*set_notify)(struct hwqueue_instance *inst, bool enabled);
+
+ /* Get a hardware identifier for a queue */
+ int (*get_hw_id)(struct hwqueue_instance *inst);
+
+ /* Push something into the queue */
+ int (*push)(struct hwqueue_instance *inst, dma_addr_t dma,
+ unsigned size);
+
+ /* Pop something from the queue */
+ dma_addr_t (*pop)(struct hwqueue_instance *inst, unsigned *size);
+
+ /* Flush a queue */
+ int (*flush)(struct hwqueue_instance *inst);
+
+ /* Poll number of elements on the queue */
+ int (*get_count)(struct hwqueue_instance *inst);
+
+ /* Perform DMA mapping on objects to be pushed */
+ int (*map)(struct hwqueue_instance *inst, void *data,
+ unsigned size, dma_addr_t *dma_ptr, unsigned *size_ptr);
+
+ /* Perform DMA unmapping on objects that have been pulled */
+ void *(*unmap)(struct hwqueue_instance *inst, dma_addr_t dma,
+ unsigned desc_size);
+};
+
+struct hwqueue_device {
+ unsigned base_id;
+ unsigned num_queues;
+ unsigned inst_shift;
+ void *instances;
+ struct hwqueue_device_ops *ops;
+
+ unsigned priv_size;
+ struct list_head list;
+ struct device *dev;
+};
+
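+/*
+ * Instance records are padded to a power-of-two size (1 << inst_shift) at
+ * registration time, so converting between an instance pointer and a queue
+ * index is a cheap shift rather than a divide (the ARM CPU has no divide
+ * instruction).
+ */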
+static inline int hwqueue_inst_to_id(struct hwqueue_instance *inst)
+{
+ struct hwqueue_device *hdev = inst->hdev;
+ int offset = (void *)inst - hdev->instances;
+ int inst_size = 1 << hdev->inst_shift;
+
+ BUG_ON(offset & (inst_size - 1));
+ return offset >> hdev->inst_shift;
+}
+
+static inline struct hwqueue_instance *
+hwqueue_id_to_inst(struct hwqueue_device *hdev, unsigned id)
+{
+ return hdev->instances + (id << hdev->inst_shift);
+}
+
+static inline void *hwqueue_inst_to_priv(struct hwqueue_instance *inst)
+{
+ return (void *)(inst + 1);
+}
+
+static inline struct hwqueue *rcu_to_handle(struct rcu_head *rcu)
+{
+ return container_of(rcu, struct hwqueue, rcu);
+}
+
+int hwqueue_device_register(struct hwqueue_device *dev);
+int hwqueue_device_unregister(struct hwqueue_device *dev);
+void hwqueue_notify(struct hwqueue_instance *inst);
+void hwqueue_set_poll(struct hwqueue_instance *inst, bool enabled);
+
+#endif /* __HWQUEUE_HWQUEUE_H */
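For orientation, here is a rough sketch of how a hardware driver might sit
underneath this framework. Everything named "dummy_*" and the platform
driver context are hypothetical placeholders, not part of this patch; a
real backend would also provide set_notify, get_count, map, and unmap:

  #include <linux/platform_device.h>
  #include "hwqueue_internal.h"

  static int dummy_match(struct hwqueue_instance *inst, unsigned flags)
  {
          return 0;               /* every queue is an equally good match */
  }

  static int dummy_open(struct hwqueue_instance *inst, unsigned flags)
  {
          return 0;               /* hardware setup on first open */
  }

  static void dummy_close(struct hwqueue_instance *inst)
  {
          /* hardware teardown on last close */
  }

  static int dummy_push(struct hwqueue_instance *inst, dma_addr_t dma,
                        unsigned size)
  {
          return -ENOSYS;         /* real hardware writes a push register */
  }

  static dma_addr_t dummy_pop(struct hwqueue_instance *inst, unsigned *size)
  {
          return 0;               /* real hardware reads a pop register */
  }

  /* push and pop are mandatory; registration fails without them */
  static struct hwqueue_device_ops dummy_ops = {
          .match  = dummy_match,
          .open   = dummy_open,
          .close  = dummy_close,
          .push   = dummy_push,
          .pop    = dummy_pop,
  };

  static struct hwqueue_device dummy_hdev;

  static int dummy_probe(struct platform_device *pdev)
  {
          dummy_hdev.base_id    = 0;
          dummy_hdev.num_queues = 64;
          dummy_hdev.priv_size  = 0;      /* no per-queue private data */
          dummy_hdev.ops        = &dummy_ops;
          dummy_hdev.dev        = &pdev->dev;
          return hwqueue_device_register(&dummy_hdev);
  }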
diff --git a/include/linux/hwqueue.h b/include/linux/hwqueue.h
new file mode 100644
index 00000000000000..713b4135758d86
--- /dev/null
+++ b/include/linux/hwqueue.h
@@ -0,0 +1,220 @@
+/*
+ * Hardware queue framework
+ *
+ * Copyright (C) 2011 Texas Instruments Incorporated - http://www.ti.com
+ *
+ * Contact: Prabhu Kuttiyam <pkuttiyam@ti.com>
+ * Cyril Chemparathy <cyril@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __LINUX_HWQUEUE_H
+#define __LINUX_HWQUEUE_H
+
+#include <linux/err.h>
+#include <linux/time.h>
+#include <linux/atomic.h>
+#include <linux/device.h>
+#include <linux/fcntl.h>
+
+#define HWQUEUE_ANY ((unsigned)-1)
+#define HWQUEUE_BYNAME ((unsigned)-2)
+
+/* file fcntl flags repurposed for hwqueues */
+#define O_HIGHTHROUGHPUT O_DIRECT
+#define O_LOWLATENCY O_LARGEFILE
+
+/* hardware queue notifier callback prototype */
+typedef void (*hwqueue_notify_fn)(void *arg);
+
+struct hwqueue_stats {
+ atomic_t pushes;
+ atomic_t pops;
+ atomic_t push_errors;
+ atomic_t pop_errors;
+ atomic_t notifies;
+};
+
+struct hwqueue_instance;
+
+struct hwqueue {
+ int (*push)(struct hwqueue_instance *inst, dma_addr_t dma,
+ unsigned size);
+ dma_addr_t (*pop)(struct hwqueue_instance *inst, unsigned *size);
+ int (*flush)(struct hwqueue_instance *inst);
+ int (*get_count)(struct hwqueue_instance *inst);
+ int (*map)(struct hwqueue_instance *inst, void *data,
+ unsigned size, dma_addr_t *dma_ptr, unsigned *size_ptr);
+ void *(*unmap)(struct hwqueue_instance *inst, dma_addr_t dma,
+ unsigned desc_size);
+
+ struct hwqueue_instance *inst;
+ struct hwqueue_stats stats;
+ hwqueue_notify_fn notifier_fn;
+ void *notifier_fn_arg;
+ atomic_t notifier_enabled;
+ struct rcu_head rcu;
+ unsigned flags;
+ struct list_head list;
+};
+
+struct hwqueue *hwqueue_open(const char *name, unsigned id, unsigned flags);
+
+void hwqueue_close(struct hwqueue *queue);
+
+struct hwqueue *devm_hwqueue_open(struct device *dev, const char *name,
+ unsigned id, unsigned flags);
+void devm_hwqueue_close(struct device *dev, struct hwqueue *queue);
+
+int hwqueue_get_id(struct hwqueue *queue);
+
+int hwqueue_get_hw_id(struct hwqueue *queue);
+
+int hwqueue_set_notifier(struct hwqueue *queue, hwqueue_notify_fn fn,
+ void *fn_arg);
+int hwqueue_enable_notifier(struct hwqueue *queue);
+
+int hwqueue_disable_notifier(struct hwqueue *queue);
+
+dma_addr_t __hwqueue_pop_slow(struct hwqueue_instance *inst, unsigned *size,
+ struct timeval *timeout);
+
+/**
+ * hwqueue_get_count() - poll the number of elements in a hardware queue
+ * @qh - hardware queue handle
+ *
+ * Returns the number of elements in the queue (0 if empty), or -EINVAL if
+ * the handle does not allow reading.
+ */
+static inline int hwqueue_get_count(struct hwqueue *qh)
+{
+ if (unlikely(!qh->get_count))
+ return -EINVAL;
+ return qh->get_count(qh->inst);
+}
+
+/**
+ * hwqueue_flush() - forcibly empty a queue if possible
+ * @qh - hardware queue handle
+ *
+ * Returns 0 on success, errno otherwise. This may not be universally
+ * supported on all hardware queue implementations.
+ */
+static inline int hwqueue_flush(struct hwqueue *qh)
+{
+ if (unlikely(!qh->flush))
+ return -EINVAL;
+ return qh->flush(qh->inst);
+}
+
+/**
+ * hwqueue_push() - push data (or descriptor) to the tail of a queue
+ * @qh - hardware queue handle
+ * @dma - DMA address of the data to push
+ * @size - size of data to push
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+static inline int hwqueue_push(struct hwqueue *qh, dma_addr_t dma, unsigned size)
+{
+ int ret = 0;
+
+ do {
+ if (unlikely(!qh->push)) {
+ ret = -EINVAL;
+ break;
+ }
+
+ ret = qh->push(qh->inst, dma, size);
+ } while (0);
+
+ if (unlikely(ret < 0))
+ atomic_inc(&qh->stats.push_errors);
+ else
+ atomic_inc(&qh->stats.pushes);
+ return ret;
+}
+
+/**
+ * hwqueue_pop() - pop data (or descriptor) from the head of a queue
+ * @qh - hardware queue handle
+ * @size - hwqueue_pop fills this parameter on success
+ * @timeout - timeout value to use if blocking
+ *
+ * Returns a DMA address on success, and 0 or a negative error value on
+ * failure.
+ */
+static inline dma_addr_t hwqueue_pop(struct hwqueue *qh, unsigned *size,
+ struct timeval *timeout)
+{
+ dma_addr_t ret = 0;
+
+ do {
+ if (unlikely(!qh->pop)) {
+ ret = -EINVAL;
+ break;
+ }
+
+ ret = qh->pop(qh->inst, size);
+ if (likely(ret))
+ break;
+
+ if (likely((qh->flags & O_NONBLOCK) ||
+ (timeout && !timeout->tv_sec && !timeout->tv_usec)))
+ break;
+
+ ret = __hwqueue_pop_slow(qh->inst, size, timeout);
+ } while (0);
+
+ if (likely(ret)) {
+ if (IS_ERR_VALUE(ret))
+ atomic_inc(&qh->stats.pop_errors);
+ else
+ atomic_inc(&qh->stats.pops);
+ }
+ return ret;
+}
+
+/**
+ * hwqueue_map() - Map data (or descriptor) for DMA transfer
+ * @qh - hardware queue handle
+ * @data - address of data to map
+ * @size - size of data to map
+ * @dma_ptr - DMA address return pointer
+ * @size_ptr - adjusted size return pointer
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+static inline int hwqueue_map(struct hwqueue *qh, void *data, unsigned size,
+ dma_addr_t *dma_ptr, unsigned *size_ptr)
+{
+ if (unlikely(!qh->map))
+ return -EINVAL;
+ return qh->map(qh->inst, data, size, dma_ptr, size_ptr);
+}
+
+/**
+ * hwqueue_unmap() - Unmap data (or descriptor) after DMA transfer
+ * @qh - hardware queue handle
+ * @dma - DMA address to be unmapped
+ * @size - size of data to unmap
+ *
+ * Returns a data/descriptor pointer on success. Use IS_ERR_OR_NULL() to
+ * identify error values on return.
+ */
+static inline void *hwqueue_unmap(struct hwqueue *qh, dma_addr_t dma,
+ unsigned size)
+{
+ if (unlikely(!qh->unmap))
+ return NULL;
+ return qh->unmap(qh->inst, dma, size);
+}
+
+#endif /* __LINUX_HWQUEUE_H */
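Finally, a sketch of consumer-side notification against the API above
(my_notify, process_descriptor, and the queue name "rx" are illustrative
placeholders, not part of this patch):

  /* hypothetical consumer of popped descriptors */
  void process_descriptor(void *desc, unsigned size);

  /*
   * Called whenever the queue has something to pop. Notifiers may run in
   * atomic context (interrupt or timer poll) and must never block; the
   * zero timeout below keeps hwqueue_pop() from sleeping.
   */
  static void my_notify(void *arg)
  {
          struct hwqueue *qh = arg;
          struct timeval none = { 0, 0 };
          dma_addr_t dma;
          unsigned size;

          while ((dma = hwqueue_pop(qh, &size, &none)) &&
                 !IS_ERR_VALUE(dma))
                  process_descriptor(hwqueue_unmap(qh, dma, size), size);
  }

  static struct hwqueue *setup_rx(void)
  {
          struct hwqueue *qh;

          qh = hwqueue_open("rx", HWQUEUE_ANY, O_RDONLY | O_NONBLOCK);
          if (!IS_ERR(qh))
                  /* installing a callback also enables notification */
                  hwqueue_set_notifier(qh, my_notify, qh);
          return qh;
  }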