Merge tag 'driver-core-3.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull driver core updates from Greg KH:
 "Here's the big driver-core pull request for 3.17-rc1.

  The largest thing in here is the dma-buf rework and fence code; that
  touched many different subsystems, so it was agreed it should go
  through this tree to handle merge issues.  There are also some
  firmware loading updates, tests added, and a few other tiny changes;
  the changelog has the details.

  All have been in linux-next for a long time."

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iEYEABECAAYFAlPf1XcACgkQMUfUDdst+ylREACdHLXBa02yLrRzbrONJ+nARuFv
JuQAoMN49PD8K9iMQpXqKBvZBsu+iCIY
=w8OJ
-----END PGP SIGNATURE-----

* tag 'driver-core-3.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (32 commits)
  ARM: imx: Remove references to platform_bus in mxc code
  firmware loader: Fix _request_firmware_load() return val for fw load abort
  platform: Remove most references to platform_bus device
  test: add firmware_class loader test
  doc: fix minor typos in firmware_class README
  staging: android: Cleanup style issues
  Documentation: devres: Sort managed interfaces
  Documentation: devres: Add devm_kmalloc() et al
  fs: debugfs: remove trailing whitespace
  kernfs: kernel-doc warning fix
  debugfs: Fix corrupted loop in debugfs_remove_recursive
  stable_kernel_rules: Add pointer to netdev-FAQ for network patches
  driver core: platform: add device binding path 'driver_override'
  driver core/platform: remove unused implicit padding in platform_object
  firmware loader: inform direct failure when udev loader is disabled
  firmware: replace ALIGN(PAGE_SIZE) by PAGE_ALIGN
  firmware: read firmware size using i_size_read()
  firmware loader: allow disabling of udev as firmware loader
  reservation: add suppport for read-only access using rcu
  reservation: update api and add some helpers
  ...

Conflicts:
	drivers/base/platform.c
commit 29b88e23a9
@@ -0,0 +1,20 @@
What:		/sys/bus/platform/devices/.../driver_override
Date:		April 2014
Contact:	Kim Phillips <kim.phillips@freescale.com>
Description:
		This file allows the driver for a device to be specified which
		will override standard OF, ACPI, ID table, and name matching.
		When specified, only a driver with a name matching the value
		written to driver_override will have an opportunity to bind
		to the device.  The override is specified by writing a string
		to the driver_override file (echo vfio-platform > \
		driver_override) and may be cleared with an empty string
		(echo > driver_override).  This returns the device to standard
		matching rules binding.  Writing to driver_override does not
		automatically unbind the device from its current driver or make
		any attempt to automatically load the specified driver.  If no
		driver with a matching name is currently loaded in the kernel,
		the device will not bind to any driver.  This also allows
		devices to opt-out of driver binding using a driver_override
		name such as "none".  Only a single driver may be specified in
		the override, there is no support for parsing delimiters.
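The echo sequence described above can be sketched as a runnable script. The device name below is purely illustrative, and a plain file in a temporary directory stands in for the real sysfs attribute (which would live under /sys/bus/platform/devices/), so the example runs without a platform device present:

```shell
# Simulate the driver_override sysfs attribute with a plain file;
# on a real system you would write to
# /sys/bus/platform/devices/<device>/driver_override instead.
dev="$(mktemp -d)/fff51000.ethernet"   # hypothetical device name
mkdir -p "$dev"

# Restrict matching: only a driver named vfio-platform may bind.
echo vfio-platform > "$dev/driver_override"
cat "$dev/driver_override"

# Clear the override with an empty string, restoring standard
# OF/ACPI/ID-table/name matching.
echo > "$dev/driver_override"
```

Note that on a real system this only restricts future matching; as the description says, it neither unbinds the current driver nor loads the named one.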
@@ -1,39 +0,0 @@
/*
 * Copyright (C) 2012-2013 Canonical Ltd
 *
 * Based on bo.c which bears the following copyright notice,
 * but is dual licensed:
 *
 * Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA
 * All Rights Reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, including
 * without limitation the rights to use, copy, modify, merge, publish,
 * distribute, sub license, and/or sell copies of the Software, and to
 * permit persons to whom the Software is furnished to do so, subject to
 * the following conditions:
 *
 * The above copyright notice and this permission notice (including the
 * next paragraph) shall be included in all copies or substantial portions
 * of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
 * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
 * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
 * USE OR OTHER DEALINGS IN THE SOFTWARE.
 *
 **************************************************************************/
/*
 * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com>
 */

#include <linux/reservation.h>
#include <linux/export.h>

DEFINE_WW_CLASS(reservation_ww_class);
EXPORT_SYMBOL(reservation_ww_class);
@@ -0,0 +1 @@
obj-y := dma-buf.o fence.o reservation.o seqno-fence.o
@@ -0,0 +1,431 @@
/*
 * Fence mechanism for dma-buf and to allow for asynchronous dma access
 *
 * Copyright (C) 2012 Canonical Ltd
 * Copyright (C) 2012 Texas Instruments
 *
 * Authors:
 * Rob Clark <robdclark@gmail.com>
 * Maarten Lankhorst <maarten.lankhorst@canonical.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 */

#include <linux/slab.h>
#include <linux/export.h>
#include <linux/atomic.h>
#include <linux/fence.h>

#define CREATE_TRACE_POINTS
#include <trace/events/fence.h>

EXPORT_TRACEPOINT_SYMBOL(fence_annotate_wait_on);
EXPORT_TRACEPOINT_SYMBOL(fence_emit);

/*
 * fence context counter: each execution context should have its own
 * fence context, this allows checking if fences belong to the same
 * context or not. One device can have multiple separate contexts,
 * and they're used if some engine can run independently of another.
 */
static atomic_t fence_context_counter = ATOMIC_INIT(0);

/**
 * fence_context_alloc - allocate an array of fence contexts
 * @num:	[in]	amount of contexts to allocate
 *
 * This function will return the first index of the number of fence contexts
 * allocated.  The fence context is used for setting fence->context to a
 * unique number.
 */
unsigned fence_context_alloc(unsigned num)
{
	BUG_ON(!num);
	return atomic_add_return(num, &fence_context_counter) - num;
}
EXPORT_SYMBOL(fence_context_alloc);

/**
 * fence_signal_locked - signal completion of a fence
 * @fence: the fence to signal
 *
 * Signal completion for software callbacks on a fence, this will unblock
 * fence_wait() calls and run all the callbacks added with
 * fence_add_callback(). Can be called multiple times, but since a fence
 * can only go from unsignaled to signaled state, it will only be effective
 * the first time.
 *
 * Unlike fence_signal(), this function must be called with fence->lock held.
 */
int fence_signal_locked(struct fence *fence)
{
	struct fence_cb *cur, *tmp;
	int ret = 0;

	if (WARN_ON(!fence))
		return -EINVAL;

	if (!ktime_to_ns(fence->timestamp)) {
		fence->timestamp = ktime_get();
		smp_mb__before_atomic();
	}

	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
		ret = -EINVAL;

		/*
		 * we might have raced with the unlocked fence_signal,
		 * still run through all callbacks
		 */
	} else
		trace_fence_signaled(fence);

	list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
		list_del_init(&cur->node);
		cur->func(fence, cur);
	}
	return ret;
}
EXPORT_SYMBOL(fence_signal_locked);

/**
 * fence_signal - signal completion of a fence
 * @fence: the fence to signal
 *
 * Signal completion for software callbacks on a fence, this will unblock
 * fence_wait() calls and run all the callbacks added with
 * fence_add_callback(). Can be called multiple times, but since a fence
 * can only go from unsignaled to signaled state, it will only be effective
 * the first time.
 */
int fence_signal(struct fence *fence)
{
	unsigned long flags;

	if (!fence)
		return -EINVAL;

	if (!ktime_to_ns(fence->timestamp)) {
		fence->timestamp = ktime_get();
		smp_mb__before_atomic();
	}

	if (test_and_set_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return -EINVAL;

	trace_fence_signaled(fence);

	if (test_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags)) {
		struct fence_cb *cur, *tmp;

		spin_lock_irqsave(fence->lock, flags);
		list_for_each_entry_safe(cur, tmp, &fence->cb_list, node) {
			list_del_init(&cur->node);
			cur->func(fence, cur);
		}
		spin_unlock_irqrestore(fence->lock, flags);
	}
	return 0;
}
EXPORT_SYMBOL(fence_signal);

/**
 * fence_wait_timeout - sleep until the fence gets signaled
 * or until timeout elapses
 * @fence:	[in]	the fence to wait on
 * @intr:	[in]	if true, do an interruptible wait
 * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
 *
 * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
 * remaining timeout in jiffies on success. Other error values may be
 * returned on custom implementations.
 *
 * Performs a synchronous wait on this fence. It is assumed the caller
 * directly or indirectly (buf-mgr between reservation and committing)
 * holds a reference to the fence, otherwise the fence might be
 * freed before return, resulting in undefined behavior.
 */
signed long
fence_wait_timeout(struct fence *fence, bool intr, signed long timeout)
{
	signed long ret;

	if (WARN_ON(timeout < 0))
		return -EINVAL;

	trace_fence_wait_start(fence);
	ret = fence->ops->wait(fence, intr, timeout);
	trace_fence_wait_end(fence);
	return ret;
}
EXPORT_SYMBOL(fence_wait_timeout);

void fence_release(struct kref *kref)
{
	struct fence *fence =
		container_of(kref, struct fence, refcount);

	trace_fence_destroy(fence);

	BUG_ON(!list_empty(&fence->cb_list));

	if (fence->ops->release)
		fence->ops->release(fence);
	else
		fence_free(fence);
}
EXPORT_SYMBOL(fence_release);

void fence_free(struct fence *fence)
{
	kfree_rcu(fence, rcu);
}
EXPORT_SYMBOL(fence_free);

/**
 * fence_enable_sw_signaling - enable signaling on fence
 * @fence:	[in]	the fence to enable
 *
 * This will request for sw signaling to be enabled, to make the fence
 * complete as soon as possible.
 */
void fence_enable_sw_signaling(struct fence *fence)
{
	unsigned long flags;

	if (!test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags) &&
	    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
		trace_fence_enable_signal(fence);

		spin_lock_irqsave(fence->lock, flags);

		if (!fence->ops->enable_signaling(fence))
			fence_signal_locked(fence);

		spin_unlock_irqrestore(fence->lock, flags);
	}
}
EXPORT_SYMBOL(fence_enable_sw_signaling);

/**
 * fence_add_callback - add a callback to be called when the fence
 * is signaled
 * @fence:	[in]	the fence to wait on
 * @cb:		[in]	the callback to register
 * @func:	[in]	the function to call
 *
 * cb will be initialized by fence_add_callback, no initialization
 * by the caller is required. Any number of callbacks can be registered
 * to a fence, but a callback can only be registered to one fence at a time.
 *
 * Note that the callback can be called from an atomic context. If
 * fence is already signaled, this function will return -ENOENT (and
 * *not* call the callback).
 *
 * Add a software callback to the fence. The same restrictions apply to
 * refcount as for fence_wait; however, the caller doesn't need to
 * keep a refcount to fence afterwards: when software access is enabled,
 * the creator of the fence is required to keep the fence alive until
 * after it signals with fence_signal. The callback itself can be called
 * from irq context.
 */
int fence_add_callback(struct fence *fence, struct fence_cb *cb,
		       fence_func_t func)
{
	unsigned long flags;
	int ret = 0;
	bool was_set;

	if (WARN_ON(!fence || !func))
		return -EINVAL;

	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
		INIT_LIST_HEAD(&cb->node);
		return -ENOENT;
	}

	spin_lock_irqsave(fence->lock, flags);

	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);

	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		ret = -ENOENT;
	else if (!was_set) {
		trace_fence_enable_signal(fence);

		if (!fence->ops->enable_signaling(fence)) {
			fence_signal_locked(fence);
			ret = -ENOENT;
		}
	}

	if (!ret) {
		cb->func = func;
		list_add_tail(&cb->node, &fence->cb_list);
	} else
		INIT_LIST_HEAD(&cb->node);
	spin_unlock_irqrestore(fence->lock, flags);

	return ret;
}
EXPORT_SYMBOL(fence_add_callback);

/**
 * fence_remove_callback - remove a callback from the signaling list
 * @fence:	[in]	the fence to wait on
 * @cb:		[in]	the callback to remove
 *
 * Remove a previously queued callback from the fence. This function returns
 * true if the callback is successfully removed, or false if the fence has
 * already been signaled.
 *
 * *WARNING*:
 * Cancelling a callback should only be done if you really know what you're
 * doing, since deadlocks and race conditions could occur all too easily. For
 * this reason, it should only ever be done on hardware lockup recovery,
 * with a reference held to the fence.
 */
bool
fence_remove_callback(struct fence *fence, struct fence_cb *cb)
{
	unsigned long flags;
	bool ret;

	spin_lock_irqsave(fence->lock, flags);

	ret = !list_empty(&cb->node);
	if (ret)
		list_del_init(&cb->node);

	spin_unlock_irqrestore(fence->lock, flags);

	return ret;
}
EXPORT_SYMBOL(fence_remove_callback);

struct default_wait_cb {
	struct fence_cb base;
	struct task_struct *task;
};

static void
fence_default_wait_cb(struct fence *fence, struct fence_cb *cb)
{
	struct default_wait_cb *wait =
		container_of(cb, struct default_wait_cb, base);

	wake_up_state(wait->task, TASK_NORMAL);
}

/**
 * fence_default_wait - default sleep until the fence gets signaled
 * or until timeout elapses
 * @fence:	[in]	the fence to wait on
 * @intr:	[in]	if true, do an interruptible wait
 * @timeout:	[in]	timeout value in jiffies, or MAX_SCHEDULE_TIMEOUT
 *
 * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or the
 * remaining timeout in jiffies on success.
 */
signed long
fence_default_wait(struct fence *fence, bool intr, signed long timeout)
{
	struct default_wait_cb cb;
	unsigned long flags;
	signed long ret = timeout;
	bool was_set;

	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return timeout;

	spin_lock_irqsave(fence->lock, flags);

	if (intr && signal_pending(current)) {
		ret = -ERESTARTSYS;
		goto out;
	}

	was_set = test_and_set_bit(FENCE_FLAG_ENABLE_SIGNAL_BIT, &fence->flags);

	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		goto out;

	if (!was_set) {
		trace_fence_enable_signal(fence);

		if (!fence->ops->enable_signaling(fence)) {
			fence_signal_locked(fence);
			goto out;
		}
	}

	cb.base.func = fence_default_wait_cb;
	cb.task = current;
	list_add(&cb.base.node, &fence->cb_list);

	while (!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) && ret > 0) {
		if (intr)
			__set_current_state(TASK_INTERRUPTIBLE);
		else
			__set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock_irqrestore(fence->lock, flags);

		ret = schedule_timeout(ret);

		spin_lock_irqsave(fence->lock, flags);
		if (ret > 0 && intr && signal_pending(current))
			ret = -ERESTARTSYS;
	}

	if (!list_empty(&cb.base.node))
		list_del(&cb.base.node);
	__set_current_state(TASK_RUNNING);

out:
	spin_unlock_irqrestore(fence->lock, flags);
	return ret;
}
EXPORT_SYMBOL(fence_default_wait);

/**
 * fence_init - Initialize a custom fence.
 * @fence:	[in]	the fence to initialize
 * @ops:	[in]	the fence_ops for operations on this fence
 * @lock:	[in]	the irqsafe spinlock to use for locking this fence
 * @context:	[in]	the execution context this fence is run on
 * @seqno:	[in]	a linear increasing sequence number for this context
 *
 * Initializes an allocated fence, the caller doesn't have to keep its
 * refcount after committing with this fence, but it will need to hold a
 * refcount again if fence_ops.enable_signaling gets called. This can
 * be used for implementing other types of fences.
 *
 * context and seqno are used for easy comparison between fences, allowing
 * to check which fence is later by simply using fence_later().
 */
void
fence_init(struct fence *fence, const struct fence_ops *ops,
	   spinlock_t *lock, unsigned context, unsigned seqno)
{
	BUG_ON(!lock);
	BUG_ON(!ops || !ops->wait || !ops->enable_signaling ||
	       !ops->get_driver_name || !ops->get_timeline_name);

	kref_init(&fence->refcount);
	fence->ops = ops;
	INIT_LIST_HEAD(&fence->cb_list);
	fence->lock = lock;
	fence->context = context;
	fence->seqno = seqno;
	fence->flags = 0UL;

	trace_fence_init(fence);
}
EXPORT_SYMBOL(fence_init);
@ -0,0 +1,477 @@ |
||||
/*
|
||||
* Copyright (C) 2012-2014 Canonical Ltd (Maarten Lankhorst) |
||||
* |
||||
* Based on bo.c which bears the following copyright notice, |
||||
* but is dual licensed: |
||||
* |
||||
* Copyright (c) 2006-2009 VMware, Inc., Palo Alto, CA., USA |
||||
* All Rights Reserved. |
||||
* |
||||
* Permission is hereby granted, free of charge, to any person obtaining a |
||||
* copy of this software and associated documentation files (the |
||||
* "Software"), to deal in the Software without restriction, including |
||||
* without limitation the rights to use, copy, modify, merge, publish, |
||||
* distribute, sub license, and/or sell copies of the Software, and to |
||||
* permit persons to whom the Software is furnished to do so, subject to |
||||
* the following conditions: |
||||
* |
||||
* The above copyright notice and this permission notice (including the |
||||
* next paragraph) shall be included in all copies or substantial portions |
||||
* of the Software. |
||||
* |
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR |
||||
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, |
||||
* FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL |
||||
* THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, |
||||
* DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR |
||||
* OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE |
||||
* USE OR OTHER DEALINGS IN THE SOFTWARE. |
||||
* |
||||
**************************************************************************/ |
||||
/*
|
||||
* Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com> |
||||
*/ |
||||
|
||||
#include <linux/reservation.h> |
||||
#include <linux/export.h> |
||||
|
||||
DEFINE_WW_CLASS(reservation_ww_class); |
||||
EXPORT_SYMBOL(reservation_ww_class); |
||||
|
||||
struct lock_class_key reservation_seqcount_class; |
||||
EXPORT_SYMBOL(reservation_seqcount_class); |
||||
|
||||
const char reservation_seqcount_string[] = "reservation_seqcount"; |
||||
EXPORT_SYMBOL(reservation_seqcount_string); |
||||
/*
|
||||
* Reserve space to add a shared fence to a reservation_object, |
||||
* must be called with obj->lock held. |
||||
*/ |
||||
int reservation_object_reserve_shared(struct reservation_object *obj) |
||||
{ |
||||
struct reservation_object_list *fobj, *old; |
||||
u32 max; |
||||
|
||||
old = reservation_object_get_list(obj); |
||||
|
||||
if (old && old->shared_max) { |
||||
if (old->shared_count < old->shared_max) { |
||||
/* perform an in-place update */ |
||||
kfree(obj->staged); |
||||
obj->staged = NULL; |
||||
return 0; |
||||
} else |
||||
max = old->shared_max * 2; |
||||
} else |
||||
max = 4; |
||||
|
||||
/*
|
||||
* resize obj->staged or allocate if it doesn't exist, |
||||
* noop if already correct size |
||||
*/ |
||||
fobj = krealloc(obj->staged, offsetof(typeof(*fobj), shared[max]), |
||||
GFP_KERNEL); |
||||
if (!fobj) |
||||
return -ENOMEM; |
||||
|
||||
obj->staged = fobj; |
||||
fobj->shared_max = max; |
||||
return 0; |
||||
} |
||||
EXPORT_SYMBOL(reservation_object_reserve_shared); |
||||
|
||||
static void |
||||
reservation_object_add_shared_inplace(struct reservation_object *obj, |
||||
struct reservation_object_list *fobj, |
||||
struct fence *fence) |
||||
{ |
||||
u32 i; |
||||
|
||||
fence_get(fence); |
||||
|
||||
preempt_disable(); |
||||
write_seqcount_begin(&obj->seq); |
||||
|
||||
for (i = 0; i < fobj->shared_count; ++i) { |
||||
struct fence *old_fence; |
||||
|
||||
old_fence = rcu_dereference_protected(fobj->shared[i], |
||||
reservation_object_held(obj)); |
||||
|
||||
if (old_fence->context == fence->context) { |
||||
/* memory barrier is added by write_seqcount_begin */ |
||||
RCU_INIT_POINTER(fobj->shared[i], fence); |
||||
write_seqcount_end(&obj->seq); |
||||
preempt_enable(); |
||||
|
||||
fence_put(old_fence); |
||||
return; |
||||
} |
||||
} |
||||
|
||||
/*
|
||||
* memory barrier is added by write_seqcount_begin, |
||||
* fobj->shared_count is protected by this lock too |
||||
*/ |
||||
RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence); |
||||
fobj->shared_count++; |
||||
|
||||
write_seqcount_end(&obj->seq); |
||||
preempt_enable(); |
||||
} |
||||
|
||||
static void |
||||
reservation_object_add_shared_replace(struct reservation_object *obj, |
||||
struct reservation_object_list *old, |
||||
struct reservation_object_list *fobj, |
||||
struct fence *fence) |
||||
{ |
||||
unsigned i; |
||||
struct fence *old_fence = NULL; |
||||
|
||||
fence_get(fence); |
||||
|
||||
if (!old) { |
||||
RCU_INIT_POINTER(fobj->shared[0], fence); |
||||
fobj->shared_count = 1; |
||||
goto done; |
||||
} |
||||
|
||||
/*
|
||||
* no need to bump fence refcounts, rcu_read access |
||||
* requires the use of kref_get_unless_zero, and the |
||||
* references from the old struct are carried over to |
||||
* the new. |
||||
*/ |
||||
fobj->shared_count = old->shared_count; |
||||
|
||||
for (i = 0; i < old->shared_count; ++i) { |
||||
struct fence *check; |
||||
|
||||
check = rcu_dereference_protected(old->shared[i], |
||||
reservation_object_held(obj)); |
||||
|
||||
if (!old_fence && check->context == fence->context) { |
||||
old_fence = check; |
||||
RCU_INIT_POINTER(fobj->shared[i], fence); |
||||
} else |
||||
RCU_INIT_POINTER(fobj->shared[i], check); |
||||
} |
||||
if (!old_fence) { |
||||
RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence); |
||||
fobj->shared_count++; |
||||
} |
||||
|
||||
done: |
||||
preempt_disable(); |
||||
write_seqcount_begin(&obj->seq); |
||||
/*
|
||||
* RCU_INIT_POINTER can be used here, |
||||
* seqcount provides the necessary barriers |
||||
*/ |
||||
RCU_INIT_POINTER(obj->fence, fobj); |
||||
write_seqcount_end(&obj->seq); |
||||
preempt_enable(); |
||||
|
||||
if (old) |
||||
kfree_rcu(old, rcu); |
||||
|
||||
if (old_fence) |
||||
fence_put(old_fence); |
||||
} |
||||
|
||||
/*
|
||||
* Add a fence to a shared slot, obj->lock must be held, and |
||||
* reservation_object_reserve_shared_fence has been called. |
||||
*/ |
||||
void reservation_object_add_shared_fence(struct reservation_object *obj, |
||||
struct fence *fence) |
||||
{ |
||||
struct reservation_object_list *old, *fobj = obj->staged; |
||||
|
||||
old = reservation_object_get_list(obj); |
||||
obj->staged = NULL; |
||||
|
||||
if (!fobj) { |
||||
BUG_ON(old->shared_count >= old->shared_max); |
||||
reservation_object_add_shared_inplace(obj, old, fence); |
||||
} else |
||||
reservation_object_add_shared_replace(obj, old, fobj, fence); |
||||
} |
||||
EXPORT_SYMBOL(reservation_object_add_shared_fence); |
||||
|
||||
void reservation_object_add_excl_fence(struct reservation_object *obj, |
||||
struct fence *fence) |
||||
{ |
||||
struct fence *old_fence = reservation_object_get_excl(obj); |
||||
struct reservation_object_list *old; |
||||
u32 i = 0; |
||||
|
||||
old = reservation_object_get_list(obj); |
||||
if (old) |
||||
i = old->shared_count; |
||||
|
||||
if (fence) |
||||
fence_get(fence); |
||||
|
||||
preempt_disable(); |
||||
write_seqcount_begin(&obj->seq); |
||||
/* write_seqcount_begin provides the necessary memory barrier */ |
||||
RCU_INIT_POINTER(obj->fence_excl, fence); |
||||
if (old) |
||||
old->shared_count = 0; |
||||
write_seqcount_end(&obj->seq); |
||||
preempt_enable(); |
||||
|
||||
/* inplace update, no shared fences */ |
||||
while (i--) |
||||
fence_put(rcu_dereference_protected(old->shared[i], |
||||
reservation_object_held(obj))); |
||||
|
||||
if (old_fence) |
||||
fence_put(old_fence); |
||||
} |
||||
EXPORT_SYMBOL(reservation_object_add_excl_fence); |
||||
|
||||
int reservation_object_get_fences_rcu(struct reservation_object *obj, |
||||
struct fence **pfence_excl, |
||||
unsigned *pshared_count, |
||||
struct fence ***pshared) |
||||
{ |
||||
unsigned shared_count = 0; |
||||
unsigned retry = 1; |
||||
struct fence **shared = NULL, *fence_excl = NULL; |
||||
int ret = 0; |
||||
|
||||
while (retry) { |
||||
struct reservation_object_list *fobj; |
||||
unsigned seq; |
||||
|
||||
seq = read_seqcount_begin(&obj->seq); |
||||
|
||||
rcu_read_lock(); |
||||
|
||||
fobj = rcu_dereference(obj->fence); |
||||
if (fobj) { |
||||
struct fence **nshared; |
||||
size_t sz = sizeof(*shared) * fobj->shared_max; |
||||
|
||||
nshared = krealloc(shared, sz, |
||||
GFP_NOWAIT | __GFP_NOWARN); |
||||
if (!nshared) { |
||||
rcu_read_unlock(); |
||||
nshared = krealloc(shared, sz, GFP_KERNEL); |
||||
if (nshared) { |
||||
shared = nshared; |
||||
continue; |
||||
} |
||||
|
||||
ret = -ENOMEM; |
||||
shared_count = 0; |
||||
break; |
||||
} |
||||
shared = nshared; |
||||
memcpy(shared, fobj->shared, sz); |
||||
shared_count = fobj->shared_count; |
||||
} else |
||||
shared_count = 0; |
||||
fence_excl = rcu_dereference(obj->fence_excl); |
||||
|
||||
retry = read_seqcount_retry(&obj->seq, seq); |
||||
if (retry) |
||||
goto unlock; |
||||
|
||||
if (!fence_excl || fence_get_rcu(fence_excl)) { |
||||
unsigned i; |
||||
|
||||
for (i = 0; i < shared_count; ++i) { |
||||
if (fence_get_rcu(shared[i])) |
||||
continue; |
||||
|
||||
/* uh oh, refcount failed, abort and retry */ |
||||
while (i--) |
||||
fence_put(shared[i]); |
||||
|
||||
if (fence_excl) { |
||||
fence_put(fence_excl); |
||||
fence_excl = NULL; |
||||
} |
||||
|
||||
retry = 1; |
||||
break; |
||||
} |
||||
} else |
||||
retry = 1; |
||||
|
||||
unlock: |
||||
rcu_read_unlock(); |
||||
} |
||||
*pshared_count = shared_count; |
||||
if (shared_count) |
||||
*pshared = shared; |
||||
else { |
||||
*pshared = NULL; |
||||
kfree(shared); |
||||
} |
||||
*pfence_excl = fence_excl; |
||||
|
||||
return ret; |
||||
} |
||||
EXPORT_SYMBOL_GPL(reservation_object_get_fences_rcu); |
||||
|
||||
long reservation_object_wait_timeout_rcu(struct reservation_object *obj, |
||||
bool wait_all, bool intr, |
||||
unsigned long timeout) |
||||
{ |
||||
struct fence *fence; |
||||
unsigned seq, shared_count, i = 0; |
||||
long ret = timeout; |
||||
|
||||
retry: |
||||
fence = NULL; |
||||
shared_count = 0; |
||||
seq = read_seqcount_begin(&obj->seq); |
||||
rcu_read_lock(); |
||||
|
||||
if (wait_all) { |
||||
struct reservation_object_list *fobj = rcu_dereference(obj->fence); |
||||
|
||||
if (fobj) |
||||
shared_count = fobj->shared_count; |
||||
|
||||
if (read_seqcount_retry(&obj->seq, seq)) |
||||
goto unlock_retry; |
||||
|
||||
for (i = 0; i < shared_count; ++i) { |
||||
struct fence *lfence = rcu_dereference(fobj->shared[i]); |
||||
|
||||
if (test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) |
||||
continue; |
||||
|
||||
if (!fence_get_rcu(lfence)) |
||||
goto unlock_retry; |
||||
|
||||
if (fence_is_signaled(lfence)) { |
||||
fence_put(lfence); |
||||
continue; |
||||
} |
||||
|
||||
fence = lfence; |
||||
break; |
||||
} |
||||
} |
||||
|
||||
if (!shared_count) { |
||||
struct fence *fence_excl = rcu_dereference(obj->fence_excl); |
||||
|
||||
if (read_seqcount_retry(&obj->seq, seq)) |
||||
goto unlock_retry; |
||||
|
||||
if (fence_excl && |
||||
!test_bit(FENCE_FLAG_SIGNALED_BIT, &fence_excl->flags)) { |
||||
if (!fence_get_rcu(fence_excl)) |
||||
goto unlock_retry; |
||||
|
||||
if (fence_is_signaled(fence_excl)) |
||||
fence_put(fence_excl); |
||||
else |
||||
fence = fence_excl; |
||||
} |
||||
} |
||||
|
||||
rcu_read_unlock(); |
||||
if (fence) { |
||||
ret = fence_wait_timeout(fence, intr, ret); |
||||
fence_put(fence); |
||||
if (ret > 0 && wait_all && (i + 1 < shared_count)) |
||||
goto retry; |
||||
} |
||||
return ret; |
||||
|
||||
unlock_retry: |
||||
rcu_read_unlock(); |
||||
goto retry; |
||||
} |
||||
EXPORT_SYMBOL_GPL(reservation_object_wait_timeout_rcu); |
||||
|
||||
|
||||
static inline int
reservation_object_test_signaled_single(struct fence *passed_fence)
{
	struct fence *fence, *lfence = passed_fence;
	int ret = 1;

	if (!test_bit(FENCE_FLAG_SIGNALED_BIT, &lfence->flags)) {
		fence = fence_get_rcu(lfence);
		if (!fence)
			return -1;

		/* note: the inner "int ret;" shadow is dropped here, so the
		 * signaled result actually reaches the caller */
		ret = !!fence_is_signaled(fence);
		fence_put(fence);
	}
	return ret;
}

bool reservation_object_test_signaled_rcu(struct reservation_object *obj,
					  bool test_all)
{
	unsigned seq, shared_count;
	int ret = true;

retry:
	shared_count = 0;
	seq = read_seqcount_begin(&obj->seq);
	rcu_read_lock();

	if (test_all) {
		unsigned i;
		struct reservation_object_list *fobj = rcu_dereference(obj->fence);

		if (fobj)
			shared_count = fobj->shared_count;

		if (read_seqcount_retry(&obj->seq, seq))
			goto unlock_retry;

		for (i = 0; i < shared_count; ++i) {
			struct fence *fence = rcu_dereference(fobj->shared[i]);

			ret = reservation_object_test_signaled_single(fence);
			if (ret < 0)
				goto unlock_retry;
			else if (!ret)
				break;
		}

		/*
		 * There could be a read_seqcount_retry here, but nothing cares
		 * about whether it's the old or newer fence pointers that are
		 * signaled. That race could still have happened after checking
		 * read_seqcount_retry. If you care, use ww_mutex_lock.
		 */
	}

	if (!shared_count) {
		struct fence *fence_excl = rcu_dereference(obj->fence_excl);

		if (read_seqcount_retry(&obj->seq, seq))
			goto unlock_retry;

		if (fence_excl) {
			ret = reservation_object_test_signaled_single(fence_excl);
			if (ret < 0)
				goto unlock_retry;
		}
	}

	rcu_read_unlock();
	return ret;

unlock_retry:
	rcu_read_unlock();
	goto retry;
}
EXPORT_SYMBOL_GPL(reservation_object_test_signaled_rcu);
@@ -0,0 +1,73 @@
/*
 * seqno-fence, using a dma-buf to synchronize fencing
 *
 * Copyright (C) 2012 Texas Instruments
 * Copyright (C) 2012-2014 Canonical Ltd
 * Authors:
 *   Rob Clark <robdclark@gmail.com>
 *   Maarten Lankhorst <maarten.lankhorst@canonical.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 */

#include <linux/slab.h>
#include <linux/export.h>
#include <linux/seqno-fence.h>

static const char *seqno_fence_get_driver_name(struct fence *fence)
{
	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
	return seqno_fence->ops->get_driver_name(fence);
}

static const char *seqno_fence_get_timeline_name(struct fence *fence)
{
	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
	return seqno_fence->ops->get_timeline_name(fence);
}

static bool seqno_enable_signaling(struct fence *fence)
{
	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
	return seqno_fence->ops->enable_signaling(fence);
}

static bool seqno_signaled(struct fence *fence)
{
	struct seqno_fence *seqno_fence = to_seqno_fence(fence);
	return seqno_fence->ops->signaled && seqno_fence->ops->signaled(fence);
}

static void seqno_release(struct fence *fence)
{
	struct seqno_fence *f = to_seqno_fence(fence);

	dma_buf_put(f->sync_buf);
	if (f->ops->release)
		f->ops->release(fence);
	else
		fence_free(&f->base);
}

static signed long seqno_wait(struct fence *fence, bool intr, signed long timeout)
{
	struct seqno_fence *f = to_seqno_fence(fence);
	return f->ops->wait(fence, intr, timeout);
}

const struct fence_ops seqno_fence_ops = {
	.get_driver_name = seqno_fence_get_driver_name,
	.get_timeline_name = seqno_fence_get_timeline_name,
	.enable_signaling = seqno_enable_signaling,
	.signaled = seqno_signaled,
	.wait = seqno_wait,
	.release = seqno_release,
};
EXPORT_SYMBOL(seqno_fence_ops);
@@ -0,0 +1,252 @@
/*
 * drivers/base/sync.c
 *
 * Copyright (C) 2012 Google, Inc.
 *
 * This software is licensed under the terms of the GNU General Public
 * License version 2, as published by the Free Software Foundation, and
 * may be copied, distributed, and modified under those terms.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 */

#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/poll.h>
#include <linux/sched.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/anon_inodes.h>
#include "sync.h"

#ifdef CONFIG_DEBUG_FS

static LIST_HEAD(sync_timeline_list_head);
static DEFINE_SPINLOCK(sync_timeline_list_lock);
static LIST_HEAD(sync_fence_list_head);
static DEFINE_SPINLOCK(sync_fence_list_lock);

void sync_timeline_debug_add(struct sync_timeline *obj)
{
	unsigned long flags;

	spin_lock_irqsave(&sync_timeline_list_lock, flags);
	list_add_tail(&obj->sync_timeline_list, &sync_timeline_list_head);
	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
}

void sync_timeline_debug_remove(struct sync_timeline *obj)
{
	unsigned long flags;

	spin_lock_irqsave(&sync_timeline_list_lock, flags);
	list_del(&obj->sync_timeline_list);
	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);
}

void sync_fence_debug_add(struct sync_fence *fence)
{
	unsigned long flags;

	spin_lock_irqsave(&sync_fence_list_lock, flags);
	list_add_tail(&fence->sync_fence_list, &sync_fence_list_head);
	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
}

void sync_fence_debug_remove(struct sync_fence *fence)
{
	unsigned long flags;

	spin_lock_irqsave(&sync_fence_list_lock, flags);
	list_del(&fence->sync_fence_list);
	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
}

static const char *sync_status_str(int status)
{
	if (status == 0)
		return "signaled";

	if (status > 0)
		return "active";

	return "error";
}

static void sync_print_pt(struct seq_file *s, struct sync_pt *pt, bool fence)
{
	int status = 1;
	struct sync_timeline *parent = sync_pt_parent(pt);

	if (fence_is_signaled_locked(&pt->base))
		status = pt->base.status;

	seq_printf(s, "  %s%spt %s",
		   fence ? parent->name : "",
		   fence ? "_" : "",
		   sync_status_str(status));

	if (status <= 0) {
		struct timeval tv = ktime_to_timeval(pt->base.timestamp);

		seq_printf(s, "@%ld.%06ld", tv.tv_sec, tv.tv_usec);
	}

	if (parent->ops->timeline_value_str &&
	    parent->ops->pt_value_str) {
		char value[64];

		parent->ops->pt_value_str(pt, value, sizeof(value));
		seq_printf(s, ": %s", value);
		if (fence) {
			parent->ops->timeline_value_str(parent, value,
							sizeof(value));
			seq_printf(s, " / %s", value);
		}
	}

	seq_puts(s, "\n");
}

static void sync_print_obj(struct seq_file *s, struct sync_timeline *obj)
{
	struct list_head *pos;
	unsigned long flags;

	seq_printf(s, "%s %s", obj->name, obj->ops->driver_name);

	if (obj->ops->timeline_value_str) {
		char value[64];

		obj->ops->timeline_value_str(obj, value, sizeof(value));
		seq_printf(s, ": %s", value);
	}

	seq_puts(s, "\n");

	spin_lock_irqsave(&obj->child_list_lock, flags);
	list_for_each(pos, &obj->child_list_head) {
		struct sync_pt *pt =
			container_of(pos, struct sync_pt, child_list);
		sync_print_pt(s, pt, false);
	}
	spin_unlock_irqrestore(&obj->child_list_lock, flags);
}

static void sync_print_fence(struct seq_file *s, struct sync_fence *fence)
{
	wait_queue_t *pos;
	unsigned long flags;
	int i;

	seq_printf(s, "[%p] %s: %s\n", fence, fence->name,
		   sync_status_str(atomic_read(&fence->status)));

	for (i = 0; i < fence->num_fences; ++i) {
		struct sync_pt *pt =
			container_of(fence->cbs[i].sync_pt,
				     struct sync_pt, base);

		sync_print_pt(s, pt, true);
	}

	spin_lock_irqsave(&fence->wq.lock, flags);
	list_for_each_entry(pos, &fence->wq.task_list, task_list) {
		struct sync_fence_waiter *waiter;

		if (pos->func != &sync_fence_wake_up_wq)
			continue;

		waiter = container_of(pos, struct sync_fence_waiter, work);

		seq_printf(s, "waiter %pF\n", waiter->callback);
	}
	spin_unlock_irqrestore(&fence->wq.lock, flags);
}

static int sync_debugfs_show(struct seq_file *s, void *unused)
{
	unsigned long flags;
	struct list_head *pos;

	seq_puts(s, "objs:\n--------------\n");

	spin_lock_irqsave(&sync_timeline_list_lock, flags);
	list_for_each(pos, &sync_timeline_list_head) {
		struct sync_timeline *obj =
			container_of(pos, struct sync_timeline,
				     sync_timeline_list);

		sync_print_obj(s, obj);
		seq_puts(s, "\n");
	}
	spin_unlock_irqrestore(&sync_timeline_list_lock, flags);

	seq_puts(s, "fences:\n--------------\n");

	spin_lock_irqsave(&sync_fence_list_lock, flags);
	list_for_each(pos, &sync_fence_list_head) {
		struct sync_fence *fence =
			container_of(pos, struct sync_fence, sync_fence_list);

		sync_print_fence(s, fence);
		seq_puts(s, "\n");
	}
	spin_unlock_irqrestore(&sync_fence_list_lock, flags);
	return 0;
}

static int sync_debugfs_open(struct inode *inode, struct file *file)
{
	return single_open(file, sync_debugfs_show, inode->i_private);
}

static const struct file_operations sync_debugfs_fops = {
	.open		= sync_debugfs_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= single_release,
};

static __init int sync_debugfs_init(void)
{
	debugfs_create_file("sync", S_IRUGO, NULL, NULL, &sync_debugfs_fops);
	return 0;
}
late_initcall(sync_debugfs_init);

#define DUMP_CHUNK 256
static char sync_dump_buf[64 * 1024];
void sync_dump(void)
{
	struct seq_file s = {
		.buf = sync_dump_buf,
		.size = sizeof(sync_dump_buf) - 1,
	};
	int i;

	sync_debugfs_show(&s, NULL);

	for (i = 0; i < s.count; i += DUMP_CHUNK) {
		if ((s.count - i) > DUMP_CHUNK) {
			char c = s.buf[i + DUMP_CHUNK];

			s.buf[i + DUMP_CHUNK] = 0;
			pr_cont("%s", s.buf + i);
			s.buf[i + DUMP_CHUNK] = c;
		} else {
			s.buf[s.count] = 0;
			pr_cont("%s", s.buf + i);
		}
	}
}

#endif
@@ -0,0 +1,360 @@
/*
 * Fence mechanism for dma-buf to allow for asynchronous dma access
 *
 * Copyright (C) 2012 Canonical Ltd
 * Copyright (C) 2012 Texas Instruments
 *
 * Authors:
 *   Rob Clark <robdclark@gmail.com>
 *   Maarten Lankhorst <maarten.lankhorst@canonical.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 */

#ifndef __LINUX_FENCE_H
#define __LINUX_FENCE_H

#include <linux/err.h>
#include <linux/wait.h>
#include <linux/list.h>
#include <linux/bitops.h>
#include <linux/kref.h>
#include <linux/sched.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>

struct fence;
struct fence_ops;
struct fence_cb;

/**
 * struct fence - software synchronization primitive
 * @refcount: refcount for this fence
 * @ops: fence_ops associated with this fence
 * @rcu: used for releasing fence with kfree_rcu
 * @cb_list: list of all callbacks to call
 * @lock: spin_lock_irqsave used for locking
 * @context: execution context this fence belongs to, returned by
 *           fence_context_alloc()
 * @seqno: the sequence number of this fence inside the execution context,
 *         can be compared to decide which fence would be signaled later.
 * @flags: A mask of FENCE_FLAG_* defined below
 * @timestamp: Timestamp when the fence was signaled.
 * @status: Optional, only valid if < 0, must be set before calling
 *          fence_signal, indicates that the fence has completed with an error.
 *
 * the flags member must be manipulated and read using the appropriate
 * atomic ops (bit_*), so taking the spinlock will not be needed most
 * of the time.
 *
 * FENCE_FLAG_SIGNALED_BIT - fence is already signaled
 * FENCE_FLAG_ENABLE_SIGNAL_BIT - enable_signaling might have been called*
 * FENCE_FLAG_USER_BITS - start of the unused bits, can be used by the
 * implementer of the fence for its own purposes. Can be used in different
 * ways by different fence implementers, so do not rely on this.
 *
 * *) Since atomic bitops are used, this is not guaranteed to be the case.
 * Particularly, if the bit was set, but fence_signal was called right
 * before this bit was set, it would have been able to set the
 * FENCE_FLAG_SIGNALED_BIT, before enable_signaling was called.
 * Adding a check for FENCE_FLAG_SIGNALED_BIT after setting
 * FENCE_FLAG_ENABLE_SIGNAL_BIT closes this race, and makes sure that
 * after fence_signal was called, any enable_signaling call will have either
 * been completed, or never called at all.
 */
struct fence {
	struct kref refcount;
	const struct fence_ops *ops;
	struct rcu_head rcu;
	struct list_head cb_list;
	spinlock_t *lock;
	unsigned context, seqno;
	unsigned long flags;
	ktime_t timestamp;
	int status;
};

enum fence_flag_bits {
	FENCE_FLAG_SIGNALED_BIT,
	FENCE_FLAG_ENABLE_SIGNAL_BIT,
	FENCE_FLAG_USER_BITS, /* must always be last member */
};

typedef void (*fence_func_t)(struct fence *fence, struct fence_cb *cb);

/**
 * struct fence_cb - callback for fence_add_callback
 * @node: used by fence_add_callback to append this struct to fence::cb_list
 * @func: fence_func_t to call
 *
 * This struct will be initialized by fence_add_callback, additional
 * data can be passed along by embedding fence_cb in another struct.
 */
struct fence_cb {
	struct list_head node;
	fence_func_t func;
};

/**
 * struct fence_ops - operations implemented for fence
 * @get_driver_name: returns the driver name.
 * @get_timeline_name: return the name of the context this fence belongs to.
 * @enable_signaling: enable software signaling of fence.
 * @signaled: [optional] peek whether the fence is signaled, can be null.
 * @wait: custom wait implementation, or fence_default_wait.
 * @release: [optional] called on destruction of fence, can be null
 * @fill_driver_data: [optional] callback to fill in free-form debug info
 *                    Returns amount of bytes filled, or -errno.
 * @fence_value_str: [optional] fills in the value of the fence as a string
 * @timeline_value_str: [optional] fills in the current value of the timeline
 *                      as a string
 *
 * Notes on enable_signaling:
 * For fence implementations that have the capability for hw->hw
 * signaling, they can implement this op to enable the necessary
 * irqs, or insert commands into cmdstream, etc. This is called
 * in the first wait() or add_callback() path to let the fence
 * implementation know that there is another driver waiting on
 * the signal (ie. hw->sw case).
 *
 * This function can be called from atomic context, but not
 * from irq context, so normal spinlocks can be used.
 *
 * A return value of false indicates the fence already passed,
 * or some failure occurred that made it impossible to enable
 * signaling. True indicates successful enabling.
 *
 * fence->status may be set in enable_signaling, but only when false is
 * returned.
 *
 * Calling fence_signal before enable_signaling is called allows
 * for a tiny race window in which enable_signaling is called during,
 * before, or after fence_signal. To fight this, it is recommended
 * that before enable_signaling returns true an extra reference is
 * taken on the fence, to be released when the fence is signaled.
 * This will mean fence_signal will still be called twice, but
 * the second time will be a noop since it was already signaled.
 *
 * Notes on signaled:
 * May set fence->status if returning true.
 *
 * Notes on wait:
 * Must not be NULL, set to fence_default_wait for default implementation.
 * the fence_default_wait implementation should work for any fence, as long
 * as enable_signaling works correctly.
 *
 * Must return -ERESTARTSYS if the wait is intr = true and the wait was
 * interrupted, and remaining jiffies if fence has signaled, or 0 if wait
 * timed out. Can also return other error values on custom implementations,
 * which should be treated as if the fence is signaled. For example a hardware
 * lockup could be reported like that.
 *
 * Notes on release:
 * Can be NULL, this function allows additional commands to run on
 * destruction of the fence. Can be called from irq context.
 * If pointer is set to NULL, kfree will get called instead.
 */

struct fence_ops {
	const char * (*get_driver_name)(struct fence *fence);
	const char * (*get_timeline_name)(struct fence *fence);
	bool (*enable_signaling)(struct fence *fence);
	bool (*signaled)(struct fence *fence);
	signed long (*wait)(struct fence *fence, bool intr, signed long timeout);
	void (*release)(struct fence *fence);

	int (*fill_driver_data)(struct fence *fence, void *data, int size);
	void (*fence_value_str)(struct fence *fence, char *str, int size);
	void (*timeline_value_str)(struct fence *fence, char *str, int size);
};

void fence_init(struct fence *fence, const struct fence_ops *ops,
		spinlock_t *lock, unsigned context, unsigned seqno);

void fence_release(struct kref *kref);
void fence_free(struct fence *fence);

/**
 * fence_get - increases refcount of the fence
 * @fence:	[in]	fence to increase refcount of
 *
 * Returns the same fence, with refcount increased by 1.
 */
static inline struct fence *fence_get(struct fence *fence)
{
	if (fence)
		kref_get(&fence->refcount);
	return fence;
}

/**
 * fence_get_rcu - get a fence from a reservation_object_list with rcu read lock
 * @fence:	[in]	fence to increase refcount of
 *
 * Function returns NULL if no refcount could be obtained, or the fence.
 */
static inline struct fence *fence_get_rcu(struct fence *fence)
{
	if (kref_get_unless_zero(&fence->refcount))
		return fence;
	else
		return NULL;
}

/**
 * fence_put - decreases refcount of the fence
 * @fence:	[in]	fence to reduce refcount of
 */
static inline void fence_put(struct fence *fence)
{
	if (fence)
		kref_put(&fence->refcount, fence_release);
}

int fence_signal(struct fence *fence);
int fence_signal_locked(struct fence *fence);
signed long fence_default_wait(struct fence *fence, bool intr, signed long timeout);
int fence_add_callback(struct fence *fence, struct fence_cb *cb,
		       fence_func_t func);
bool fence_remove_callback(struct fence *fence, struct fence_cb *cb);
void fence_enable_sw_signaling(struct fence *fence);

/**
 * fence_is_signaled_locked - Return an indication if the fence is signaled yet.
 * @fence:	[in]	the fence to check
 *
 * Returns true if the fence was already signaled, false if not. Since this
 * function doesn't enable signaling, it is not guaranteed to ever return
 * true if fence_add_callback, fence_wait or fence_enable_sw_signaling
 * haven't been called before.
 *
 * This function requires fence->lock to be held.
 */
static inline bool
fence_is_signaled_locked(struct fence *fence)
{
	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return true;

	if (fence->ops->signaled && fence->ops->signaled(fence)) {
		fence_signal_locked(fence);
		return true;
	}

	return false;
}

/**
 * fence_is_signaled - Return an indication if the fence is signaled yet.
 * @fence:	[in]	the fence to check
 *
 * Returns true if the fence was already signaled, false if not. Since this
 * function doesn't enable signaling, it is not guaranteed to ever return
 * true if fence_add_callback, fence_wait or fence_enable_sw_signaling
 * haven't been called before.
 *
 * It's recommended for seqno fences to call fence_signal when the
 * operation is complete, it makes it possible to prevent issues from
 * wraparound between time of issue and time of use by checking the return
 * value of this function before calling hardware-specific wait instructions.
 */
static inline bool
fence_is_signaled(struct fence *fence)
{
	if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags))
		return true;

	if (fence->ops->signaled && fence->ops->signaled(fence)) {
		fence_signal(fence);
		return true;
	}

	return false;
}

/**
 * fence_later - return the chronologically later fence
 * @f1:	[in]	the first fence from the same context
 * @f2:	[in]	the second fence from the same context
 *
 * Returns NULL if both fences are signaled, otherwise the fence that would be
 * signaled last. Both fences must be from the same context, since a seqno is
 * not re-used across contexts.
 */
static inline struct fence *fence_later(struct fence *f1, struct fence *f2)
{
	if (WARN_ON(f1->context != f2->context))
		return NULL;

	/*
	 * can't check just FENCE_FLAG_SIGNALED_BIT here, it may never have been
	 * set if enable_signaling wasn't called, and enabling that here is
	 * overkill.
	 */
	if (f2->seqno - f1->seqno <= INT_MAX)
		return fence_is_signaled(f2) ? NULL : f2;
	else
		return fence_is_signaled(f1) ? NULL : f1;
}

signed long fence_wait_timeout(struct fence *, bool intr, signed long timeout);


/**
 * fence_wait - sleep until the fence gets signaled
 * @fence:	[in]	the fence to wait on
 * @intr:	[in]	if true, do an interruptible wait
 *
 * This function will return -ERESTARTSYS if interrupted by a signal,
 * or 0 if the fence was signaled. Other error values may be
 * returned on custom implementations.
 *
 * Performs a synchronous wait on this fence. It is assumed the caller
 * directly or indirectly holds a reference to the fence, otherwise the
 * fence might be freed before return, resulting in undefined behavior.
 */
static inline signed long fence_wait(struct fence *fence, bool intr)
{
	signed long ret;

	/* Since fence_wait_timeout cannot timeout with
	 * MAX_SCHEDULE_TIMEOUT, only valid return values are
	 * -ERESTARTSYS and MAX_SCHEDULE_TIMEOUT.
	 */
	ret = fence_wait_timeout(fence, intr, MAX_SCHEDULE_TIMEOUT);

	return ret < 0 ? ret : 0;
}

unsigned fence_context_alloc(unsigned num);

#define FENCE_TRACE(f, fmt, args...) \
	do {								\
		struct fence *__ff = (f);				\
		if (config_enabled(CONFIG_FENCE_TRACE))			\
			pr_info("f %u#%u: " fmt,			\
				__ff->context, __ff->seqno, ##args);	\
	} while (0)

#define FENCE_WARN(f, fmt, args...) \
	do {								\
		struct fence *__ff = (f);				\
		pr_warn("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
			##args);					\
	} while (0)

#define FENCE_ERR(f, fmt, args...) \
	do {								\
		struct fence *__ff = (f);				\
		pr_err("f %u#%u: " fmt, __ff->context, __ff->seqno,	\
		       ##args);						\
	} while (0)

#endif /* __LINUX_FENCE_H */
@@ -0,0 +1,116 @@
/*
 * seqno-fence, using a dma-buf to synchronize fencing
 *
 * Copyright (C) 2012 Texas Instruments
 * Copyright (C) 2012 Canonical Ltd
 * Authors:
 *   Rob Clark <robdclark@gmail.com>
 *   Maarten Lankhorst <maarten.lankhorst@canonical.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published by
 * the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
 * more details.
 */

#ifndef __LINUX_SEQNO_FENCE_H
#define __LINUX_SEQNO_FENCE_H

#include <linux/fence.h>
#include <linux/dma-buf.h>

enum seqno_fence_condition {
	SEQNO_FENCE_WAIT_GEQUAL,
	SEQNO_FENCE_WAIT_NONZERO
};

struct seqno_fence {
	struct fence base;

	const struct fence_ops *ops;
	struct dma_buf *sync_buf;
	uint32_t seqno_ofs;
	enum seqno_fence_condition condition;
};

extern const struct fence_ops seqno_fence_ops;

/**
 * to_seqno_fence - cast a fence to a seqno_fence
 * @fence: fence to cast to a seqno_fence
 *
 * Returns NULL if the fence is not a seqno_fence,
 * or the seqno_fence otherwise.
 */
static inline struct seqno_fence *
to_seqno_fence(struct fence *fence)
{
	if (fence->ops != &seqno_fence_ops)
		return NULL;
	return container_of(fence, struct seqno_fence, base);
}

/**
 * seqno_fence_init - initialize a seqno fence
 * @fence: seqno_fence to initialize
 * @lock: pointer to spinlock to use for fence
 * @sync_buf: buffer containing the memory location to signal on
 * @context: the execution context this fence is a part of
 * @seqno_ofs: the offset within @sync_buf
 * @seqno: the sequence # to signal on
 * @cond: fence wait condition
 * @ops: the fence_ops for operations on this seqno fence
 *
 * This function initializes a struct seqno_fence with passed parameters,
 * and takes a reference on sync_buf which is released on fence destruction.
 *
 * A seqno_fence is a dma_fence which can complete in software when
 * enable_signaling is called, but it also completes when
 * (s32)((sync_buf)[seqno_ofs] - seqno) >= 0 is true
 *
 * The seqno_fence will take a refcount on the sync_buf until it's
 * destroyed, but actual lifetime of sync_buf may be longer if one of the
 * callers take a reference to it.
 *
 * Certain hardware have instructions to insert this type of wait condition
 * in the command stream, so no intervention from software would be needed.
 * This type of fence can be destroyed before completed, however a reference
 * on the sync_buf dma-buf can be taken. It is encouraged to re-use the same
 * dma-buf for sync_buf, since mapping or unmapping the sync_buf to the
 * device's vm can be expensive.
 *
 * It is recommended for creators of seqno_fence to call fence_signal
 * before destruction. This will prevent possible issues from wraparound at
 * time of issue vs time of check, since users can check fence_is_signaled
 * before submitting instructions for the hardware to wait on the fence.
 * However, when ops.enable_signaling is not called, it doesn't have to be
 * done as soon as possible, just before there's any real danger of seqno
 * wraparound.
 */
static inline void
seqno_fence_init(struct seqno_fence *fence, spinlock_t *lock,
		 struct dma_buf *sync_buf, uint32_t context,
		 uint32_t seqno_ofs, uint32_t seqno,
		 enum seqno_fence_condition cond,
		 const struct fence_ops *ops)
{
	BUG_ON(!fence || !sync_buf || !ops);
	BUG_ON(!ops->wait || !ops->enable_signaling ||
	       !ops->get_driver_name || !ops->get_timeline_name);

	/*
	 * ops is used in fence_init for get_driver_name, so needs to be
	 * initialized first
	 */
	fence->ops = ops;
	fence_init(&fence->base, &seqno_fence_ops, lock, context, seqno);
	get_dma_buf(sync_buf);
	fence->sync_buf = sync_buf;
	fence->seqno_ofs = seqno_ofs;
	fence->condition = cond;
}

#endif /* __LINUX_SEQNO_FENCE_H */
@@ -0,0 +1,128 @@
#undef TRACE_SYSTEM
#define TRACE_SYSTEM fence

#if !defined(_TRACE_FENCE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_FENCE_H

#include <linux/tracepoint.h>

struct fence;

TRACE_EVENT(fence_annotate_wait_on,

	/* fence: the fence waiting on f1, f1: the fence to be waited on. */
	TP_PROTO(struct fence *fence, struct fence *f1),

	TP_ARGS(fence, f1),

	TP_STRUCT__entry(
		__string(driver, fence->ops->get_driver_name(fence))
		__string(timeline, fence->ops->get_timeline_name(fence))
		__field(unsigned int, context)
		__field(unsigned int, seqno)

		__string(waiting_driver, f1->ops->get_driver_name(f1))
		__string(waiting_timeline, f1->ops->get_timeline_name(f1))
		__field(unsigned int, waiting_context)
		__field(unsigned int, waiting_seqno)
	),

	TP_fast_assign(
		__assign_str(driver, fence->ops->get_driver_name(fence))
		__assign_str(timeline, fence->ops->get_timeline_name(fence))
		__entry->context = fence->context;
		__entry->seqno = fence->seqno;

		__assign_str(waiting_driver, f1->ops->get_driver_name(f1))
		__assign_str(waiting_timeline, f1->ops->get_timeline_name(f1))
		__entry->waiting_context = f1->context;
		__entry->waiting_seqno = f1->seqno;
	),

	TP_printk("driver=%s timeline=%s context=%u seqno=%u "	\
		  "waits on driver=%s timeline=%s context=%u seqno=%u",
		  __get_str(driver), __get_str(timeline), __entry->context,
		  __entry->seqno,
		  __get_str(waiting_driver), __get_str(waiting_timeline),
		  __entry->waiting_context, __entry->waiting_seqno)
);

DECLARE_EVENT_CLASS(fence,

	TP_PROTO(struct fence *fence),

	TP_ARGS(fence),

	TP_STRUCT__entry(
		__string(driver, fence->ops->get_driver_name(fence))
		__string(timeline, fence->ops->get_timeline_name(fence))
		__field(unsigned int, context)
		__field(unsigned int, seqno)
	),

	TP_fast_assign(
		__assign_str(driver, fence->ops->get_driver_name(fence))
		__assign_str(timeline, fence->ops->get_timeline_name(fence))
		__entry->context = fence->context;
		__entry->seqno = fence->seqno;
	),
||||
|
||||
TP_printk("driver=%s timeline=%s context=%u seqno=%u", |
||||
__get_str(driver), __get_str(timeline), __entry->context, |
||||
__entry->seqno) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_emit, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_init, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_destroy, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_enable_signal, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_signaled, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_wait_start, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
DEFINE_EVENT(fence, fence_wait_end, |
||||
|
||||
TP_PROTO(struct fence *fence), |
||||
|
||||
TP_ARGS(fence) |
||||
); |
||||
|
||||
#endif /* _TRACE_FENCE_H */ |
||||
|
||||
/* This part must be outside protection */ |
||||
#include <trace/define_trace.h> |
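For reference, every event in the class above renders as a single `key=value` line per its TP_printk format string. A minimal Python model of that formatting (the driver and timeline names below are made up purely for illustration):

```python
# Model of the TP_printk format used by the fence event class:
# "driver=%s timeline=%s context=%u seqno=%u"
def format_fence_event(driver, timeline, context, seqno):
    return "driver=%s timeline=%s context=%u seqno=%u" % (
        driver, timeline, context, seqno)

# Hypothetical values, just to show the rendered shape of a trace line.
line = format_fence_event("drm", "gfx-ring", 1, 42)
print(line)  # driver=drm timeline=gfx-ring context=1 seqno=42
```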
@@ -0,0 +1,117 @@
/*
 * This module provides an interface to trigger and test firmware loading.
 *
 * It is designed to be used for basic evaluation of the firmware loading
 * subsystem (for example when validating firmware verification). It lacks
 * any extra dependencies, and will not normally be loaded by the system
 * unless explicitly requested by name.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/firmware.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

static DEFINE_MUTEX(test_fw_mutex);
static const struct firmware *test_firmware;

static ssize_t test_fw_misc_read(struct file *f, char __user *buf,
				 size_t size, loff_t *offset)
{
	ssize_t rc = 0;

	mutex_lock(&test_fw_mutex);
	if (test_firmware)
		rc = simple_read_from_buffer(buf, size, offset,
					     test_firmware->data,
					     test_firmware->size);
	mutex_unlock(&test_fw_mutex);
	return rc;
}

static const struct file_operations test_fw_fops = {
	.owner = THIS_MODULE,
	.read = test_fw_misc_read,
};

static struct miscdevice test_fw_misc_device = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "test_firmware",
	.fops = &test_fw_fops,
};

static ssize_t trigger_request_store(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
{
	int rc;
	char *name;

	name = kzalloc(count + 1, GFP_KERNEL);
	if (!name)
		return -ENOMEM;
	memcpy(name, buf, count);

	pr_info("loading '%s'\n", name);

	mutex_lock(&test_fw_mutex);
	release_firmware(test_firmware);
	test_firmware = NULL;
	rc = request_firmware(&test_firmware, name, dev);
	if (rc)
		pr_info("load of '%s' failed: %d\n", name, rc);
	pr_info("loaded: %zu\n", test_firmware ? test_firmware->size : 0);
	mutex_unlock(&test_fw_mutex);

	kfree(name);

	return count;
}
static DEVICE_ATTR_WO(trigger_request);

static int __init test_firmware_init(void)
{
	int rc;

	rc = misc_register(&test_fw_misc_device);
	if (rc) {
		pr_err("could not register misc device: %d\n", rc);
		return rc;
	}
	rc = device_create_file(test_fw_misc_device.this_device,
				&dev_attr_trigger_request);
	if (rc) {
		pr_err("could not create sysfs interface: %d\n", rc);
		goto dereg;
	}

	pr_warn("interface ready\n");

	return 0;
dereg:
	misc_deregister(&test_fw_misc_device);
	return rc;
}

module_init(test_firmware_init);

static void __exit test_firmware_exit(void)
{
	release_firmware(test_firmware);
	device_remove_file(test_fw_misc_device.this_device,
			   &dev_attr_trigger_request);
	misc_deregister(&test_fw_misc_device);
	pr_warn("removed interface\n");
}

module_exit(test_firmware_exit);

MODULE_AUTHOR("Kees Cook <keescook@chromium.org>");
MODULE_LICENSE("GPL");
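The read path above delegates to simple_read_from_buffer(), which bounds each read by the firmware blob's size and advances the file offset, so `cat /dev/test_firmware` returns exactly the loaded contents. A userland Python model of that behavior (a sketch of the semantics, not the kernel implementation):

```python
def read_from_buffer(data, size, offset):
    """Model of simple_read_from_buffer(): return (chunk, new_offset).

    A read starting at or past the end returns an empty chunk (userspace
    sees EOF); otherwise the chunk is clamped to the bytes remaining and
    the offset advances by the number of bytes returned.
    """
    if offset >= len(data):
        return b"", offset
    chunk = data[offset:offset + size]
    return chunk, offset + len(chunk)

blob = b"ABCD0123\n"  # same contents the firmware selftests load

first, off = read_from_buffer(blob, 4, 0)     # b"ABCD", offset now 4
rest, off = read_from_buffer(blob, 100, off)  # clamped to remaining bytes
eof, off = read_from_buffer(blob, 4, off)     # b"" once past the end
```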
@@ -1,90 +0,0 @@
virtual patch
virtual report

@depends on patch@
expression base, dev, res;
@@

-base = devm_request_and_ioremap(dev, res);
+base = devm_ioremap_resource(dev, res);
...
if (
-base == NULL
+IS_ERR(base)
|| ...) {
<...
-	return ...;
+	return PTR_ERR(base);
...>
}

@depends on patch@
expression e, E, ret;
identifier l;
@@

e = devm_ioremap_resource(...);
...
if (IS_ERR(e) || ...) {
	... when any
-	ret = E;
+	ret = PTR_ERR(e);
	...
(
	return ret;
|
	goto l;
)
}

@depends on patch@
expression e;
@@

e = devm_ioremap_resource(...);
...
if (IS_ERR(e) || ...) {
	...
-	\(dev_dbg\|dev_err\|pr_debug\|pr_err\|DRM_ERROR\)(...);
	...
}

@depends on patch@
expression e;
identifier l;
@@

e = devm_ioremap_resource(...);
...
if (IS_ERR(e) || ...)
-{
(
	return ...;
|
	goto l;
)
-}

@r depends on report@
expression e;
identifier l;
position p1;
@@

*e = devm_request_and_ioremap@p1(...);
...
if (e == NULL || ...) {
	...
(
	return ...;
|
	goto l;
)
}

@script:python depends on r@
p1 << r.p1;
@@

msg = "ERROR: deprecated devm_request_and_ioremap() API used on line %s" % (p1[0].line)
coccilib.report.print_report(p1[0], msg)
@@ -0,0 +1,27 @@
# Makefile for firmware loading selftests

# No binaries, but make sure arg-less "make" doesn't trigger "run_tests"
all:

fw_filesystem:
	@if /bin/sh ./fw_filesystem.sh ; then \
		echo "fw_filesystem: ok"; \
	else \
		echo "fw_filesystem: [FAIL]"; \
		exit 1; \
	fi

fw_userhelper:
	@if /bin/sh ./fw_userhelper.sh ; then \
		echo "fw_userhelper: ok"; \
	else \
		echo "fw_userhelper: [FAIL]"; \
		exit 1; \
	fi

run_tests: all fw_filesystem fw_userhelper

# Nothing to clean up.
clean:

.PHONY: all clean run_tests fw_filesystem fw_userhelper
@@ -0,0 +1,62 @@
#!/bin/sh
# This validates that the kernel will load firmware out of its list of
# firmware locations on disk. Since the user helper does similar work,
# we reset the custom load directory to a location the user helper doesn't
# know so we can be sure we're not accidentally testing the user helper.
set -e

modprobe test_firmware

DIR=/sys/devices/virtual/misc/test_firmware

OLD_TIMEOUT=$(cat /sys/class/firmware/timeout)
OLD_FWPATH=$(cat /sys/module/firmware_class/parameters/path)

FWPATH=$(mktemp -d)
FW="$FWPATH/test-firmware.bin"

test_finish()
{
	echo "$OLD_TIMEOUT" >/sys/class/firmware/timeout
	echo -n "$OLD_FWPATH" >/sys/module/firmware_class/parameters/path
	rm -f "$FW"
	rmdir "$FWPATH"
}

trap "test_finish" EXIT

# Turn down the timeout so failures don't take so long.
echo 1 >/sys/class/firmware/timeout
# Set the kernel search path.
echo -n "$FWPATH" >/sys/module/firmware_class/parameters/path

# This is an unlikely real-world firmware content. :)
echo "ABCD0123" >"$FW"

NAME=$(basename "$FW")

# Request a firmware that doesn't exist, it should fail.
echo -n "nope-$NAME" >"$DIR"/trigger_request
if diff -q "$FW" /dev/test_firmware >/dev/null ; then
	echo "$0: firmware was not expected to match" >&2
	exit 1
else
	echo "$0: timeout works"
fi

# This should succeed via kernel load or will fail after 1 second after
# being handed over to the user helper, which won't find the fw either.
if ! echo -n "$NAME" >"$DIR"/trigger_request ; then
	echo "$0: could not trigger request" >&2
	exit 1
fi

# Verify the contents are what we expect.
if ! diff -q "$FW" /dev/test_firmware >/dev/null ; then
	echo "$0: firmware was not loaded" >&2
	exit 1
else
	echo "$0: filesystem loading works"
fi

exit 0
@@ -0,0 +1,89 @@
#!/bin/sh
# This validates that the kernel will fall back to using the user helper
# to load firmware it can't find on disk itself. We must request a firmware
# that the kernel won't find, and any installed helper (e.g. udev) also
# won't find so that we can do the load ourself manually.
set -e

modprobe test_firmware

DIR=/sys/devices/virtual/misc/test_firmware

OLD_TIMEOUT=$(cat /sys/class/firmware/timeout)

FWPATH=$(mktemp -d)
FW="$FWPATH/test-firmware.bin"

test_finish()
{
	echo "$OLD_TIMEOUT" >/sys/class/firmware/timeout
	rm -f "$FW"
	rmdir "$FWPATH"
}

load_fw()
{
	local name="$1"
	local file="$2"

	# This will block until our load (below) has finished.
	echo -n "$name" >"$DIR"/trigger_request &

	# Give kernel a chance to react.
	local timeout=10
	while [ ! -e "$DIR"/"$name"/loading ]; do
		sleep 0.1
		timeout=$(( $timeout - 1 ))
		if [ "$timeout" -eq 0 ]; then
			echo "$0: firmware interface never appeared" >&2
			exit 1
		fi
	done

	echo 1 >"$DIR"/"$name"/loading
	cat "$file" >"$DIR"/"$name"/data
	echo 0 >"$DIR"/"$name"/loading

	# Wait for request to finish.
	wait
}

trap "test_finish" EXIT

# This is an unlikely real-world firmware content. :)
echo "ABCD0123" >"$FW"
NAME=$(basename "$FW")

# Test failure when doing nothing (timeout works).
echo 1 >/sys/class/firmware/timeout
echo -n "$NAME" >"$DIR"/trigger_request
if diff -q "$FW" /dev/test_firmware >/dev/null ; then
	echo "$0: firmware was not expected to match" >&2
	exit 1
else
	echo "$0: timeout works"
fi

# Put timeout high enough for us to do work but not so long that failures
# slow down this test too much.
echo 4 >/sys/class/firmware/timeout

# Load this script instead of the desired firmware.
load_fw "$NAME" "$0"
if diff -q "$FW" /dev/test_firmware >/dev/null ; then
	echo "$0: firmware was not expected to match" >&2
	exit 1
else
	echo "$0: firmware comparison works"
fi

# Do a proper load, which should work correctly.
load_fw "$NAME" "$FW"
if ! diff -q "$FW" /dev/test_firmware >/dev/null ; then
	echo "$0: firmware was not loaded" >&2
	exit 1
else
	echo "$0: user helper firmware loading works"
fi

exit 0
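The heart of load_fw above is a bounded poll: wait for the /sys loading interface to appear, giving up after a fixed number of tries. The same pattern can be sketched in isolation (wait_for_path is a hypothetical name for illustration; it is not part of the selftests):

```shell
# Poll for a path to appear, with a bounded number of 0.1s tries,
# the same shape as load_fw's wait for "$DIR"/"$name"/loading.
wait_for_path()
{
	path="$1"
	tries="${2:-50}"	# 50 * 0.1s = 5 seconds by default
	while [ ! -e "$path" ]; do
		sleep 0.1
		tries=$(( tries - 1 ))
		if [ "$tries" -eq 0 ]; then
			return 1
		fi
	done
	return 0
}

# Demonstration: the path appears shortly after we start polling.
tmp=$(mktemp -u)
( sleep 0.3; touch "$tmp" ) &
wait_for_path "$tmp" && echo "appeared"
wait
rm -f "$tmp"
```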