Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Michael Kerrisk <mtk.manpages@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
Michael Olbrich reported that his test program fails when built with
-O2 -mcpu=cortex-a8 -mfpu=neon, and a kernel which supports v6 and v7
CPUs:
    #include <assert.h>
    #include <stdint.h>

    volatile int x = 2;
    volatile int64_t y = 2;

    int main(void)
    {
        volatile int a = 0;
        volatile int64_t b = 0;
        while (1) {
            a = (a + x) % (1 << 30);
            b = (b + y) % (1 << 30);
            assert(a == b);
        }
    }
and two instances are run. When built for just v7 CPUs, this program
works fine. It uses the NEON "vadd.i64 d19, d18, d16" instruction.
It appears that we do not save the upper 16 double-precision VFP
registers (d16-d31) across context switches when the kernel is built
for v6 CPUs. Fix that.
Cc: <stable@vger.kernel.org>
Tested-By: Michael Olbrich <m.olbrich@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
It appears that performing a "movs pc, lr" to force the kernel into
SVC mode on the OMAP2420 (ARM1136) prevents the platform from booting
correctly (change introduced in 80c59da [ARM: virt: allow the kernel
to be entered in HYP mode]).
While the reason it fails is not yet understood (the same code runs
fine on the OMAP2430, which is also ARM1136-based), partially revert
that change for platforms that do not enter the kernel in HYP mode,
preserving the new feature and restoring a working kernel on the
OMAP2420.
Reported-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The syscall tracing patch introduces a compile bug in lttng-modules
when the latter calls syscall_get_nr(), similar to the following:
<path-to-linux>/arch/arm/include/asm/syscall.h:21:2: error: implicit declaration of function 'task_thread_info' [-Werror=implicit-function-declaration]
The issue is that we are using task_thread_info() in the
syscall_get_nr() function in asm/syscall.h without explicitly
including sched.h from this file, so we can expect this bug to
surface any time syscall_get_nr() is called.
Explicitly including sched.h solves the problem.
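For reference, a rough sketch (not the verbatim header) of the accessor
that depends on task_thread_info() and therefore on <linux/sched.h>; the
name of the thread_info field is given from memory:

    #include <linux/sched.h>   /* the missing include added by this patch */

    static inline int syscall_get_nr(struct task_struct *task,
                                     struct pt_regs *regs)
    {
        /* the saved syscall number lives in thread_info on ARM */
        return task_thread_info(task)->syscall;
    }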
Cc: <stable@vger.kernel.org> [3.5, 3.6]
Signed-off-by: Wade Farnsworth <wade_farnsworth@mentor.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Set up empty UAPI Kbuild files to be populated by the header splitter.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
Convert #include "..." to #include <path/...> in kernel system headers.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
With ixp2xxx removed, there are no platforms that define arch_is_coherent,
so the last occurrences of arch_is_coherent can be removed. Any new
platform with coherent I/O should use the coherent DMA mapping functions.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
arch_is_coherent is problematic as it is a global symbol. This
doesn't work for multi-platform kernels or for platforms which can
support per-device coherent DMA.
This adds arm_coherent_dma_ops to be used for devices which are
connected coherently (i.e. to the ACP port on a Cortex-A9 or A15).
The arm_dma_ops are modified at boot when arch_is_coherent is true.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Make the default just return 0. The current default (checking
TIF_POLLING_NRFLAG) is moved into the architectures that need it;
ones that don't do polling in their idle threads don't need to
define TIF_POLLING_NRFLAG at all.
ia64 defined both TS_POLLING (used by its tsk_is_polling())
and TIF_POLLING_NRFLAG (not used at all). Killed the latter...
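A condensed sketch of the resulting arrangement (the exact file layout
is an assumption made for illustration):

    /* core scheduler code: fall back to 0 when the arch defines nothing */
    #ifndef tsk_is_polling
    #define tsk_is_polling(t) 0
    #endif

    /* an architecture that really does poll in its idle loop opts back in
     * from its own thread_info.h */
    #define tsk_is_polling(t) test_tsk_thread_flag(t, TIF_POLLING_NRFLAG)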
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
... no need to read current_thread_info()->task only to
feed it to task_stack_page() immediately afterwards.
Moreover, not using current_thread_info() at all ends
up with better assembler - we need a location very close
to the top of the kernel stack page and it's actually better
to OR with 0x1fff and then subtract a small constant than
to AND with ~0x1fff and then add a large one. Both & and |
would be a couple of insns (mvn lsr/mvn lsl for |, a pair of
bic for &), but the following addition would cost a pair of
adds while the subtraction ends up as a single sub.
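To make the arithmetic concrete, an illustrative (non-kernel) snippet,
assuming an 8KiB stack (THREAD_SIZE = 0x2000) and a target 8 bytes
below the top of the stack page; both forms compute the same address,
but the first maps onto the cheaper instruction sequence:

    unsigned long sp = 0xc7a31e40UL;               /* example stack pointer */
    void *a = (void *)((sp | 0x1fff) - 7);         /* orr + sub, small immediate */
    void *b = (void *)((sp & ~0x1fffUL) + 0x1ff8); /* bic + add of a large constant */
    /* a == b: both are 8 bytes below the top of the 8KiB stack page */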
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Move the mapping of Elf_[SPE]hdr, Elf_Addr, Elf_Sym, Elf_Dyn, Elf_Rel/Rela,
ELF_R_TYPE() and ELF_R_SYM() to either the 32-bit or the 64-bit version
into asm-generic/module.h for all arches bar MIPS.
Also, use the generic definition of mod_arch_specific where possible.
To this end, I've defined three new config bools:
(*) HAVE_MOD_ARCH_SPECIFIC
Arches define this if they don't want to use the empty generic
mod_arch_specific struct.
(*) MODULES_USE_ELF_RELA
Arches define this if their modules can contain RELA records. This causes
the Elf_Rela mapping to be emitted and allows apply_relocate_add() to be
defined by the arch rather than have the core emit an error message.
(*) MODULES_USE_ELF_REL
Arches define this if their modules can contain REL records. This causes
the Elf_Rel mapping to be emitted and allows apply_relocate() to be
defined by the arch rather than have the core emit an error message.
Note that it is possible to allow both REL and RELA records: m68k and mips are
two arches that do this.
With this, some arch asm/module.h files can be deleted entirely and replaced
with a generic-y marker in the arch Kbuild file.
Additionally, I have removed the bits from m32r and score that handle the
unsupported type of relocation record as that's now handled centrally.
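A condensed sketch of the shape of the generic header (not verbatim;
the RELA half is analogous):

    /* asm-generic/module.h, roughly */
    struct mod_arch_specific {
    };

    #ifdef CONFIG_64BIT
    #define Elf_Shdr   Elf64_Shdr
    #define Elf_Sym    Elf64_Sym
    #define Elf_Ehdr   Elf64_Ehdr
    #ifdef CONFIG_MODULES_USE_ELF_REL
    #define Elf_Rel    Elf64_Rel
    #endif
    #else
    #define Elf_Shdr   Elf32_Shdr
    #define Elf_Sym    Elf32_Sym
    #define Elf_Ehdr   Elf32_Ehdr
    #ifdef CONFIG_MODULES_USE_ELF_REL
    #define Elf_Rel    Elf32_Rel
    #endif
    #endif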
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
The current timer-based delay loop relies on the architected timer to
initiate the switch away from the polling-based implementation. This is
unfortunate for platforms without the architected timers but with a
suitable delay source (that is, constant frequency, always powered-up
and ticking as long as the CPUs are online).
This patch introduces a registration mechanism for the delay timer
(which provides an unconditional read_current_timer implementation) and
updates the architected timer code to use the new interface.
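The registration interface is essentially the following (sketched; see
asm/delay.h for the authoritative declarations):

    struct delay_timer {
        unsigned long (*read_current_timer)(void);
        unsigned long freq;
    };

    void register_current_timer_delay(const struct delay_timer *timer);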
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
ARM v7 architecture introduced the concept of cache levels and related
control registers. New processors like A7 and A15 embed an L2 unified cache
controller that becomes part of the cache level hierarchy. Some operations in
the kernel like cpu_suspend and __cpu_disable do not require a flush of the
entire cache hierarchy to DRAM but just the cache levels belonging to the
Level of Unification Inner Shareable (LoUIS), which in most ARM v7
systems correspond to L1.
The current cache flushing API used in cpu_suspend and __cpu_disable,
flush_cache_all(), ends up flushing the whole cache hierarchy since for
v7 it cleans and invalidates all cache levels up to Level of Coherency
(LoC) which cripples system performance when used in hot paths like hotplug
and cpuidle.
Therefore a new kernel cache maintenance API must be added to cope with
the latest ARM system requirements.
This patch adds flush_cache_louis() to the ARM kernel cache maintenance API.
This function cleans and invalidates all data cache levels up to the
Level of Unification Inner Shareable (LoUIS) and invalidates the
instruction cache on processors that support it (v7 and later).
This patch also creates an alias of the cache LoUIS function to flush_kern_all
for all processor versions prior to v7, so that the current cache flushing
behaviour is unchanged for those processors.
v7 cache maintenance code implements a cache LoUIS function that cleans and
invalidates the D-cache up to LoUIS and invalidates the I-cache, according
to the new API.
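A minimal illustration of the intended use in a hotplug-style path (the
call-site function name is hypothetical):

    #include <asm/cacheflush.h>

    static void example_cpu_teardown(void)
    {
        /* Only the levels up to LoUIS (typically L1) need to be cleaned
         * and invalidated here; flush_cache_all() would needlessly go
         * all the way to the Level of Coherency. */
        flush_cache_louis();
    }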
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
kcmp has appeared on x86, but has not been noticed because
checksyscalls.sh is broken at the moment. Reserve ARM syscall 378
for this should we ever need it, and add an __IGNORE entry for this
unimplemented syscall.
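Roughly, the reservation amounts to the following (illustrative
placement only):

    /* in the syscall number list: slot 378 is set aside */
    /* 378 reserved for kcmp */

    /* and checksyscalls.sh is silenced for the unimplemented call */
    #define __IGNORE_kcmp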
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Remove the offset from ipi_msg_type and assume that SGI0 is the
wakeup interrupt now that all WFI hotplug users call
gic_raise_softirq() with 0 instead of 1. This allows us to
track how many wakeup interrupts are sent and also removes the
unknown IPI printk message for WFI hotplug based systems.
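The resulting message numbering, sketched (names beyond IPI_WAKEUP are
given from memory and may not match the file exactly):

    enum ipi_msg_type {
        IPI_WAKEUP,      /* SGI0: now counted, no longer "unknown IPI" */
        IPI_TIMER,
        IPI_RESCHEDULE,
        IPI_CALL_FUNC,
        IPI_CALL_FUNC_SINGLE,
        IPI_CPU_STOP,
    };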
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As specified by ftrace-design.txt, TIF_SYSCALL_TRACEPOINT was
added, as well as NR_syscalls in asm/unistd.h. Additionally,
__sys_trace was modified to call trace_sys_enter and
trace_sys_exit when appropriate.
Tests #2 - #4 of "perf test" now complete successfully.
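The C side of the hook amounts to something like the following (the
real change also wires this up from the __sys_trace assembly path):

    if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
        trace_sys_enter(regs, scno);

    /* ... the system call runs ... */

    if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
        trace_sys_exit(regs, ret);   /* ret: the syscall's return value */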
Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Signed-off-by: Wade Farnsworth <wade_farnsworth@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
In order to easily detect pathological cases, print some diagnostics
when the kernel boots.
This also provides helpers to detect that HYP mode is actually available,
which can be used by other subsystems to enable HYP specific features.
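The helpers take roughly this shape (declarations only; the actual
implementations live in asm/virt.h and compare the recorded boot-time
CPU mode against HYP mode):

    bool is_hyp_mode_available(void);   /* all CPUs entered in HYP mode */
    bool is_hyp_mode_mismatched(void);  /* CPUs disagreed about the boot mode */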
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
This patch does two things:
* Ensure that asynchronous aborts are masked at kernel entry.
The bootloader should be masking these anyway, but this reduces
the damage window just in case it doesn't.
* Enter svc mode via exception return to ensure that CPU state is
properly serialised. This does not matter when switching from
an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
parlance), but it potentially does matter when switching from
another privileged mode such as hyp mode.
This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Enabling boot from HYP mode requires the use of some more
virt-specific instructions ("eret" and "msr elr_hyp, reg").
Add the necessary encodings to asm/opcodes-virt.h.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Some subsystems (KVM for example) need access to a cycle counter.
In the KVM case, this is used to measure the time delta between
host and guest in order to accurately generate timer events for
the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
For now, this patch just adds a definition for the HVC instruction.
More can be added here later, as needed.
Now that we have a real example of how to use the opcode injection
macros properly, this patch also adds a cross-reference from the
explanation in opcodes.h (since without an example, figuring out
how to use the macros is not that easy).
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds some __inst_() macros for injecting custom opcodes
in assembler (both inline and in .S files). They should make it
easier and cleaner to get things right in little-/big-
endian/ARM/Thumb-2 kernels without a lot of #ifdefs.
This pure-preprocessor approach is preferred over the alternative
method of wedging extra assembler directives into the assembler
input using top-level asm() blocks, since there is no way to
guarantee that the compiler won't reorder those with respect to
each other or with respect to non-toplevel asm() blocks, unless
-fno-toplevel-reorder is passed (which is in itself somewhat
undesirable because it defeats some potential optimisations).
Currently <asm/unified.h> _does_ silently rely on the compiler not
reordering at the top level, but it seems better to avoid adding
extra code which depends on this if the same result can be achieved
in another way.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Most of the existing macros don't work with assembler, due to the
use of type casts and C functions from <linux/swab.h>.
This patch abstracts out those operations and provides simple
explicit versions for use in assembly code.
__opcode_is_thumb32() and __opcode_is_thumb16() are also converted
to do bitmask-based testing to avoid confusion if these are used in
assembly code (the assembler typically treats all arithmetic values
as signed).
These changes avoid the need for the compiler to pre-evaluate
constant expressions used to generate opcodes. By ensuring that
the forms of these expressions can be evaluated directly by the
assembler, we can just stringify the expressions directly into the
asm during the preprocessing pass. The alternative approach
(passing the evaluated expression via an inline asm "i" constraint)
gets painful because the contents of the asm and the constraints
must be kept in sync. This makes the resulting macros awkward to
use.
Retaining the C forms of the macros allows more efficient code to
be generated when opcodes are generated programmatically at
run-time, but there is no way to embed run-time-generated opcodes
in asm() blocks.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The existing __mem_to_opcode_thumb32() is incorrect for BE32
platforms. However, these don't support Thumb-2 kernels, so this
option is not so relevant for those platforms anyway.
This operation is complicated by the lack of unaligned memory
access support prior to ARMv6.
Rather than provide a "working" macro which probably won't get
used (or worse, will get misused), this patch removes the macro for
BE32 kernels. People manipulating Thumb opcodes prior to ARMv6
should almost certainly be splitting these operations into
halfwords anyway, using __opcode_thumb32_{first,second,compose}()
and the 16-bit opcode transformations.
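For reference, the halfword helpers mentioned above look roughly like
this (simplified from asm/opcodes.h):

    #define __opcode_thumb32_first(x)   ((u16)((x) >> 16))
    #define __opcode_thumb32_second(x)  ((u16)(x))
    #define __opcode_thumb32_compose(first, second) \
        (((u32)(u16)(first) << 16) | (u32)(u16)(second))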
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This lets us build a multiplatform kernel for experimental purposes.
However, it will not be useful for any real work, because it requires
a number of useful things to be disabled for now:
* SMP support must be turned off because of conflicting symbols.
Marc Zyngier has proposed a solution by adding a new SOC
operations structure to hold indirect function pointers
for these, but that work is currently stalled
* We turn on SPARSE_IRQ unconditionally, which is not supported
on most platforms. Each of them is currently in a different
state, but most are being worked on.
* A common clock framework is in place since v3.4 but not yet
being used. Work on this is on its way.
* DEBUG_LL for early debugging is currently disabled.
* THUMB2_KERNEL does not work with allyesconfig because the
kernel gets too big
[Rob Herring]: Rebased to not be dependent on the mass mach header rename.
As a result, omap2plus, imx, mxs and ux500 are not converted. Highbank,
picoxcell, mvebu, and socfpga are converted.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Lunn <andrew@lunn.ch>
Acked-by: Jamie Iles <jamie@jamieiles.com>
Cc: Dinh Nguyen <dinguyen@altera.com>
Move picoxcell debug-macro.S over to common debug macro directory.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Jamie Iles <jamie@jamieiles.com>
Move socfpga debug-macro.S over to common debug macro directory.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Dinh Nguyen <dinguyen@altera.com>
Move mvebu debug-macro.S over to common debug macro directory.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Andrew Lunn <andrew@lunn.ch>
Move vexpress debug-macro.S over to common debug macro directory.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Move highbank debug-macro.S over to common debug macro directory.
Also, remove v7 specific movw/movt instructions so this can compile under
v6 mode.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Based on suggestion by Russell King, create a common location for debug
macros and select the included debug macro file using config option.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Most platforms don't need mach/gpio.h and it prevents multi-platform
kernel images. Add CONFIG_NEED_MACH_GPIO_H and make platforms select it
if they need gpio.h. These are platforms that define __GPIOLIB_COMPLEX
or have lots of implicit includes pulled in by mach/gpio.h.
at91 and omap have gpio clean-up pending and can drop
CONFIG_NEED_MACH_GPIO_H once that is in.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Almost every SMP platform defines pen_release to manage booting secondary
CPUs. This of course clashes with the single zImage effort.
Add the pen_release definition to the ARM SMP code, and remove all others.
This should only be used by platforms which lack any kind of CPU power
management...
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Now that all SMP platforms have been converted to use struct
smp_operations, remove the "weak" attribute from the hooks
in smp.c, and make the functions static wherever possible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
This adds a 'struct smp_operations' to abstract the CPU initialization
and hot plugging functions on SMP systems, which otherwise conflict
in a multiplatform kernel. This also helps shmobile and potentially
others that have more than one method to do these.
To allow the kernel to continue building, the platform hooks are
defined as weak symbols which are overridden by the platform code.
Once all platforms are converted, the "weak" attribute will be
removed and the functions made static.
Unlike the original version from Marc, this new version from Arnd
does not use a generalized abstraction for per-soc data structures
but only tries to solve the problem for the SMP operations. This
way, we can collapse the previous four data structures into a
single struct, which is less systematic but also easier to follow
for a casual reader.
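An abridged sketch of the structure (the field list is reconstructed
from memory; see asm/smp.h for the authoritative definition):

    struct smp_operations {
        void (*smp_init_cpus)(void);
        void (*smp_prepare_cpus)(unsigned int max_cpus);
        void (*smp_secondary_init)(unsigned int cpu);
        int  (*smp_boot_secondary)(unsigned int cpu, struct task_struct *idle);
    #ifdef CONFIG_HOTPLUG_CPU
        int  (*cpu_kill)(unsigned int cpu);
        void (*cpu_die)(unsigned int cpu);
        int  (*cpu_disable)(unsigned int cpu);
    #endif
    };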
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
The user access functions may generate a fault, resulting in invocation
of a handler that may sleep.
This patch annotates the accessors with might_fault() so that we print a
warning if they are invoked from atomic context and help lockdep keep
track of mmap_sem.
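The annotation itself is tiny; sketched below with a simplified macro
body (the internal helper name is as I recall it, and the real
accessors do more work):

    #define get_user(x, p)                       \
    ({                                           \
            might_fault();                       \
            __get_user_check((x), (p));          \
    })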
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The {get,put}_user macros don't perform range checking on the provided
__user address when !CPU_HAS_DOMAINS.
This patch reworks the out-of-line assembly accessors to check the user
address against a specified limit, returning -EFAULT if it is out of
range.
[will: changed get_user register allocation to match put_user]
[rmk: fixed building on older ARM architectures]
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
During the p2v changes, the PHYS_OFFSET #define moved into a
!__ASSEMBLY__ section. This causes a XIP build to fail with
arch/arm/kernel/head.o: In function 'stext':
arch/arm/kernel/head.S:146: undefined reference to 'PHYS_OFFSET'
Momentarily leave the #ifndef __ASSEMBLY__ section so we can
define PHYS_OFFSET for all compilation units.
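The shape of the fix, roughly (asm/memory.h; surrounding content
elided):

    #ifndef __ASSEMBLY__
    /* ... C-only helpers ... */
    #endif

    /* visible to both C and assembly, so head.S can use it for XIP */
    #define PHYS_OFFSET    PLAT_PHYS_OFFSET

    #ifndef __ASSEMBLY__
    /* ... more C-only helpers ... */
    #endif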
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 774c096bf9 (ARM: v6/v7 cache: allow cache calls to be
optimized) got dropped when the merge conflicts for moving the contents
of the files in commit 753790e713 (ARM: move cache/processor/fault
glue to separate include files) were fixed up in merge bd1274dc00
(Merge branch 'v6v7' into devel).
This puts the change back.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is no point reserving space at the bottom of the kernel stack for
per-thread crunch state, and per-thread VFP state, if these are not
supported by the kernel being built. Remove these members from the
thread union when these features are disabled.
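A sketch of the resulting layout (field names as I recall them from
asm/thread_info.h):

    struct thread_info {
        /* ... */
    #ifdef CONFIG_CRUNCH
        struct crunch_state  crunchstate;  /* only when Crunch is built in */
    #endif
    #ifdef CONFIG_VFP
        union vfp_state      vfpstate;     /* only when VFP support is built in */
    #endif
        /* ... */
    };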
Reported-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some platforms might need to increase the atomic coherent pool to make
sure that their devices will be able to allocate all their buffers from
atomic context. This function can also be used to decrease the atomic
coherent pool size if coherent allocations are not used by the given
sub-platform.
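Sketched usage from platform code (the function name is given from
memory and the hook point is hypothetical):

    static void __init example_machine_reserve(void)
    {
        /* grow the atomic coherent pool so atomic allocations succeed */
        init_dma_coherent_pool_size(SZ_1M);
    }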
Suggested-by: Josh Coombs <josh.coombs@gmail.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Data aborts taken to hyp mode do not provide a valid instruction
syndrome field in the HSR if the faulting instruction is a memory
access using a writeback addressing mode.
For hypervisors emulating MMIO accesses to virtual peripherals, taking
such an exception requires disassembling the faulting instruction in
order to determine the behaviour of the access. Since this requires
manually walking the two stages of translation, the world must be
stopped to prevent races against page aging in the guest, where the
first-stage translation is invalidated after the hypervisor has
translated to an IPA and the physical page is reused for something else.
This patch avoids taking this heavy performance penalty when running
Linux as a guest by ensuring that our I/O accessors do not make use of
writeback addressing modes.
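An illustrative (not verbatim) accessor showing the idea: binding the
memory operand with the "Q" machine constraint restricts it to a plain
[Rn] form, so the compiler can never pick a writeback addressing mode:

    static inline void example_raw_writel(u32 val, volatile void __iomem *addr)
    {
        asm volatile("str %1, %0"
                     : "=Q" (*(volatile u32 __force *)addr)
                     : "r" (val));
    }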
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit a76d7bd96d ("ARM: 7467/1: mutex: use generic xchg-based
implementation for ARMv6+") removed the barrier-less, ARM-specific
mutex implementation in favour of the generic xchg-based code.
Since then, a bug was uncovered in the xchg code when running on SMP
platforms, due to interactions between the locking paths and the
MUTEX_SPIN_ON_OWNER code. This was fixed in 0bce9c46bf ("mutex: place
lock in contended state after fastpath_lock failure"), however, the
atomic_dec-based mutex algorithm is now marginally more efficient for
ARM (~0.5% improvement in hackbench scores on dual A15).
This patch moves ARMv6+ platforms to the atomic_dec-based mutex code.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As pointed out by Arnd Bergmann, this fixes a couple of issues but will
increase code size:
The original macro user_termio_to_kernel_termios was not endian safe. It
used an unsigned short ptr to access the low bits in a 32-bit word.
Both user_termio_to_kernel_termios and kernel_termios_to_user_termio are
missing error checking on put_user/get_user and copy_to/from_user.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>