Commit e9da6e9905 ("ARM: dma-mapping: remove custom consistent dma region")
removed the last users of the field. Remove it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Fix build warning in __dma_alloc() as below:
arch/arm/mm/dma-mapping.c: In function '__dma_alloc':
arch/arm/mm/dma-mapping.c:653:29: warning: 'page' may be used uninitialized in this function [-Wuninitialized]
Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Because mov pc,<Rn> never switches instruction set when executed in
Thumb code, Thumb-2 kernels will silently execute the target code
after cpu_reset as Thumb code, even if the passed code pointer
denotes ARM (bit 0 clear).
This patch uses bx instead, ensuring the correct instruction set
for the target code.
Thumb code in the kernel is not supported prior to ARMv7, so other
CPUs are not affected.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Instead of having multiple functions belonging to outer_cache and
filling this structure on the fly, use an outer_cache_fns field inside
l2x0_of_data and just memcpy it into outer_cache depending on the
type of the l2x0 cache. For the non-DT case, the former code is kept.
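As a rough sketch of the pattern (a hedged illustration, not the literal
patch; struct contents are abbreviated and the rest of cache-l2x0.c is
elided):

  /* Each DT match entry carries a ready-made set of outer cache
   * callbacks that is copied wholesale into the global outer_cache. */
  struct l2x0_of_data {
          void (*setup)(const struct device_node *, u32 *, u32 *);
          void (*save)(void);
          struct outer_cache_fns outer_cache;   /* <asm/outercache.h> */
  };

  /* In l2x0_of_init(), after matching the compatible string: */
  memcpy(&outer_cache, &data->outer_cache, sizeof(outer_cache));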
[rmk: fixed a style issue]
Tested-and-Reviewed-by: Yehuda Yitschak <yehuday@marvell.com>
Tested-and-Reviewed-by: Lior Amsalem <alior@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
As suggested by Andrew Morton:
This is a pet peeve of mine. Any time there's a long list of items
(header file inclusions, kconfig entries, array initialisers, etc) and
someone wants to add a new item, they *always* go and stick it at the
end of the list.
Guys, don't do this. Either put the new item into a randomly-chosen
position or, probably better, alphanumerically sort the list.
let's sort all our select statements alphanumerically. This commit was
created by the following perl:
while (<>) {
	while (/\\\s*$/) {
		$_ .= <>;
	}
	undef %selects if /^\s*config\s+/;
	if (/^\s+select\s+(\w+).*/) {
		if (defined($selects{$1})) {
			if ($selects{$1} eq $_) {
				print STDERR "Warning: removing duplicated $1 entry\n";
			} else {
				print STDERR "Error: $1 differently selected\n".
					     "\tOld: $selects{$1}\n".
					     "\tNew: $_\n";
				exit 1;
			}
		}
		$selects{$1} = $_;
		next;
	}
	if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or
			  /^endif/ or /^endchoice/)) {
		foreach $k (sort (keys %selects)) {
			print "$selects{$k}";
		}
		undef %selects;
	}
	print;
}
if (%selects) {
	foreach $k (sort (keys %selects)) {
		print "$selects{$k}";
	}
}
It found two duplicates:
Warning: removing duplicated S5P_SETUP_MIPIPHY entry
Warning: removing duplicated HARDIRQS_SW_RESEND entry
and they are identical duplicates, hence the shrinkage in the diffstat
of two lines.
We have four testers reporting success with this change (Tony, Stephen,
Linus and Sekhar).
Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
One such warning was recently fixed in a761cebf "ARM: Fix build warning
in arch/arm/mm/alignment.c", but only for the thumb2 case; this fixes
the other half.
arch/arm/mm/alignment.c: In function 'do_alignment':
arch/arm/mm/alignment.c:327:15: error: 'offset.un' may be used uninitialized in this function
arch/arm/mm/alignment.c:748:21: note: 'offset.un' was declared here
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
.fault can now retry, and the retry can break the .fault state machine.
In filemap_fault, if the page is missed, ra->mmap_miss is increased. In
the second try, since the page is now in the page cache, ra->mmap_miss
is decreased. Both happen within one fault, so we can't detect random
mmap file access.
Add a new flag to indicate that .fault has already been tried once. In
the second try, skip the ra->mmap_miss decrease. The filemap_fault
state machine is OK with this.
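A minimal sketch of the skip, assuming the flag is named along the lines
of mainline's FAULT_FLAG_TRIED (the surrounding readahead code in
filemap_fault() is elided):

  /* On a retried fault the page is in the page cache by now, so don't
   * let this "hit" undo the miss accounting from the first attempt. */
  if (vmf->flags & FAULT_FLAG_TRIED)
          return;                 /* second try: skip the decrease */
  if (ra->mmap_miss > 0)
          ra->mmap_miss--;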
I only tested x86 and didn't test other archs, but the change for the
other archs looks obvious - but who knows :)
Signed-off-by: Shaohua Li <shaohua.li@fusionio.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Implement an interval tree as a replacement for the VMA prio_tree. The
algorithms are similar to lib/interval_tree.c; however that code can't be
directly reused as the interval endpoints are not explicitly stored in the
VMA. So instead, the common algorithm is moved into a template and the
details (node type, how to get interval endpoints from the node, etc) are
filled in using the C preprocessor.
Once the interval tree functions are available, using them as a
replacement to the VMA prio tree is a relatively simple, mechanical job.
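The template technique can be illustrated with a toy, self-contained
example (all names below are hypothetical, purely for illustration):

  #include <stdio.h>

  /* The "template": an algorithm written once, in terms of macros the
   * user defines to extract the interval endpoints from a node type. */
  #define DEFINE_OVERLAP_FN(name, type, START, LAST)              \
  static int name(type *a, type *b)                               \
  {                                                               \
          return START(a) <= LAST(b) && START(b) <= LAST(a);      \
  }

  /* A user fills in the details for its own node type: */
  struct range_node { unsigned long from, to; };
  #define RN_START(n) ((n)->from)
  #define RN_LAST(n)  ((n)->to)
  DEFINE_OVERLAP_FN(range_overlap, struct range_node, RN_START, RN_LAST)

  int main(void)
  {
          struct range_node a = { 10, 20 }, b = { 15, 30 };
          printf("%d\n", range_overlap(&a, &b));  /* prints 1 */
          return 0;
  }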
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
With ixp2xxx removed, there are no platforms that define arch_is_coherent,
so the last occurrences of arch_is_coherent can be removed. Any new
platform with coherent I/O should use the coherent DMA mapping functions.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Remove arch_is_coherent() from iommu dma ops and implement separate
coherent ops functions.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
arch_is_coherent is problematic as it is a global symbol. This
doesn't work for multi-platform kernels or platforms which can support
per device coherent DMA.
This adds arm_coherent_dma_ops to be used for devices which are connected
coherently (i.e. to the ACP port on Cortex-A9 or A15). The arm_dma_ops
are modified at boot when arch_is_coherent is true.
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
With many IOMMU'able devices, the console gets noisy.
Tegra30 has a few dozen IOMMU'able devices.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Skip unnecessary operations if order == 0. This also makes the code a
little bit easier to read.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
arm: Add ARM ERRATA 775420 workaround
Workaround for the 775420 Cortex-A9 (r2p2, r2p6, r2p8, r2p10, r3p0) erratum.
In case a data cache maintenance operation aborts with an MMU exception, it
might cause the processor to deadlock. This workaround puts a DSB before
executing an ISB if an abort may occur on cache maintenance.
Based on work by Kouei Abe and feedback from Catalin Marinas.
Signed-off-by: Kouei Abe <kouei.abe.cp@rms.renesas.com>
[ horms@verge.net.au: Changed to implementation
suggested by catalin.marinas@arm.com ]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some architectures like xscale and feroceon have cache API variants that
map cache flushing functions as aliases to the base architecture.
This patch adds the required aliases to complete the implementation of
cache flushing LoUIS API.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allows /proc/vmallocinfo to show the physical address for
ioremap mappings.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The ARMv7 processor setup function __v7_setup() cleans and invalidates the
CPU cache before enabling the MMU to start the CPU with a clean CPU local cache.
But on ARMv7 architectures like Cortex-[A15/A8], this code will end
up flushing the L2 caches (up to the Level of Coherency), which is undesirable
and expensive. The setup functions are used in the CPU hotplug scenario too,
and hence flushing all cache levels should be avoided.
This patch replaces the cache flushing call with the newly introduced
v7 dcache LoUIS API, where only cache levels up to LoUIS are cleaned and
invalidated when a processor executes __v7_setup, which is the expected
behavior.
For processors like A9 and A5, where the L2 cache is an outer one, the
behavior should be unchanged.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
This patch renames jump labels in v7_flush_dcache_all in order to define
a specific flush cache levels entry point.
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
ARM v7 architecture introduced the concept of cache levels and related
control registers. New processors like A7 and A15 embed an L2 unified cache
controller that becomes part of the cache level hierarchy. Some operations in
the kernel like cpu_suspend and __cpu_disable do not require a flush of the
entire cache hierarchy to DRAM but just the cache levels belonging to the
Level of Unification Inner Shareable (LoUIS), which in most ARM v7 systems
corresponds to L1.
The current cache flushing API used in cpu_suspend and __cpu_disable,
flush_cache_all(), ends up flushing the whole cache hierarchy since for
v7 it cleans and invalidates all cache levels up to Level of Coherency
(LoC) which cripples system performance when used in hot paths like hotplug
and cpuidle.
Therefore a new kernel cache maintenance API must be added to cope with
the latest ARM system requirements.
This patch adds flush_cache_louis() to the ARM kernel cache maintenance API.
This function cleans and invalidates all data cache levels up to the
Level of Unification Inner Shareable (LoUIS) and invalidates the instruction
cache for processors that support it (> v7).
This patch also creates an alias of the cache LoUIS function to flush_kern_all
for all processor versions prior to v7, so that the current cache flushing
behaviour is unchanged for those processors.
v7 cache maintenance code implements a cache LoUIS function that cleans and
invalidates the D-cache up to LoUIS and invalidates the I-cache, according
to the new API.
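A hedged usage sketch of the new entry point (the hotplug context is
abbreviated):

  #include <asm/cacheflush.h>

  /* A hotplug-style path that only needs the local CPU's caches pushed
   * to the Level of Unification Inner Shareable, not all the way to
   * DRAM as flush_cache_all() would do. */
  static void cpu_quiesce_caches(void)
  {
          flush_cache_louis();
  }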
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Shawn Guo <shawn.guo@linaro.org>
When either __alloc_from_contiguous or __alloc_remap_buffer fails
to provide a valid pointer, the allocated memory is freed up and an error
is returned. 'pages', however, was not freed before returning the error.
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
It is now possible to enable the virtualization extension support.
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Fix this harmless build warning:
arch/arm/mm/alignment.c: In function 'do_alignment':
arch/arm/mm/alignment.c:749:21: warning: 'offset.un' may be used uninitialized in this function
This is caused by the compiler not being able to properly analyse the
code to prove that offset.un is assigned in every case. The case it
struggles with is where we assign the handler from the Thumb parser -
do_alignment_t32_to_handler(). As this starts by zeroing this variable
via a pointer, move it into the calling function. This fixes the
warning.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
There is a bug: if the l2x0 controller has already been enabled when
l2x0_init is called, the aux ctrl register will not be saved in
l2x0_saved_regs. Therefore we would use an uninitialized
l2x0_saved_regs.aux_ctrl for resuming later.
In this patch, the aux ctrl value is read and saved after it is
initialized, so we have the real value being set for resuming.
Link: http://lkml.kernel.org/r/1336046857-24133-1-git-send-email-ylmao@marvell.com
Signed-off-by: Yilu Mao <ylmao@marvell.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This prepares for *of_device_id.data becoming const. Without this change the
following warning would occur:
following warning would occur:
arch/arm/mm/cache-l2x0.c: In function 'l2x0_of_init':
arch/arm/mm/cache-l2x0.c:573:7: warning: assignment discards 'const' qualifier from pointer target type [enabled by default]
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
The __free_from_pool() function was changed in
e9da6e9905. Unfortunately, the test that
checks whether the provided (start, size) is within the DMA pool has
been improperly modified. It used to be:
  if (start < coherent_head.vm_start || end > coherent_head.vm_end)
where coherent_head.vm_end was non-inclusive (i.e., it did not include
the first byte after the pool). The test has been changed to:
  if (start < pool->vaddr || start > pool->vaddr + pool->size)
So now pool->vaddr + pool->size is inclusive (i.e., it includes the
first byte after the pool), so the test should be >= instead of >.
This bug causes the following message when freeing the *first* DMA
coherent buffer that has been allocated, because its virtual address
is exactly equal to pool->vaddr + pool->size:
WARNING: at /home/thomas/projets/linux-2.6/arch/arm/mm/dma-mapping.c:463 __free_from_pool+0xa4/0xc0()
freeing wrong coherent size from pool
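In other words, a hedged sketch of the corrected check:

  /* pool->vaddr + pool->size is the first byte *after* the pool, so an
   * address equal to it lies outside the pool. */
  if (start < pool->vaddr || start >= pool->vaddr + pool->size)
          return 0;       /* (start, size) is not from the atomic pool */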
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Lior Amsalem <alior@marvell.com>
Cc: Maen Suleiman <maen@marvell.com>
Cc: Tawfik Bayouk <tawfik@marvell.com>
Cc: Shadi Ammouri <shadi@marvell.com>
Cc: Eran Ben-Avi <benavi@marvell.com>
Cc: Yehuda Yitschak <yehuday@marvell.com>
Cc: Nadav Haklai <nadavh@marvell.com>
[m.szyprowski: rebased onto v3.6-rc5 and resolved conflict]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Make use of the same atomic pool as DMA does, and skip the kernel page
mapping, which can involve sleepable operations when allocating a kernel
page table.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Support atomic allocation in __iommu_get_pages().
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
[moved __atomic_get_pages() under #ifdef CONFIG_ARM_DMA_USE_IOMMU
to avoid an unused function warning for the no-IOMMU case]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Check whether the given range ("start", "size") is included in "atomic_pool".
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
struct page **pages is necessary to align with the non-atomic path in
__iommu_get_pages(). atomic_pool() has the initialized **pages instead
of just *page.
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Print a loud warning when the system runs out of memory from the atomic
DMA coherent pool, to let users notice the potential problem.
Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Some platforms might need to increase the atomic coherent pool to make
sure that their devices will be able to allocate all their buffers from
atomic context. This function can also be used to decrease the atomic
coherent pool size if coherent allocations are not used for the given
sub-platform.
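A hedged usage sketch; in mainline this helper landed as
init_dma_coherent_pool_size(), called from early platform code before
the pool itself is allocated (the 4 MiB value is illustrative):

  #include <linux/sizes.h>
  #include <asm/dma-mapping.h>

  /* A platform's early init hook enlarging the atomic pool. */
  static void __init my_board_reserve(void)
  {
          init_dma_coherent_pool_size(SZ_4M);
  }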
Suggested-by: Josh Coombs <josh.coombs@gmail.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
With !HIGHMEM, sanity_check_meminfo checks for banks that completely or
partially overlap the vmalloc region. The test for partial overlap checks
__va(bank->start + bank->size) > vmalloc_min. This is not appropriate if
there is a non-linear translation between virtual and physical addresses,
as bank->start + bank->size is actually in the bank following the one being
interrogated.
In most cases, even when using SPARSEMEM, this is not problematic as the
subsequent bank will start at a higher va than the one in question. However
if the physical to virtual address conversion is not monotonic increasing,
the incorrect test could result in a bank not being truncated when it
should be.
This patch ensures we perform the va-pa conversion on memory from the
bank we are interested in, not the following one.
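One plausible shape of the corrected test (a hedged sketch; the
surrounding loop in sanity_check_meminfo() is elided):

  /* Translate the *last byte of this bank* rather than
   * bank->start + bank->size, which is the first byte of the next bank
   * and may map to an unrelated virtual address. */
  if (__va(bank->start + bank->size - 1) >= vmalloc_min)
          /* ... truncate the bank below the vmalloc region ... */;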
Reported-by: ??? (Steve) <zhanzhenbo@gmail.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The bfi instruction is not available on ARMv6, so instead use an and/orr
sequence in the contextidr_notifier. This gets rid of the assembler
error:
Assembler messages:
Error: selected processor does not support ARM mode `bfi r3,r2,#0,#8'
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Murali Nalajala reports a regression that ioremapping address zero
results in an oops dump:
Unable to handle kernel paging request at virtual address fa200000
pgd = d4f80000
[fa200000] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0 Tainted: G W (3.4.0-g3b5f728-00009-g638207a #13)
PC is at msm_pm_config_rst_vector_before_pc+0x8/0x30
LR is at msm_pm_boot_config_before_pc+0x18/0x20
pc : [<c0078f84>] lr : [<c007903c>] psr: a0000093
sp : c0837ef0 ip : cfe00000 fp : 0000000d
r10: da7efc17 r9 : 225c4278 r8 : 00000006
r7 : 0003c000 r6 : c085c824 r5 : 00000001 r4 : fa101000
r3 : fa200000 r2 : c095080c r1 : 002250fc r0 : 00000000
Flags: NzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment kernel
Control: 10c5387d Table: 25180059 DAC: 00000015
[<c0078f84>] (msm_pm_config_rst_vector_before_pc+0x8/0x30) from [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20)
[<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) from [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04)
[<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) from [<c007b17c>] (arch_idle+0x294/0x3e0)
[<c007b17c>] (arch_idle+0x294/0x3e0) from [<c000eed8>] (default_idle+0x18/0x2c)
[<c000eed8>] (default_idle+0x18/0x2c) from [<c000f254>] (cpu_idle+0x90/0xe4)
[<c000f254>] (cpu_idle+0x90/0xe4) from [<c057231c>] (rest_init+0x88/0xa0)
[<c057231c>] (rest_init+0x88/0xa0) from [<c07ff890>] (start_kernel+0x3a8/0x40c)
Code: c0704256 e12fff1e e59f2020 e5923000 (e5930000)
This is caused by the 'reserved' entries which we insert (see
19b52abe3c - ARM: 7438/1: fill possible PMD empty section gaps)
which get matched for physical address zero.
Resolve this by marking these reserved entries with a different flag.
Cc: <stable@vger.kernel.org>
Tested-by: Murali Nalajala <mnalajal@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The extra features that may be used by SoCs are prefetch, burst8, and
write buffer coalesce.
Signed-off-by: Chao Xie <xiechao.mail@gmail.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
Initialize the variable "mode" to NULL to ensure that the later NULL check
takes effect.
Signed-off-by: Chao Xie <xiechao.mail@gmail.com>
Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
Allow arm_memblock_steal() to remove memory from any RAM region,
including highmem areas. This allows memory to be stolen from the
very top of declared memory, including highmem areas, rather than
our precious lowmem.
Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Commit 5a783cbc48 ("ARM: 7478/1: errata: extend workaround for erratum
#720789") added workarounds for erratum #720789 to the range TLB
invalidation functions with the observation that the erratum only
affects SMP platforms. However, when running an SMP_ON_UP kernel on a
uniprocessor platform we must take care to preserve the ASID as the
workaround is not required.
This patch ensures that we don't set the ASID to 0 when flushing the TLB
on such a system, preserving the original behaviour with the workaround
disabled.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Swap entries are encoded in ptes such that !pte_present(pte) and
pte_file(pte). The remaining bits of the descriptor are used to identify
the swapfile and the offset within it for the swap entry.
When writing such a pte for a user virtual address, set_pte_at
unconditionally sets the nG bit, which (in the case of LPAE) will
corrupt the swapfile offset and lead to a BUG:
[ 140.494067] swap_free: Unused swap offset entry 000763b4
[ 140.509989] BUG: Bad page map in process rs:main Q:Reg pte:0ec76800 pmd:8f92e003
This patch fixes the problem by only setting the nG bit for user
mappings that are actually present.
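A hedged sketch of the resulting set_pte_at() logic (the icache/dcache
sync details are elided):

  /* Only user mappings that are actually present get the not-global
   * bit; swap and file ptes keep all of their descriptor bits. */
  static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pteval)
  {
          unsigned long ext = 0;

          if (addr < TASK_SIZE && pte_present_user(pteval))
                  ext |= PTE_EXT_NG;

          set_pte_ext(ptep, pteval, ext);
  }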
Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The alignment mask is calculated incorrectly. Fixing the calculation
makes strange hangs/lockups disappear during boot with the Amstrad E3
and the 3.6-rc1 kernel.
Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Fix dma_contiguous_remap() so that it continues through all the
regions, even after encountering one that is outside lowmem.
Without this change, if you have two CMA regions, the first outside
lowmem and the seocnd inside lowmem, only the second one will get
set up in the MMU. Data written to that region then doesn't get
automatically flushed from the cache into memory.
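A hedged sketch of the loop after the fix (map-descriptor setup elided):

  /* A region above lowmem is skipped with continue, so the loop still
   * reaches and maps any later in-lowmem regions. */
  for (i = 0; i < dma_mmu_remap_num; i++) {
          phys_addr_t start = dma_mmu_remap[i].base;
          phys_addr_t end = start + dma_mmu_remap[i].size;

          if (end > arm_lowmem_limit)
                  end = arm_lowmem_limit;
          if (start >= end)
                  continue;       /* previously stopped the whole loop */

          /* ... set up the section mappings for [start, end) ... */
  }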
Signed-off-by: Chris Brand <cbrand@broadcom.com>
[extended patch subject with 'fix' word]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Commit cdf357f1 ("ARM: 6299/1: errata: TLBIASIDIS and TLBIMVAIS
operations can broadcast a faulty ASID") replaced by-ASID TLB flushing
operations with all-ASID variants to workaround A9 erratum #720789.
This patch extends the workaround to include the tlb_range operations,
which were overlooked by the original patch.
Cc: <stable@vger.kernel.org>
Tested-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds support for the DMA_ATTR_SKIP_CPU_SYNC attribute for the
dma_(un)map_(single,page,sg) function family. It lets DMA-mapping clients
create a mapping for a buffer for the given device without performing
CPU cache synchronization. CPU cache synchronization can be skipped for
buffers which are known to already be in the 'device' domain (CPU
caches have been already synchronized, or there are only coherent mappings
for the buffer). For advanced users only; please use it with care.
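A hedged usage sketch with the dma_attrs API of that era (dev, buf and
size assumed given):

  #include <linux/dma-mapping.h>

  /* Map a buffer that is already in the 'device' domain without any
   * CPU cache maintenance. */
  static dma_addr_t map_presynced(struct device *dev, void *buf,
                                  size_t size)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_SKIP_CPU_SYNC, &attrs);
          return dma_map_single_attrs(dev, buf, size, DMA_TO_DEVICE,
                                      &attrs);
  }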
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch adds support for the dma_get_sgtable() function, which is required
to let drivers share buffers allocated by the DMA-mapping subsystem.
The generic implementation based on virt_to_page() is not suitable for the ARM
dma-mapping subsystem.
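A hedged usage sketch (the exporting driver already holds cpu_addr and
dma_addr from dma_alloc_*):

  #include <linux/dma-mapping.h>

  /* Turn a DMA-mapping allocation into an sg_table that another
   * driver can then map for its own device. */
  static int export_buffer(struct device *dev, struct sg_table *sgt,
                           void *cpu_addr, dma_addr_t dma_addr,
                           size_t size)
  {
          return dma_get_sgtable(dev, sgt, cpu_addr, dma_addr, size);
  }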
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch adds support for the DMA_ATTR_NO_KERNEL_MAPPING attribute for
IOMMU allocations, which lets drivers save precious kernel virtual
address space for large buffers that are intended to be accessed only
from userspace.
This patch is heavily based on initial work kindly provided by Abhinav
Kochhar <abhinav.k@samsung.com>.
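A hedged usage sketch (names illustrative):

  #include <linux/dma-mapping.h>

  /* Allocate a large userspace-only buffer; the returned cookie is
   * only meaningful to dma_mmap_attrs()/dma_free_attrs(), not for
   * direct CPU access. */
  static void *alloc_nomap(struct device *dev, size_t size,
                           dma_addr_t *handle)
  {
          DEFINE_DMA_ATTRS(attrs);

          dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
          return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
  }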
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
This patch fixes an incorrect check in the error path. When the allocation of
the first page fails, a kernel oops occurs due to accessing element -1 of
the pages array.
Reported-by: Sylwester Nawrocki <s.nawrocki@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>