My latency breaking in copy_pte_range didn't work as intended: instead of
checking at regularish intervals, after the first interval it checked every
time around the loop, too impatient to be preempted. Fix that.
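The shape of the bug, as a minimal sketch rather than the actual mm/memory.c hunk: the interval counter was never reset once the first check fired, so every later iteration took the "check" branch. Names and constants below are illustrative.

	/* sketch of the fixed loop shape */
	static void copy_range_sketch(unsigned long addr, unsigned long end)
	{
		int progress = 0;

		for (; addr < end; addr += PAGE_SIZE) {
			if (progress >= 32) {
				progress = 0;	/* the fix: start a new interval */
				if (need_resched())
					cond_resched();	/* give preemption a chance */
			}
			/* ... copy one pte at addr ... */
			progress += 8;
		}
	}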
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds some stack dumps if the slab logic is processing slab
blocks from the wrong node. This is necessary in order to detect
situations like the one encountered by Petr.
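A sketch of the kind of check being added (illustrative, not the exact slab.c hunk):

	/* complain with a backtrace if an object's slab sits on the wrong node */
	if (unlikely(slabp->nodeid != nodeid)) {
		printk(KERN_ERR "slab: object on node %d, expected node %d\n",
		       slabp->nodeid, nodeid);
		dump_stack();
	}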
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Martin Hicks' page cache reclaim patch added the 'may_swap' flag to the
scan_control struct, and modified shrink_list() not to add anon pages to
the swap cache if may_swap is not asserted.
Ref: http://marc.theaimsgroup.com/?l=linux-mm&m=111461480725322&w=4
However, further down, if the page is mapped, shrink_list() calls
try_to_unmap(), which will call try_to_unmap_one() via try_to_unmap_anon().
try_to_unmap_one() will BUG_ON() an anon page that is NOT in the swap
cache. Martin says he never encountered this path in his testing, but
agrees that it might happen.
This patch modifies shrink_list() to skip anon pages that are not already
in the swap cache when !may_swap, rather than just not adding them to the
cache.
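In shrink_list() terms, the change is roughly the following (a sketch, not the exact hunk):

	/* with !may_swap, leave anon pages that are not yet in the swap
	 * cache alone entirely, so try_to_unmap_one() never sees an anon
	 * page outside the swap cache */
	if (PageAnon(page) && !PageSwapCache(page)) {
		if (!sc->may_swap)
			goto keep_locked;
		if (!add_to_swap(page))
			goto activate_locked;
	}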
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This is not actually a problem, but sync_page_range() is used as an
exported function by filesystems.
The msync_xxx naming is more readable, at least to me.
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Most of them can never be triggered and were only for development.
Signed-off-by: "Andi Kleen" <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The NUMA policy code predated nodemask_t so it used open coded bitmaps.
Convert everything to nodemask_t. Big patch, but it shouldn't have any
actual behaviour changes (except that I removed one unnecessary check
against node_online_map and one unnecessary BUG_ON).
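The flavour of the conversion, as a sketch:

	/* before: open-coded bitmap */
	DECLARE_BITMAP(nodes, MAX_NUMNODES);
	bitmap_zero(nodes, MAX_NUMNODES);
	set_bit(nid, nodes);

	/* after: nodemask_t */
	nodemask_t nodes = NODE_MASK_NONE;
	node_set(nid, nodes);
	if (node_isset(nid, nodes))
		/* ... */;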
Signed-off-by: "Andi Kleen" <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Set the low water mark for hot pages in pcp to zero.
(akpm: for the life of me I cannot remember why we created pcp->low. Neither
can Martin and the changelog is silent. Maybe it was just a brainfart, but I
have this feeling that there was a reason. If not, we should remove the
fields completely. We'll see.)
Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Cc: <linux-mm@kvack.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Increase the page allocator's per-cpu magazines from 1/4MB to 1/2MB.
Across 100+ runs of a workload, the difference in the mean is about 2%,
and the best results for both are almost the same. However, the max
variation in results with 1/2MB is only 2.2%, whereas with 1/4MB it is 12%.
Signed-off-by: Rohit Seth <rohit.seth@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
It turns out that the original swap token implementation, by Song Jiang, only
enforced the swap token while the task holding the token is handling a page
fault. This patch approximates that, without adding an additional flag to the
mm_struct, by checking whether the mm->mmap_sem is held for reading, like the
page fault code does.
This patch has the effect of automatically, and gradually, disabling the
enforcement of the swap token when there is little or no paging going on, and
"turning up" the intensity of the swap token code the more the task holding
the token is thrashing.
Thanks to Song Jiang for pointing out this aspect of the token based thrashing
control concept.
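Roughly how the check slots into the reclaim path (a sketch, not the exact hunk, using the sem_is_read_locked() helper introduced alongside this patch):

	/* pretend the page was referenced if its mm holds the swap token
	 * and appears to be in the middle of a page fault, i.e. holds its
	 * mmap_sem for reading */
	if (mm != current->mm && has_swap_token(mm) &&
	    sem_is_read_locked(&mm->mmap_sem))
		referenced++;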
The new code shows a slight degradation over the old swap token code, but
still a big win over running without the swap token.
2.6.12+ swap token disabled
$ for i in `seq 10` ; do /usr/bin/time ./qsbench -n 30000000 -p 3 ; done
101.74user 23.13system 8:26.91elapsed 24%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (38597major+430315minor)pagefaults 0swaps
101.98user 24.91system 8:03.06elapsed 26%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (33939major+430457minor)pagefaults 0swaps
101.93user 22.12system 7:34.90elapsed 27%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (33166major+421267minor)pagefaults 0swaps
101.82user 22.38system 8:31.40elapsed 24%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (39338major+433262minor)pagefaults 0swaps
2.6.12+ swap token enabled, timeout 300 seconds
$ for i in `seq 4` ; do /usr/bin/time ./qsbench -n 30000000 -p 3 ; done
102.58user 16.08system 3:41.44elapsed 53%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (19707major+285786minor)pagefaults 0swaps
102.07user 19.56system 4:00.64elapsed 50%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (19012major+299259minor)pagefaults 0swaps
102.64user 18.25system 4:07.31elapsed 48%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (21990major+304831minor)pagefaults 0swaps
101.39user 19.41system 5:15.81elapsed 38%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (24850major+323321minor)pagefaults 0swaps
2.6.12+ with new swap token code, timeout 300 seconds
$ for i in `seq 4` ; do /usr/bin/time ./qsbench -n 30000000 -p 3 ; done
101.87user 24.66system 5:53.20elapsed 35%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (26848major+363497minor)pagefaults 0swaps
102.83user 19.95system 4:17.25elapsed 47%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (19946major+305722minor)pagefaults 0swaps
102.09user 19.46system 5:12.57elapsed 38%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (25461major+334994minor)pagefaults 0swaps
101.67user 20.61system 4:52.97elapsed 41%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (22190major+329508minor)pagefaults 0swaps
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add sem_is_read/write_locked functions to the read/write semaphores, along the
same lines as the *_is_locked spinlock functions. The swap token tuning patch
uses sem_is_read_locked; sem_is_write_locked is added for completeness.
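For the spinlock-based generic rwsem this is a one-liner each; a sketch assuming the rwsem-spinlock ->activity convention (positive: readers, negative: writer). The atomic count-based rwsem variants need the equivalent test against their count field.

	static inline int sem_is_read_locked(struct rw_semaphore *sem)
	{
		return sem->activity > 0;	/* one or more readers */
	}

	static inline int sem_is_write_locked(struct rw_semaphore *sem)
	{
		return sem->activity < 0;	/* a writer holds it */
	}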
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
barrier.h uses barrier() in the non-SMP case, but doesn't include compiler.h.
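The fix is just the missing include; a sketch of the intended shape, not any particular architecture's barrier.h:

	#include <linux/compiler.h>	/* for barrier() */

	#ifndef CONFIG_SMP
	#define smp_mb()	barrier()
	#define smp_rmb()	barrier()
	#define smp_wmb()	barrier()
	#endif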
Cc: Al Viro <viro@ftp.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This patch adds

	vmalloc_node(size, node)	-> allocate memory on the specified node
	get_vm_area_node(size, flags, node)

and the other functions they depend on.
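Typical use, as a sketch: keep a per-node buffer on its home node.

	void *buf = vmalloc_node(16 * PAGE_SIZE, node);
	if (!buf)
		return -ENOMEM;
	/* ... touch buf mostly from CPUs on 'node' ... */
	vfree(buf);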
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Patch from Nicolas Pitre
Since vmlinux.lds.S is preprocessed, we can use the defines already
present in asm/memory.h (allowed by patch #3060) for the XIP kernel link
address instead of relying on a duplicated Makefile hardcoded value, and
also get rid of its dependency on awk to handle it at the same time.
While at it, let's clean up the XIP stuff even further and make things
clearer in head.S, with a nice code reduction.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Patch from Nicolas Pitre
This patch allows for assorted types of cleanups by letting assembly code
use the same set of defines for constant values and avoid duplicated
definitions that might not always be in sync, or that might simply be
confusing due to the different names for the same thing.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Some boards declare prom_free_prom_memory as a void function but the
caller free_initmem() expects a return value.
Fix those up and return 0 instead, just like everyone else does.
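I.e. something along these lines on the affected boards (a sketch; the callers treat the return value as a count of freed bytes):

	/* nothing to free on this board; report zero bytes freed */
	unsigned long prom_free_prom_memory(void)
	{
		return 0UL;
	}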
Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Prefetching may be fatal on some systems if we're prefetching beyond the
end of memory. It's also a seriously bad idea on non-dma-coherent
systems.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Limit the number of cpu type options in the cpu menu to just those
types that are actually available for the selected platform.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
CFE 1.2.5 and earlier fail to turn on the ExpMemEn bit in the
PCIFeatureControl register, which means that DMA does not work
beyond physical address 01_0000_0000, ergo to DRAM beyond 1GB.
With ExpMemEn turned on, 01_0000_0000-0f_ffff_ffff is mapped,
so DMA works for up to 61 GB of DRAM.
This will be fixed in CFE 1.2.6 (yet to be released).
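A kernel-side workaround of this kind would amount to a read-modify-write at PCI init; a sketch, where the register and bit names are placeholders rather than the real sibyte definitions:

	/* A_PCI_FEATURE_CTRL and M_PCI_EXP_MEM_EN are placeholder names */
	u64 ctrl = __raw_readq(IOADDR(A_PCI_FEATURE_CTRL));
	if (!(ctrl & M_PCI_EXP_MEM_EN))
		__raw_writeq(ctrl | M_PCI_EXP_MEM_EN,
			     IOADDR(A_PCI_FEATURE_CTRL));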
Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
PCI support code for PLX 7250 PCI-X tunnel on BCM91480B BigSur board.
Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
- Kconfig and Makefile changes
- arch/mips/sibyte/bcm1480/
- changes to sibyte common code to support 1480
Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Update sibyte headers to match Broadcom internal copies:
- comment cleanup and updates
- fix LittleSur part number to match the board silkscreen
Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add header files for the BCM1480/1280/1455/1255 family of chips, and
update the sb1250 headers which are shared with the BCM1480 family.
Signed-Off-By: Andy Isaacson <adi@broadcom.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cacheflush(0, 0, 0) was crashing the system. This is because
flush_icache_range(start, end) tries to flush the whole address space
(0 - ~0UL) if both start and end are zero.
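The fix amounts to bailing out early on a zero-length range; a sketch of the guard in the cacheflush syscall path:

	asmlinkage int sys_cacheflush(unsigned long addr,
				      unsigned long bytes, unsigned int cache)
	{
		if (bytes == 0)
			return 0;	/* zero length: no-op, not "flush all" */
		if (!access_ok(VERIFY_WRITE, (void __user *)addr, bytes))
			return -EFAULT;
		flush_icache_range(addr, addr + bytes);
		return 0;
	}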
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>