Commit Graph

219 Commits (399ec807ddc38ecccf8c06dbde04531cbdc63e11)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Kirill A. Shutemov | b1aabecd55 | mm: Export symbol ksize() | 16 years ago |
| David Rientjes | 3718909448 | slub: fix per cpu kmem_cache_cpu array memory leak | 16 years ago |
| Frederik Schwarzer | 0211a9c850 | trivial: fix an -> a typos in documentation and comments | 16 years ago |
| Rusty Russell | 174596a0b9 | cpumask: convert mm/ | 16 years ago |
| David Rientjes | 7b8f3b66d9 | slub: avoid leaking caches or refcounts on sysfs error | 16 years ago |
| OGAWA Hirofumi | 89124d706d | slub: Add might_sleep_if() to slab_alloc() | 16 years ago |
| Akinobu Mita | 773ff60e84 | SLUB: failslab support | 16 years ago |
| Rusty Russell | 29c0177e6a | cpumask: change cpumask_scnprintf, cpumask_parse_user, cpulist_parse, and cpulist_scnprintf to take pointers. | 16 years ago |
| Hugh Dickins | 9c24624727 | KSYM_SYMBOL_LEN fixes | 16 years ago |
| Nick Andrew | 9f6c708e5c | slub: Fix incorrect use of loose | 16 years ago |
| KAMEZAWA Hiroyuki | dc19f9db38 | memcg: memory hotplug fix for notifier callback | 16 years ago |
| David Rientjes | 0094de92a4 | slub: make early_kmem_cache_node_alloc void | 16 years ago |
| Cyrill Gorcunov | e9beef1815 | slub - fix get_object_page comment | 16 years ago |
| Eduard - Gabriel Munteanu | ce71e27c6f | SLUB: Replace __builtin_return_address(0) with _RET_IP_. | 16 years ago |
| Cyrill Gorcunov | 210b5c0613 | SLUB: cleanup - define macros instead of hardcoded numbers | 16 years ago |
| Alexey Dobriyan | 7b3c3a50a3 | proc: move /proc/slabinfo boilerplate to mm/slub.c, mm/slab.c | 17 years ago |
| Salman Qazi | 02b71b7012 | slub: fixed uninitialized counter in struct kmem_cache_node | 17 years ago |
| Christoph Lameter | e2cb96b7ec | slub: Disable NUMA remote node defragmentation by default | 17 years ago |
| Pekka Enberg | 5595cffc82 | SLUB: dynamic per-cache MIN_PARTIAL | 17 years ago |
| Adrian Bunk | 231367fd9b | mm: unexport ksize | 17 years ago |
| Alexey Dobriyan | 51cc50685a | SL*B: drop kmem cache argument from constructor | 17 years ago |
| Andy Whitcroft | 8a38082d21 | slub: record page flag overlays explicitly | 17 years ago |
| Pekka Enberg | 0ebd652b35 | slub: dump more data on slab corruption | 17 years ago |
| Alexey Dobriyan | 41ab8592ca | SLUB: simplify re on_each_cpu() | 17 years ago |
| Alexey Dobriyan | 88e4ccf294 | slub: current is always valid | 17 years ago |
| Christoph Lameter | 0937502af7 | slub: Add check for kfree() of non slab objects. | 17 years ago |
| Linus Torvalds | 7daf705f36 | Start using the new '%pS' infrastructure to print symbols | 17 years ago |
| Dmitry Adamushko | bdb2192851 | slub: Fix use-after-preempt of per-CPU data structure | 17 years ago |
| Christoph Lameter | cde5353599 | Christoph has moved | 17 years ago |
| Christoph Lameter | 41d54d3bf8 | slub: Do not use 192 byte sized cache if minimum alignment is 128 byte | 17 years ago |
| Jens Axboe | 15c8b6c1aa | on_each_cpu(): kill unused 'retry' parameter | 17 years ago |
| Pekka Enberg | 76994412f8 | slub: ksize() abuse checks | 17 years ago |
| Benjamin Herrenschmidt | 4ea33e2dc2 | slub: fix atomic usage in any_slab_objects() | 17 years ago |
| Christoph Lameter | f6acb63508 | slub: #ifdef simplification | 17 years ago |
| Christoph Lameter | 0121c619d0 | slub: Whitespace cleanup and use of strict_strtoul | 17 years ago |
| Roman Zippel | f8bd2258e2 | remove div_long_long_rem | 17 years ago |
| Thomas Gleixner | 3ac7fe5a4a | infrastructure to debug (dynamic) objects | 17 years ago |
| Nadia Derbey | 0c40ba4fd6 | ipc: define the slab_memory_callback priority as a constant | 17 years ago |
| Pekka Enberg | 1b27d05b6e | mm: move cache_line_size() to <linux/cache.h> | 17 years ago |
| Mel Gorman | dd1a239f6f | mm: have zonelist contains structs with both a zone pointer and zone_idx | 17 years ago |
| Mel Gorman | 54a6eb5c47 | mm: use two zonelist that are filtered by GFP mask | 17 years ago |
| Mel Gorman | 0e88460da6 | mm: introduce node_zonelist() for accessing the zonelist for a GFP mask | 17 years ago |
| Christoph Lameter | c124f5b54f | slub: pack objects denser | 17 years ago |
| Christoph Lameter | 9b2cd506e5 | slub: Calculate min_objects based on number of processors. | 17 years ago |
| Christoph Lameter | 114e9e89e6 | slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS | 17 years ago |
| Christoph Lameter | 31d33baf36 | slub: Simplify any_slab_object checks | 17 years ago |
| Christoph Lameter | 06b285dc3d | slub: Make the order configurable for each slab cache | 17 years ago |
| Christoph Lameter | 319d1e2406 | slub: Drop fallback to page allocator method | 17 years ago |
| Christoph Lameter | 65c3376aac | slub: Fallback to minimal order during slab page allocation | 17 years ago |
| Christoph Lameter | 205ab99dd1 | slub: Update statistics handling for variable order slabs | 17 years ago |