Commit Graph

37 Commits (fe962e90cb17a8426e144dee970e77ed789d98ee)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Christoph Lameter | cde5353599 | Christoph has moved | 17 years ago |
| Christoph Lameter | 41d54d3bf8 | slub: Do not use 192 byte sized cache if minimum alignment is 128 byte | 17 years ago |
| Christoph Lameter | 65c3376aac | slub: Fallback to minimal order during slab page allocation | 17 years ago |
| Christoph Lameter | 205ab99dd1 | slub: Update statistics handling for variable order slabs | 17 years ago |
| Christoph Lameter | 834f3d1192 | slub: Add kmem_cache_order_objects struct | 17 years ago |
| Christoph Lameter | 0f389ec630 | slub: No need for per node slab counters if !SLUB_DEBUG | 17 years ago |
| Christoph Lameter | 6446faa2ff | slub: Fix up comments | 17 years ago |
| Christoph Lameter | 331dc558fa | slub: Support 4k kmallocs again to compensate for page allocator slowness | 17 years ago |
| Christoph Lameter | b7a49f0d4c | slub: Determine gfpflags once and not every time a slab is allocated | 17 years ago |
| Pekka Enberg | eada35efcb | slub: kmalloc page allocator pass-through cleanup | 17 years ago |
| Christoph Lameter | 8ff12cfc00 | SLUB: Support for performance statistics | 17 years ago |
| Christoph Lameter | da89b79ed0 | Explain kmem_cache_cpu fields | 17 years ago |
| Christoph Lameter | 9824601ead | SLUB: rename defrag to remote_node_defrag_ratio | 17 years ago |
| Linus Torvalds | 158a962422 | Unify /proc/slabinfo configuration | 17 years ago |
| Pekka J Enberg | 57ed3eda97 | slub: provide /proc/slabinfo | 17 years ago |
| Christoph Lameter | 4ba9b9d0ba | Slab API: remove useless ctor parameter and reorder parameters | 18 years ago |
| Christoph Lameter | 42a9fdbb12 | SLUB: Optimize cacheline use for zeroing | 18 years ago |
| Christoph Lameter | 4c93c355d5 | SLUB: Place kmem_cache_cpu structures in a NUMA aware way | 18 years ago |
| Christoph Lameter | b3fba8da65 | SLUB: Move page->offset to kmem_cache_cpu->offset | 18 years ago |
| Christoph Lameter | dfb4f09609 | SLUB: Avoid page struct cacheline bouncing due to remote frees to cpu slab | 18 years ago |
| Christoph Lameter | aadb4bc4a1 | SLUB: direct pass through of page size or higher kmalloc requests | 18 years ago |
| Christoph Lameter | aa137f9d29 | SLUB: Force inlining for functions in slub_def.h | 18 years ago |
| Al Viro | d046943cba | fix gfp_t annotations for slub | 18 years ago |
| Christoph Lameter | 81cda66261 | Slab allocators: Cleanup zeroing allocations | 18 years ago |
| Christoph Lameter | 0c71001320 | SLUB: add some more inlines and #ifdef CONFIG_SLUB_DEBUG | 18 years ago |
| Christoph Lameter | 6cb8f91320 | Slab allocators: consistent ZERO_SIZE_PTR support and NULL result semantics | 18 years ago |
| Paul Mundt | 6193a2ff18 | slob: initial NUMA support | 18 years ago |
| Christoph Lameter | 4b356be019 | SLUB: minimum alignment fixes | 18 years ago |
| Christoph Lameter | 272c1d21d6 | SLUB: return ZERO_SIZE_PTR for kmalloc(0) | 18 years ago |
| Christoph Lameter | 0aa817f078 | Slab allocators: define common size limitations | 18 years ago |
| Andrew Morton | ade3aff25f | slub: fix handling of oversized slabs | 18 years ago |
| Christoph Lameter | c59def9f22 | Slab allocators: Drop support for destructors | 18 years ago |
| Christoph Lameter | 1abd727ed7 | SLUB: It is legit to allocate a slab of the maximum permitted size | 18 years ago |
| Christoph Lameter | cfbf07f2a8 | SLUB: CONFIG_LARGE_ALLOCS must consider MAX_ORDER limit | 18 years ago |
| Christoph Lameter | 643b113849 | slub: enable tracking of full slabs | 18 years ago |
| Christoph Lameter | 614410d589 | SLUB: allocate smallest object size if the user asks for 0 bytes | 18 years ago |
| Christoph Lameter | 81819f0fc8 | SLUB core | 18 years ago |