Added a new macro snd_pcm_group_for_each_entry() just for code cleanup.
Old macros, snd_pcm_group_for_each() and snd_pcm_group_substream_entry(),
are removed.
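For reference, iteration over a PCM group with the new macro looks roughly
like this (a minimal sketch; the surrounding function is hypothetical):

  #include <sound/pcm.h>

  /* Minimal sketch only -- walk_group() is a hypothetical caller that
   * already holds a valid substream; the macro visits every substream
   * linked into the same group. */
  static void walk_group(struct snd_pcm_substream *substream)
  {
      struct snd_pcm_substream *s;

      snd_pcm_group_for_each_entry(s, substream) {
          /* operate on each substream 's' in the group */
      }
  }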
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jaroslav Kysela <perex@suse.cz>
It has some hackish code, and its odd DMA results in the need to support
old features in kernel code.
Signed-off-by: Franck Bui-Huu <fbuihuu@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Move FPU hazard handling to hazards.h and provide proper support for
MIPSR2 processors
Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Recursive calls to generic_make_request can use up a lot of stack space, and
we would rather they didn't.
As generic_make_request is a void function, and as it is generally not
expected that it will have any effect immediately, it is safe to delay any
call to generic_make_request until there is sufficient stack space
available.
As ->bi_next is reserved for the driver to use, it can have no valid value
when generic_make_request is called, and as __make_request implicitly
assumes it will be NULL (ELEVATOR_BACK_MERGE fork of switch) we can be
certain that all callers set it to NULL. We can therefore safely use
bi_next to link pending requests together, providing we clear it before
making the real call.
So, we choose to allow each thread to only be active in one
generic_make_request at a time. If a subsequent (recursive) call is made,
the bio is linked into a per-thread list, and is handled when the active
call completes.
As the list of pending bios is per-thread, there are no locking issues to
worry about.
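The resulting flow looks roughly like the sketch below (a sketch of the
approach described above, not necessarily the final code; bio_list and
bio_tail are the per-thread list and its tail pointer):

  void generic_make_request(struct bio *bio)
  {
      if (current->bio_tail) {
          /* a make_request_fn is already active in this thread:
           * queue the bio on the per-thread list via bi_next */
          *(current->bio_tail) = bio;
          bio->bi_next = NULL;
          current->bio_tail = &bio->bi_next;
          return;
      }
      BUG_ON(bio->bi_next);
      do {
          /* detach the next bio and clear bi_next before making
           * the real call */
          current->bio_list = bio->bi_next;
          if (bio->bi_next == NULL)
              current->bio_tail = &current->bio_list;
          else
              bio->bi_next = NULL;
          __generic_make_request(bio);    /* the real work */
          bio = current->bio_list;
      } while (bio);
      current->bio_tail = NULL;
  }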
I say above that it is "safe to delay any call...". There are, however,
some behaviours of a make_request_fn which would make it unsafe. These
include any behaviour that assumes anything will have changed after a
recursive call to generic_make_request.
These could include:
- waiting for that call to finish and call its bi_end_io function.
md used to sometimes do this (marking the superblock dirty before
completing a write) but doesn't any more
- inspecting the bio for fields that generic_make_request might
change, such as bi_sector or bi_bdev. It is hard to see a good
reason for this, and I don't think anyone actually does it.
- inspecting the queue to see if, e.g., it is 'full' yet. Again, I
think this is very unlikely to be useful, or to be done.
Signed-off-by: Neil Brown <neilb@suse.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: <dm-devel@redhat.com>
Alasdair G Kergon <agk@redhat.com> said:
I can see nothing wrong with this in principle.
For device-mapper at the moment though it's essential that, while the bio
mappings may now get delayed, they still get processed in exactly
the same order as they were passed to generic_make_request().
My main concern is whether the timing changes implicit in this patch
will make the rare data-corrupting races in the existing snapshot code
more likely. (I'm working on a fix for these races, but the unfinished
patch is already several hundred lines long.)
It would be helpful if some people on this mailing list would test
this patch in various scenarios and report back.
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Hi,
I have been working on some code that detects abnormal events based on audit
system events. One kind of event that we currently have no visibility for is
when a program terminates due to a segfault - which should never happen on a
production machine. And if it did, you'd want to investigate it. Attached is a
patch that collects these events and sends them into the audit system.
Signed-off-by: Steve Grubb <sgrubb@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Handle the edge cases for POSIX message queue auditing. Collect inode
info when opening an existing mq, and for send/receive operations. Remove
audit_inode_update() as it has really evolved into the equivalent of
audit_inode().
Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
When auditing syscalls that send signals, log the pid and security
context for each target process. Optimize the data collection by
adding a counter for signal-related rules, and avoiding allocating an
aux struct unless we have more than one target process. For process
groups, collect pid/context data in blocks of 16. Move the
audit_signal_info() hook up in check_kill_permission() so we audit
attempts where permission is denied.
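A hedged sketch of the kind of aux structure this implies (the exact names
and layout here are assumptions, not the actual code):

  #define AUDIT_AUX_PIDS  16      /* targets are collected in blocks of 16 */

  struct audit_aux_data_pids {
      struct audit_aux_data  d;                            /* common aux header */
      pid_t                  target_pid[AUDIT_AUX_PIDS];
      u32                    target_sid[AUDIT_AUX_PIDS];   /* security contexts */
      int                    pid_count;
  };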
Signed-off-by: Amy Griffis <amy.griffis@hp.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Yasuyuki Kozakai <yasuyuki.kozakai@toshiba.co.jp>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
These are also in include/net/netfilter/nf_conntrack_helper.h
Signed-off-by: Yasuyuki Kozakai <yasuyuki.kozakai@toshiba.co.jp>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
nf_nat_rule_find, alloc_null_binding and alloc_null_binding_confirmed
do not use the argument 'info', which is actually ct->nat.info.
If they need to access it again, they can use the argument 'ct'
instead.
Signed-off-by: Yasuyuki Kozakai <yasuyuki.kozakai@toshiba.co.jp>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
- move arp_tables initial table structure definitions to arp_tables.h
similar to ip_tables and ip6_tables
- use C99 initializers (see the sketch below)
- use initializer macros where possible
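For illustration, the C99 initializer style referred to above looks like
this (the struct and macro here are made up, not the real arp_tables
definitions):

  struct example_table {
      const char    *name;
      unsigned int  valid_hooks;
      struct module *me;
  };

  /* Designated initializers name each field explicitly, so the
   * initializer stays correct if fields are added or reordered. */
  static struct example_table packet_filter = {
      .name        = "filter",
      .valid_hooks = FILTER_VALID_HOOKS,   /* assumed macro */
      .me          = THIS_MODULE,
  };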
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
__udp_lib_port_inuse() cannot make direct references to
inet_sk(sk)->rcv_saddr as that is ipv4 specific state and
this code is used by ipv6 too.
Use an operations vector to solve this, and this also paves
the way for ipv6 support for non-wild saddr hashing in UDP.
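A hedged sketch of what such an operations vector could look like (all names
here are hypothetical, not the ones actually used):

  /* Hypothetical names -- a sketch of the ops-vector idea only. */
  struct udp_port_ops {
      /* non-zero if the two sockets' bound source addresses conflict */
      int (*saddr_cmp)(const struct sock *sk1, const struct sock *sk2);
  };

  static int ipv4_rcv_saddr_cmp(const struct sock *sk1, const struct sock *sk2)
  {
      /* a wildcard (zero) address conflicts with anything */
      return !inet_sk(sk1)->rcv_saddr || !inet_sk(sk2)->rcv_saddr ||
             inet_sk(sk1)->rcv_saddr == inet_sk(sk2)->rcv_saddr;
  }

  static const struct udp_port_ops udp_ipv4_ops = {
      .saddr_cmp = ipv4_rcv_saddr_cmp,
  };

An ipv6 variant would then supply its own comparison function, so the
generic port-in-use check never touches inet_sk(sk)->rcv_saddr directly.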
Signed-off-by: David S. Miller <davem@davemloft.net>
These days the link watch mechanism is an integral part of the
network subsystem as it manages the carrier status. So it now
makes sense to allocate some memory for it in net_device rather
than allocating it on demand.
In fact, this is necessary because we can't tolerate a memory
allocation failure since that means we'd have to potentially
throw a link up event away.
It also simplifies the code greatly.
In doing so I discovered a subtle race condition in the use
of singleevent. This race condition still exists (and is
somewhat magnified) without singleevent, but it's now plugged
thanks to an smp_mb__before_clear_bit().
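For reference, a generic illustration of the barrier pattern (not the actual
linkwatch code; the names are made up):

  #include <linux/bitops.h>

  #define MY_EVENT_PENDING  0       /* made-up flag bit */

  struct my_event {
      int processed;
  };

  /* clear_bit() is not a memory barrier by itself, so the barrier is
   * needed to make the earlier store visible to other CPUs before they
   * can observe the cleared bit. */
  static void finish_event(struct my_event *ev, unsigned long *flags)
  {
      ev->processed = 1;
      smp_mb__before_clear_bit();
      clear_bit(MY_EVENT_PENDING, flags);
  }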
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Add some OMAP 24xx pin mux declarations to support:
- TUSB 6010 EVM (on H4)
- All three full speed USB ports
- GPIOs used with USB0 on Apollon and H4
For OMAP2, issue MUX_WARNINGS and debug messages correctly, and make the
message look more like the OMAP1 message.
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Tony Lindgren <tony@atomide.com>
This reverts commit c9ccf30d77.
Entering the kernel at startup_32 without passing our real mode data in
%esi, and without guaranteeing that physical and virtual addresses are
identity mapped makes head.S impossible to maintain.
The only user of this infrastructure is lguest, which is not merged, so
nothing we currently support will break by removing this over-designed
nightmare; only the pending lguest patches will be affected. The
pending Xen patches use a different entry point.
We are currently discussing what Xen and lguest need to do to boot the
kernel in a more normal fashion, so using startup_32 in this weird manner is
clearly not their long-term direction.
So let's remove this code in head.S before it causes brain damage to people
trying to maintain head.S.
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Andi Kleen <ak@suse.de>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Zachary Amsden <zach@vmware.com>
CC: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
We keep on getting "right shift count >= width of type" warnings when doing
things like
sector_t s;
x = s >> 56;
because with CONFIG_LBD=n, s is only 32-bit. Similar problems can occur with
dma_addr_t's.
So add a simple wrapper function which code can use to avoid this warning.
The above example would become
x = upper_32_bits(s) >> 24;
The first user is in fact AFS.
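A minimal sketch of such a wrapper (the in-tree definition may differ in
detail): shifting twice by 16 keeps the shift count below the width of a
32-bit type, so gcc stays quiet when CONFIG_LBD=n.

  /* shift in two 16-bit steps so the shift count never reaches the
   * width of the type, even when n is only 32 bits wide */
  #define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))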
Cc: James Bottomley <James.Bottomley@SteelEye.com>
Cc: "Cameron, Steve" <Steve.Cameron@hp.com>
Cc: "Miller, Mike (OS Dev)" <Mike.Miller@hp.com>
Cc: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Avoid atomic overhead in slab_alloc and slab_free
SLUB needs to use the slab_lock for the per cpu slabs to synchronize with
potential kfree operations. This patch avoids that need by moving all free
objects onto a lockless_freelist. The regular freelist continues to exist
and will be used to free objects. So while we consume the
lockless_freelist, the regular freelist may build up objects.
If we are out of objects on the lockless_freelist then we may check the
regular freelist. If it has objects then we move those over to the
lockless_freelist and do this again. This yields significant savings in
terms of atomic operations that have to be performed.
We can even free directly to the lockless_freelist if we know that we are
running on the same processor. This speeds up short-lived objects: they may
be allocated and freed without taking the slab_lock, which is particularly
good for netperf.
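A simplified sketch of the allocation fast path this describes (not the
exact SLUB code; node handling and interrupt disabling are omitted):

  /* Pop an object off the per-cpu lockless_freelist without taking the
   * slab_lock or using atomics; the caller falls back to the slow path
   * (which may refill from the regular freelist) when this returns NULL. */
  static void *slab_alloc_fast(struct page *page)
  {
      void **object;

      if (unlikely(!page || !page->lockless_freelist))
          return NULL;                            /* slow path needed */

      object = page->lockless_freelist;
      page->lockless_freelist = object[page->offset]; /* next free object */
      return object;
  }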
In order to maximize the effect of the new faster hotpath we extract the
hottest performance pieces into inlined functions. These are then inlined
into kmem_cache_alloc and kmem_cache_free. So hotpath allocation and
freeing no longer requires a subroutine call within SLUB.
[I am not sure that it is worth doing this because it changes the easy-to-read
structure of slub just to reduce atomic ops. However, there is someone out
there with a benchmark on 4-way and 8-way processor systems that seems to show
a 5% regression vs. SLAB (apparently due to increased use of atomic operations
in SLUB vs. SLAB). I wonder if this is applicable or discernible at all in a
real workload?]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This adds the CRC calculation according to CRC ITU-T V.41
to the kernel lib/ folder.
This code has been derived from the rt2x00 driver,
currently found only in the wireless-dev tree, but
the library is generic and could be used by more
drivers that currently use their own implementations.
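Usage from a driver would then look something like this (a sketch, assuming
the library exports a crc_itu_t() helper taking an initial value, a buffer
and a length):

  #include <linux/crc-itu-t.h>

  /* Sketch only: CRC ITU-T V.41 over a frame, seeded with 0. */
  static u16 frame_crc(const u8 *buf, size_t len)
  {
      return crc_itu_t(0, buf, len);
  }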
Signed-off-by: Ivo van Doorn <IvDoorn@gmail.com>
Also useful for the new firewire stack.
Signed-off-by: Kristian Hoegsberg <krh@redhat.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Disband drivers/s390/Kconfig, use the common Kconfig files. The s390
specific config options from drivers/s390/Kconfig are moved to the
respective common Kconfig files.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The function shouldn't have existed in the first place (not MSS-aware).
Introduce a new function ccw_device_get_id() that extracts the
ccw_dev_id structure of a ccw device and convert all users of
_ccw_device_get_device_number to ccw_device_get_id.
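A hedged sketch of the conversion for a caller that only wants the device
number (the field names of struct ccw_dev_id are an assumption):

  #include <asm/ccwdev.h>

  /* Sketch only: instead of _ccw_device_get_device_number(cdev), fetch
   * the full id, which also carries the subchannel set id and is
   * therefore MSS-aware. */
  static u16 example_get_devno(struct ccw_device *cdev)
  {
      struct ccw_dev_id dev_id;

      ccw_device_get_id(cdev, &dev_id);
      return dev_id.devno;
  }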
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
This adds the necessary support to hpte_decode() to handle 1TB
segments and 16GB pages, and removes an uninitialized value
warning on avpn.
We don't have any code to generate HPTEs for 1TB segments or 16GB
pages yet, so this is mostly for completeness, and to fix the
warning.
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Fix a PS3 build error when CONFIG_PS3_SYS_MANAGER=n.
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
thus we get a link error on ppc64 with CONFIG_PM=y. This fixes it.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
The ACPI EC that is used in MSI laptops knows some non-standard
commands for changing the screen brightness and a few other things,
which are used by the msi-laptop.c driver. Unfortunately, for these
commands no GPE events for IBF and OBF are triggered. Since nowadays
the EC code uses the ec_intr=1 mode by default, this causes these
operations to time out, although they don't fail. As a result, all
operations that you can do with the msi-laptop.c driver take more or
less 1s to complete, which is awfully slow.
In one of the more recent kernels (2.6.20?) the EC subsystem was
revamped. With that change the EC timeout was increased. Before
that increase the MSI EC accesses were slow -- but not *that* slow,
hence I noticed this limitation of the MSI EC hardware only very
recently.
The standard EC operations on the MSI EC as defined in the ACPI spec
support GPE events properly.
The following patch adds a new argument "force_poll" to the
ec_transaction() function (and friends). If set to 1, the function
will poll for IBF/OBF even if ec_intr=1 is enabled. If set to 0 the
current behaviour is used. The msi-laptop driver is modified to make
use of this new flag, so that OBF/IBF is polled for the special MSI EC
transactions -- but only for them.
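A hedged sketch of how an MSI-specific EC access might look with the new
argument (the prototype and the command value are assumptions based on the
description above):

  /* Assumed prototype, reconstructed from the description above. */
  int ec_transaction(u8 command, const u8 *wdata, unsigned wdata_len,
                     u8 *rdata, unsigned rdata_len, unsigned force_poll);

  /* Sketch only: a non-standard MSI EC read that polls IBF/OBF even when
   * the EC is otherwise running in interrupt (ec_intr=1) mode. */
  static int msi_ec_read(u8 addr, u8 *val)
  {
      return ec_transaction(0x80 /* hypothetical command */,
                            &addr, 1, val, 1, 1 /* force_poll */);
  }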
Signed-off-by: Lennart Poettering <mzxreary@0pointer.de>
Acked-by: Alexey Starikovskiy <aystarik@gmail.com>
Signed-off-by: Len Brown <len.brown@intel.com>
The rheap allocation functions return a pointer, but the actual value is based
on how the heap was initialized, and so it can be anything, e.g. an offset
into a buffer. A ulong is a better representation of the value returned by
the allocation functions.
This patch changes all of the relevant rheap functions to use unsigned long
integers instead of pointers. In case of an error, the value returned is
a negative error code that has been cast to an unsigned long. The caller can
use the IS_ERR_VALUE() macro to check for this.
All code that calls the rheap functions is updated accordingly. The macros
IS_MURAM_ERR() and IS_DPERR() have been deleted in favor of IS_ERR_VALUE().
Also added error checking to rh_attach_region().
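Callers then check the return value as in the sketch below (the rh_alloc()
parameters shown are assumptions):

  #include <linux/err.h>

  /* Sketch only: the returned unsigned long is either a valid offset or
   * a negative errno cast to unsigned long. */
  static long example_alloc(rh_info_t *info, int size)
  {
      unsigned long offset = rh_alloc(info, size, "example-owner");

      if (IS_ERR_VALUE(offset))
          return (long)offset;    /* propagate the error code */
      return offset;
  }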
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
This patch moves a copy of reg_booke.h to include/asm-powerpc and fixes
up the ifdef protection.
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
This reverts commit c0d127b569.
These changes to AML locking were made to allow
Notify handlers to be called on the stack
and not deadlock. However, that scheme turns
out to be flawed and was reverted by the previous commit,
so this commit restores the locking to its previous design.
Signed-off-by: Len Brown <len.brown@intel.com>
This reverts commit a8f4af6dc6.
Thus restoring ACPICA's new acpi_serialize code.
This commit by itself may cause a regression, but
it is reverted in this order so that the subsequent
reverts under this one can be made
without conflict.
Signed-off-by: Len Brown <len.brown@intel.com>
This reverts commit 5b479c91da.
Quoth Neil Brown:
"It causes an oops when auto-detecting raid arrays, and it doesn't
seem easy to fix.
The array may not be 'open' when do_md_run is called, so
bdev->bd_disk might be NULL, so bd_set_size can oops.
This whole approach of opening an md device before it has been
assembled just seems to get more and more painful. I think I'm going
to have to come up with something clever to provide both backward
compatibility with usage expectations, and sane integration into the
rest of the kernel."
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
IDE PCI host drivers should register themselves with the IDE core only when
the IDE driver is built-in; otherwise (the IDE driver is modular, and thus the
IDE PCI host drivers are also modular) the code has no effect and just
complicates the probing.
Fix it by adding a new config option, CONFIG_IDEPCI_PCIBUS (defined only when
needed and invisible to the user), and covering the code in question with
#ifdef/#endif. It turned out that "ide=reverse" was silently accepted but did
nothing when the IDE driver was modular; this is fixed now.
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>