Fix forgotten kmemleak header inclusion for the kmemleak_not_leak()
declaration.
This fixes the following build error:
arch/sparc/kernel/irq_64.c: In function ‘sun4v_build_virq’:
arch/sparc/kernel/irq_64.c:657: error: implicit declaration of function ‘kmemleak_not_leak’
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The only reference we store to this memory is in the form of a
physical address, so kmemleak can't see it.
Add a kmemleak_not_leak() annotation.
It's probably useful to be able to look at a dump of these things
either via debugfs or similar, and thus we could at some point store
them in some kind of table and therefore get rid of this annotation.
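A minimal sketch of the annotation (allocation context illustrative):

    #include <linux/kmemleak.h>

    bucket = kzalloc(sizeof(struct ino_bucket), GFP_ATOMIC);
    if (!bucket)
            return 0;
    /* Only __pa(bucket) is ever stored, which kmemleak cannot scan,
     * so tell it this allocation is not a leak. */
    kmemleak_not_leak(bucket);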
Signed-off-by: David S. Miller <davem@davemloft.net>
Can't reference irq_desc[].affinity when !SMP.
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As noted by Benjamin Herrenschmidt, the generic IRQ layer
only sets irq_desc[irq].affinity after ->set_affinity()
succeeds.
So we have to use the passed in cpumask.
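Illustratively, the handler has to derive the target from its mask
argument (function and variable names illustrative):

    static int example_set_affinity(unsigned int virt_irq,
                                    const struct cpumask *mask)
    {
            /* irq_desc[virt_irq].affinity is only updated by genirq
             * after this handler succeeds, so use 'mask' itself. */
            unsigned int cpu = cpumask_first_and(mask, cpu_online_mask);

            /* ... retarget the interrupt at 'cpu' ... */
            return 0;
    }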
Signed-off-by: David S. Miller <davem@davemloft.net>
Convert locks which cannot be sleeping locks in preempt-rt to
raw_spinlocks.
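The shape of such a conversion (lock name illustrative):

    static DEFINE_RAW_SPINLOCK(irq_table_lock);  /* was DEFINE_SPINLOCK */

    raw_spin_lock_irqsave(&irq_table_lock, flags);
    /* ... critical section that must not sleep on preempt-rt ... */
    raw_spin_unlock_irqrestore(&irq_table_lock, flags);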
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
The typename member of struct irq_chip was kept for migration purposes
and has been obsolete for more than two years. Fix up the leftovers.
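For example (chip definition illustrative):

    static struct irq_chip sun4u_irq = {
            .name = "sun4u",    /* was: .typename = "sun4u" */
            /* remaining callbacks unchanged */
    };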
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Functions invoked early when booting up a cpu can't use
tracing, because mcount requires a valid 'current_thread_info()'
and TLB mappings to be set up.
The code path of sun4v_register_mondo_queues --> register_one_mondo
is one such case. sun4v_register_mondo_queues already has the
necessary 'notrace' annotation, but register_one_mondo does not.
Normally register_one_mondo is inlined so the bug doesn't trigger,
but with some config/compiler combinations it won't be, so we
must properly mark it notrace.
While we're here, add 'notrace' annotations to prom_printf and
prom_halt so that early error handling won't have the same problem.
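The annotations themselves are small; illustratively:

    static void notrace register_one_mondo(unsigned long paddr,
                                           unsigned long type,
                                           unsigned long qmask);

    void notrace prom_printf(const char *fmt, ...);
    void notrace prom_halt(void);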
Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
Reported-by: Leif Sawyer <lsawyer@gci.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The page allocator and SLAB are available at this point now,
and if we still try to use bootmem allocations here the kernel
spits out warnings.
Signed-off-by: David S. Miller <davem@davemloft.net>
irq_choose_cpu() should compare the affinity mask against cpu_online_map
rather than CPU_MASK_ALL, since irq_select_affinity() sets the interrupt's
affinity mask to cpu_online_map ANDed with CPU_MASK_ALL (which ends up
being just cpu_online_map). The mask comparison in irq_choose_cpu()
therefore always fails, since the two masks are not the same. So the CPU
chosen is the first CPU in the intersection of cpu_online_map and
CPU_MASK_ALL, which is always CPU0.
That means all interrupts are reassigned to CPU0...
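A sketch of the comparison fix, using the era's cpumask API:

    if (cpus_equal(mask, cpu_online_map)) {    /* was: CPU_MASK_ALL */
            /* no explicit affinity was requested; use the rover */
    }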
Distributing interrupts to CPUs in a linearly increasing round robin fashion
is not optimal for the UltraSPARC T1/T2. Also, the irq_rover in
irq_choose_cpu() causes an interrupt to be assigned to a different
processor each time the interrupt is allocated and released. This may lead
to an unbalanced distribution over time.
A static mapping of interrupts to processors is done to optimize and balance
interrupt distribution. For the T1/T2, interrupts are spread to different
cores first, and then to strands within a core.
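A hedged sketch of a cores-first enumeration (num_cores and
strands_per_core are hypothetical variables, not the patch's actual
code):

    /* Map the n-th interrupt to a strand, visiting every core once
     * before using a second strand on any core. */
    static unsigned int pick_cpu(unsigned int n)
    {
            unsigned int core   = n % num_cores;
            unsigned int strand = (n / num_cores) % strands_per_core;

            return core * strands_per_core + strand;
    }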
The following benchmarks show the effects of interrupt
distribution on a T2. The test was done with iperf using a pair of T5220
boxes, each with a 10GbE NIU (XAUI) connected back to back.
TCP     | Stock       Linear RR IRQ  Optimized IRQ
Streams | 2.6.30-rc5  Distribution   Distribution
        | GBits/sec   GBits/sec      GBits/sec
--------+-----------------------------------------
      1 |     0.839         0.862          0.868
      8 |     1.16          4.96           5.88
     16 |     1.15          6.40           8.04
    100 |     1.09          7.28           8.68
Signed-off-by: Hong H. Pham <hong.pham@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
According to Ingo, set_affinity() in irq_chip should return int,
because that way we can handle failure cases in a much cleaner way
in the genirq layer.
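Illustratively, the irq_chip prototype changes from void to int:

    int (*set_affinity)(unsigned int irq, const struct cpumask *dest);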
v2: fix two typos
[ Impact: extend API ]
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: linux-arch@vger.kernel.org
LKML-Reference: <49F654E9.4070809@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Impact: cleanup, futureproof
In fact, all cpumask ops will only be valid (in general) for bit
numbers < nr_cpu_ids. So use that instead of NR_CPUS in various
places.
This is always safe: no cpu number can be >= nr_cpu_ids, and
nr_cpu_ids is initialized to NR_CPUS at boot.
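A typical instance of the conversion (loop body illustrative):

    for (i = 0; i < nr_cpu_ids; i++)    /* was: i < NR_CPUS */
            do_something_per_cpu(i);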
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Based upon a report by Meelis Roos.
Sparc64 SBUS and PCI controllers use a combination of IMAP and ICLR
registers to manage device interrupts.
The IMAP register contains the "valid" enable bit as well as CPU
targeting information, whereas the ICLR register is written with
zero at the end of handling an interrupt to reset that interrupt's
state machine to IDLE so it can be sent again.
For PCI slot and SBUS slot devices we can have multiple interrupts
sharing the same IMAP register. There are individual ICLR registers
but only one IMAP register for managing those.
We represent each shared case with individual virtual IRQs so the
generic IRQ layer thinks there is only one user of the IRQ instance.
In such shared IMAP cases this is wrong, so if there are multiple
active users then a free_irq() call will prematurely turn off the
interrupt by clearing the Valid bit in the IMAP register even though
there are other active users.
Fix this by simply doing nothing in sun4u_disable_irq() and checking
IRQF_DISABLED during IRQ dispatch.
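A sketch of the resulting no-op:

    static void sun4u_disable_irq(unsigned int virt_irq)
    {
            /* Intentionally empty: clearing the IMAP Valid bit here
             * would shut off every other active user sharing this
             * IMAP register. */
    }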
This situation doesn't exist in the hypervisor sun4v cases, so I left
those alone.
Tested-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: David S. Miller <davem@davemloft.net>
Changeset d7e51e6689 ("sparseirq: make
some func to be used with genirq") broke the build on sparc64:
arch/sparc/kernel/irq_64.c: In function ‘show_interrupts’:
arch/sparc/kernel/irq_64.c:188: error: ‘struct kernel_stat’ has no member named ‘irqs’
make[1]: *** [arch/sparc/kernel/irq_64.o] Error 1
Fix by using the kstat_irqs_cpu() interface.
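The conversion, illustratively:

    seq_printf(p, "%10u ", kstat_irqs_cpu(i, j));
                    /* was: kstat_cpu(j).irqs[i] */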
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Impact: cleanup, update to new cpumask API
Irq_desc.affinity and irq_desc.pending_mask are now cpumask_var_t's,
so access to them should use the new cpumask API.
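For example, direct assignment becomes a copy through the pointer
(a sketch):

    cpumask_copy(irq_desc[virt_irq].affinity, mask);
                    /* was: irq_desc[virt_irq].affinity = *mask; */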
Signed-off-by: Mike Travis <travis@sgi.com>
o Move all files from sparc64/kernel/ to sparc/kernel
- rename as appropriate
o Update sparc/Makefile to the changes
o Update sparc/kernel/Makefile to include the sparc64 files
NOTE: This commit changes link order on sparc64!
Link order had to change for either sparc32 or sparc64.
Assuming sparc64 sees more testing than sparc32, change the link
order on sparc64, where issues will be caught faster.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Based upon a report by Meelis Roos.
Any function call can try to access the current
thread register via the _mcount hooks when the kernel
is built with -pg (via ftrace or STACK_DEBUG).
That can't be set up properly very early during
the bootup of other cpus for sun4u and some early
sun4v systems.
So add notrace markers to these specific functions, so
that _mcount doesn't get invoked too early.
Signed-off-by: David S. Miller <davem@davemloft.net>
When a CPU is offlined, we leave the timer interrupts disabled
because fixup_irqs() does not explicitly take care of that case.
Fix this by invoking tick_ops->disable_irq().
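A sketch of the addition (exact placement in the offline path elided):

    static void fixup_irqs(void)
    {
            /* ... retarget device interrupts away from this cpu ... */

            tick_ops->disable_irq();    /* the previously missing step */
    }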
Based upon analysis done by Paul E. McKenney.
Signed-off-by: David S. Miller <davem@davemloft.net>
In order to make this work I also had to add an include
of linux/dma-mapping.h to asm/pci_32.h because drivers/pci/pci.c
really depends upon getting this header somehow.
Signed-off-by: David S. Miller <davem@davemloft.net>
We're calling request_irq() with IRQs disabled.
No straightforward fix exists because we want to
enable these IRQs and setup state atomically before
getting into the IRQ handler the first time.
What happens now is that we mark the VIRQ to not be
automatically enabled by request_irq(). Then we
make explicit enable_irq() calls when we grab the
LDC channel.
This way we don't need to call request_irq() illegally
under the LDC channel lock any more.
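The pattern, sketched with the generic NOAUTOEN machinery
(irq_set_status_flags() is the later helper spelling; the era's code
set the flag on the irq_desc directly, and the request_irq() arguments
here are illustrative):

    irq_set_status_flags(virq, IRQ_NOAUTOEN);
    err = request_irq(virq, handler, 0, name, dev);

    /* later, while grabbing the LDC channel: */
    enable_irq(virq);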
Bump LDC version and release date.
Signed-off-by: David S. Miller <davem@davemloft.net>
Kernel bugzilla 10273
As reported by Jos van der Ende, ever since commit
5a606b72a4 ("[SPARC64]: Do not ACK an
INO if it is disabled or inprogress.") sun4u interrupts
can get stuck.
What this changeset did was add the following conditional to
the various IRQ chip ->enable() handlers on sparc64:
if (unlikely(desc->status & (IRQ_DISABLED|IRQ_INPROGRESS)))
return;
which is correct, however it means that special care is needed
in the ->enable() method.
Specifically we must put the interrupt into IDLE state during
an enable, or else it might never be sent out again.
Setting the INO interrupt state to IDLE resets the state machine,
the interrupt input to the INO is retested by the hardware, and
if an interrupt is being signalled by the device, the INO
moves back into TRANSMIT state, and an interrupt vector is sent
to the cpu.
The two sun4v IRQ chip handlers were already doing this properly,
only sun4u got it wrong.
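A sketch of the sun4u ->enable() fix's shape (register variable
illustrative):

    /* After programming the IMAP, reset the INO state machine so a
     * still-asserted device interrupt is retested and resent. */
    upa_writeq(ICLR_IDLE, iclr);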
Signed-off-by: David S. Miller <davem@davemloft.net>
Invoke desc->handle_irq directly in the top-level dispatch,
just like other sophisticated ports.
This will allow us to decrease the cost of the MSI queue dispatch.
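Illustratively, the top-level dispatch becomes:

    struct irq_desc *desc = irq_desc + virt_irq;

    desc->handle_irq(virt_irq, desc);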
Signed-off-by: David S. Miller <davem@davemloft.net>
Do not use *alloc_bootmem_low*(), because ARCH_LOW_ADDRESS_LIMIT
is 4GB and this results in boot failures if all of the physical
memory in the machine is above 4GB.
Signed-off-by: David S. Miller <davem@davemloft.net>
It no longer translates to "real irqs" (a.k.a. INO buckets),
so reflect that by using a simpler name for it.
Signed-off-by: David S. Miller <davem@davemloft.net>
All the users go through virt_irq_to_bucket() and essentially
want to go from a virt_irq to an INO, but we have a way
to do that already via virt_to_real_irq_table[].dev_ino.
This also allows us to kill both virt_to_real_irq() and
virt_irq_to_bucket().
Signed-off-by: David S. Miller <davem@davemloft.net>
We have a place to stick INO information in the
virt_to_real_irq_table[], which is currently only used for VIRQs.
And that is readily accessible from the one __irq_ino() call site.
Signed-off-by: David S. Miller <davem@davemloft.net>
We were simply concatenating the devhandle and devino and using that
as the cookie, which defeats the entire purpose of the VIRQ hypervisor
interfaces.
Now that we use physical addresses for the INO buckets, we can
allocate them dynamically for VIRQs and encode the cookies as
~__pa(bucket). This allows us to test for and decode the cookie with
a simple:
brlz $reg1, 1f
xnor $reg1, %g0, $reg2
sequence.
This works because the most significant bit is never set
in traditional INO vectors, and it is also never set in a
physical address. So xnor'ing the physical address of the bucket
always gives us a negative number, and thus a unique
condition we can test cheaply.
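In C terms, the encode/decode pair is (a sketch):

    unsigned long cookie = ~__pa(bucket);    /* top bit set => negative */
    struct ino_bucket *bp = __va(~cookie);   /* decode on dispatch */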
Inspired by ideas from Greg Onufer.
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently we chain IVEC entries using 32-bit "pointers"
because we know that the ivector_table is in the main
kernel image, thus below 4GB.
This uses proper 64-bit pointers instead.
Whilst this bloats the kernel image size, it lays the
infrastructure necessary to significantly shrink the
kernel size by using physical addresses and dynamically
allocating the ivector table.
Signed-off-by: David S. Miller <davem@davemloft.net>
This also makes us use the MSI queues correctly.
Each MSI queue is serviced by a normal sun4u/sun4v INO interrupt
handler. This handler runs the MSI queue and dispatches the
virtual interrupts indicated by arriving MSIs in that MSI queue.
All of the common logic is placed in pci_msi.c, with callbacks to
handle the PCI controller specific aspects of the operations.
This common infrastructure will make it much easier to add MSG
support.
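A hedged sketch of the callback table's shape (member names
illustrative):

    struct sparc64_msiq_ops {
            int (*get_head)(struct pci_pbm_info *pbm,
                            unsigned long msiqid, unsigned long *head);
            int (*set_head)(struct pci_pbm_info *pbm,
                            unsigned long msiqid, unsigned long head);
            /* plus controller-specific MSI/MSIQ setup and teardown hooks */
    };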
Signed-off-by: David S. Miller <davem@davemloft.net>
The support code is identical to the hypervisor sun4v stuff,
just replacing the hypervisor calls with register reads and
writes in the Fire controller.
Signed-off-by: David S. Miller <davem@davemloft.net>
1) sun4{u,v}_build_msi() have improper return value handling.
We should always return negative error codes, instead of
using the magic value "0" which could in fact be a valid
MSI number.
2) sun4{u,v}_build_msi() should return -ENOMEM instead of
   calling prom_halt() when kzalloc() of the interrupt
   data fails.
3) We 'remembered' the MSI number using a singleton in the
   struct device archdata area; this doesn't work for MSI-X,
   which can have multiple MSIs associated with one device.
Delete that archdata member, and instead store the MSI
number in the IRQ chip data area.
Signed-off-by: David S. Miller <davem@davemloft.net>
Sometimes we were using 32-bit values and the top bits were
getting inadvertently chopped off. This will matter for the
forthcoming Fire controller MSI support.
Signed-off-by: David S. Miller <davem@davemloft.net>
Every time a cpu is added via hotplug, we allocate the per-cpu MONDO
queues but we never free them up. Freeing isn't easy since the first
cpu gets this memory from bootmem.
Therefore, the simplest thing to do to fix this bug is to allocate the
queues for all possible cpus at boot time.
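The shape of the fix (helper name hypothetical):

    for_each_possible_cpu(cpu)
            alloc_one_mondo_queue(cpu);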
Signed-off-by: David S. Miller <davem@davemloft.net>
The dev_handle and dev_ino fields don't match up exactly to
the traditional IMAP_IGN and IMAP_INO masks.
So store them away in a table and look them up directly.
Signed-off-by: David S. Miller <davem@davemloft.net>
dr-cpu unconfigure requests will walk through the enabled
IRQs and trigger ->set_affinity so that the going-down
cpu no longer has INOs targeted at it.
Signed-off-by: David S. Miller <davem@davemloft.net>