[ Upstream commit 20c20bd11a0702ce4dc9300c3da58acf551d9725 ]
map is a pointer to the outer map, and need_defer needs some explanation:
it tells the implementation to defer releasing the reference on the passed
element and to ensure that the element is still alive until any bpf
program which may manipulate it has exited.
The following three cases will invoke map_fd_put_ptr(), and a different
need_defer value is passed in each case:
1) releasing the reference of the old element in the map during map update
or map deletion. The release must be deferred, otherwise the bpf
program may incur a use-after-free problem, so need_defer needs to be
true.
2) releasing the reference of the to-be-added element in the error path of
map update. The to-be-added element is not visible to any bpf
program, so it is OK to pass false for the need_defer parameter.
3) releasing the references of all elements in the map during map release.
Any bpf program which has access to the map must have exited and been
released, so need_defer=false is OK.
These two parameters will be used by the following patches to fix the
potential use-after-free problem for map-in-map.
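As a rough illustration (kernel-style sketch, not the complete patch; the
callback signature follows the upstream change, the call sites are
abridged), the three cases above pass need_defer like this:

    /* fd-based maps now take the outer map and need_defer on put */
    void (*map_fd_put_ptr)(struct bpf_map *map, void *ptr, bool need_defer);

    /* 1) old element replaced or deleted while programs may still see it */
    map->ops->map_fd_put_ptr(map, old_ptr, true);
    /* 2) error path of an update: the new element was never visible */
    map->ops->map_fd_put_ptr(map, new_ptr, false);
    /* 3) map release: no program using the map can still be running */
    map->ops->map_fd_put_ptr(map, ptr, false);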
Signed-off-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20231204140425.1480317-3-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit 5aa1e7d3f6d0db96c7139677d9e898bbbd6a7dcf)
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
[ Upstream commit 1d4e1eab456e1ee92a94987499b211db05f900ea ]
Fix a HASH_OF_MAPS bug of not putting the inner map pointer on a
bpf_map_update_elem() operation. This is due to the per-cpu extra_elems
optimization, which bypassed the free_htab_elem() logic doing the proper
clean-ups. Make sure that the inner map is put properly in the optimized
case as well.
Fixes: 8c290e60fa ("bpf: fix hashmap extra_elems logic")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200729040913.2815687-1-andriin@fb.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 9b75dbeb36fcd9fc7ed51d370310d0518a387769 ]
When looking up an element in the LPM trie, the condition 'matchlen ==
trie->max_prefixlen' can never be true if key->prefixlen is larger than
trie->max_prefixlen. Consequently all elements in the LPM trie will be
visited and no element is returned in the end.
To resolve this, check key->prefixlen first, before walking the LPM trie.
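A minimal sketch of the added guard (simplified from the fix; the trie
walk itself is unchanged and omitted here):

    static void *trie_lookup_elem(struct bpf_map *map, void *_key)
    {
            struct lpm_trie *trie = container_of(map, struct lpm_trie, map);
            struct bpf_lpm_trie_key *key = _key;

            if (key->prefixlen > trie->max_prefixlen)
                    return NULL;

            /* ... walk the trie for the longest matching prefix ... */
    }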
Fixes: b95a5c4db0 ("bpf: add a longest prefix match trie map implementation")
Signed-off-by: Florian Lehner <dev@der-flo.net>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20231105085801.3742-1-dev@der-flo.net
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
(cherry picked from commit 1b653d866e0fe86e424fe4b8fa743d716eee71b6)
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
We allow more than 255 binderfs binder devices to be created since there
are workloads that require more than that. If we use __u8 we'll overflow
after 255, so let's use a __u32.
Note that there's no released kernel with binderfs out there so this is
not a regression.
Signed-off-by: Christian Brauner <christian@brauner.io>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 7d0174065f4903fb0ce0bab3d5047284faa7226d)
Bug: 228263403
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Change-Id: If143c0d6511946fac6349c5db7c013535950de4a
commit c6d05e0762ab276102246d24affd1e116a46aa0c upstream.
Each transaction is associated with a 'struct binder_buffer' that stores
the metadata about its buffer area. Since commit 74310e06be ("android:
binder: Move buffer out of area shared with user space") this struct is
no longer embedded within the buffer itself but is instead allocated on
the heap to prevent userspace access to this driver-exclusive info.
Unfortunately, the space of this struct is still being accounted for in
the total buffer size calculation, specifically for async transactions.
This results in an additional 104 bytes added to every async buffer
request, and this area is never used.
This wasted space can be substantial. If we consider the maximum mmap
buffer space of SZ_4M, the driver will reserve half of it for async
transactions, or 0x200000. This area should, in theory, accommodate up
to 262,144 buffers of the minimum 8-byte size. However, after adding
the extra 'sizeof(struct binder_buffer)', the total number of buffers
drops to only 18,724, which is a sad 7.14% of the actual capacity.
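For reference, the figures above follow directly from the quoted sizes
(a back-of-the-envelope check, not part of the patch):

    async space   = 0x200000 bytes (half of SZ_4M)
    ideal count   = 0x200000 / 8          = 262,144 buffers
    actual count  = 0x200000 / (8 + 104)  ~  18,724 buffers
    utilization   = 18,724 / 262,144      ~   7.14%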
This patch fixes the buffer size calculation to enable the utilization
of the entire async buffer space. This is expected to reduce the number
of -ENOSPC errors that are seen in the field.
Fixes: 74310e06be ("android: binder: Move buffer out of area shared with user space")
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-6-cmllamas@google.com
[cmllamas: fix trivial conflict with missing 261e7818f06e.]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit e2425a67b5ed67496959d0dfb99816f5757164b0)
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
commit 9a9ab0d963621d9d12199df9817e66982582d5a5 upstream.
Task A calls binder_update_page_range() to allocate and insert pages on
a remote address space from Task B. For this, Task A pins the remote mm
via mmget_not_zero() first. This can race with Task B's do_exit(), and
the final mmput() refcount decrement will then come from Task A.
Task A            | Task B
------------------+------------------
mmget_not_zero()  |
                  | do_exit()
                  |   exit_mm()
                  |     mmput()
mmput()           |
  exit_mmap()     |
    remove_vma()  |
      fput()      |
In this case, the work of ____fput() from Task B is queued up in Task A
as TWA_RESUME. So in theory, Task A returns to userspace and the cleanup
work gets executed. However, Task A instead sleeps, waiting for a reply
from Task B that never comes (it's dead).
This means the binder_deferred_release() is blocked until an unrelated
binder event forces Task A to go back to userspace. All the associated
death notifications will also be delayed until then.
In order to fix this, use mmput_async(), which will schedule the work on
the corresponding mm->async_put_work workqueue instead of on Task A.
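In essence, the change described above is (sketch; surrounding context in
binder_update_page_range() omitted):

    -       mmput(mm);
    +       mmput_async(mm);        /* defer a possibly-final put to a workqueue */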
Fixes: 457b9a6f09 ("Staging: android: add binder driver")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Change-Id: I3f3bb5dc31d99ae6198e40c481ce48b236516bea
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-4-cmllamas@google.com
[cmllamas: fix trivial conflict with missing d8ed45c5dcd4.]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 95b1d336b0642198b56836b89908d07b9a0c9608)
[fix conflict due to missing commit 720c241924046aff83f5f2323232f34a30a4c281
("ANDROID: binder: change down_write to down_read")]
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
commit 3091c21d3e9322428691ce0b7a0cfa9c0b239eeb upstream.
Move the padding of 0-sized buffers to an earlier stage to account for
this round up during the alloc->free_async_space check.
Fixes: 74310e06be ("android: binder: Move buffer out of area shared with user space")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-5-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 05088b886fea59cc827e5b5cedb66165cf532f72)
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
commit e1090371e02b601cbfcea175c2a6cc7c955fa830 upstream.
Update the comments of binder_alloc_new_buf() to reflect that the return
value of the function is now ERR_PTR(-errno) on failure.
No functional changes in this patch.
Cc: stable@vger.kernel.org
Fixes: 57ada2fb22 ("binder: add log information for binder transaction failures")
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20231201172212.1813387-8-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 10cfdc51c399890e535ccc16ed3f58b7c5e8f93e)
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
binder_inner_proc_lock(thread->proc) takes a spinlock, and copy_to_user()
cannot be called while holding it.
Fix this by copying the data to a local variable under the lock and
calling copy_to_user() after the lock is released.
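A minimal sketch of the pattern this fix applies (the struct and helper
names follow the driver, but the snippet is abridged and illustrative):

    struct binder_extended_error ee;

    binder_inner_proc_lock(thread->proc);
    ee = thread->ee;        /* snapshot the data under the spinlock */
    binder_inner_proc_unlock(thread->proc);

    /* copy_to_user() may fault and sleep, so call it only after unlock */
    if (copy_to_user(ptr, &ee, sizeof(ee)))
            ret = -EFAULT;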
Fixes: bd32889e841c ("binder: add BINDER_GET_EXTENDED_ERROR ioctl")
Reported-by: syzbot+46fff6434a7f968ecb39@syzkaller.appspotmail.com
Reviewed-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Schspa Shi <schspa@gmail.com>
Link: https://lore.kernel.org/r/20220518011754.49348-1-schspa@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I1de3962ef9b29208c97df6379d5bc46942accef1
Add extended_error to the binderfs feature list, to help userspace
determine whether the BINDER_GET_EXTENDED_ERROR ioctl is supported by
the binder driver.
Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20220429235644.697372-4-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I68a686e9d7620d1901d139d367f736d5a0a7ad54
Provide a userspace mechanism to pull precise error information upon
failed operations. Extending the current error codes returned by the
interfaces allows userspace to better determine the course of action.
This could be for instance, retrying a failed transaction at a later
point and thus offloading the error handling from the driver.
Acked-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20220429235644.697372-3-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: Ibd0f984984e0425335e5a9579c6d6d8214b8cb56
Provide userspace with a mechanism to discover features supported by
the binder driver to refrain from using any unsupported ones in the
first place. Starting with "oneway_spam_detection" only new features
are to be listed under binderfs and all previous ones are assumed to
be supported.
Assuming an instance of binderfs has been mounted at /dev/binderfs,
binder feature files can be found under /dev/binderfs/features/.
Usage example:
$ mkdir /dev/binderfs
$ mount -t binder binder /dev/binderfs
$ cat /dev/binderfs/features/oneway_spam_detection
1
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Link: https://lore.kernel.org/r/20210715031805.1725878-1-cmllamas@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: Idfce7eef00325cf26d6fbdea21484f933e63400a
security_secid_to_secctx() can fail because of a GFP_ATOMIC allocation
This needs to be retried from userspace. However, binder driver doesn't
propagate specific enough error codes just yet (WIP b/28321379). We'll
retry on the binder driver as a temporary work around until userspace
can do this instead.
Bug: 174806915
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Change-Id: Ifebddeb7adf9707613512952b97ab702f0d2d592
All the other ioctl paths return EFAULT in case the
copy_from_user/copy_to_user call fails, make oneway spam detection
follow the same paradigm.
Fixes: a7dc1e6f99df ("binder: tell userspace to dump current backtrace when detected oneway spamming")
Acked-by: Todd Kjos <tkjos@google.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Luca Stefani <luca.stefani.ge1@gmail.com>
Link: https://lore.kernel.org/r/20210506193726.45118-1-luca.stefani.ge1@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit ced081a436d21a7d34d4d42acb85058f9cf423f2)
Bug: 187129171
Signed-off-by: Connor O'Brien <connoro@google.com>
Change-Id: I7c5e6ec7108c42721de6c82f4c1e9ff3d4f0e88d
When the async binder buffer gets exhausted, some normal oneway
transactions will also be discarded and may cause system or application
failures. By that time, the binder debug information we dump may not be
relevant to the root cause, and this issue is difficult to debug without
the backtrace of the thread sending the spam.
This change sends BR_ONEWAY_SPAM_SUSPECT to userspace when oneway
spamming is detected, requesting a dump of the current backtrace. Oneway
spamming is reported only once when the threshold is exceeded (the target
process dips below 80% of its oneway space, and the current process is
responsible for either more than 50 transactions, or more than 50% of the
oneway space). Detection will restart once the async buffer has returned
to a healthy state.
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Hang Lu <hangl@codeaurora.org>
Link: https://lore.kernel.org/r/1617961246-4502-3-git-send-email-hangl@codeaurora.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 181190340
Change-Id: Id3d2526099bc89f04d8ad3ad6e48141b2a8f2515
(cherry picked from commit a7dc1e6f99df59799ab0128d9c4e47bbeceb934d)
Signed-off-by: Hang Lu <hangl@codeaurora.org>
Add BR_FROZEN_REPLY to binder_return_strings to support the stats function.
Fixes: ae28c1be1e54 ("binder: BINDER_GET_FROZEN_INFO ioctl")
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Hang Lu <hangl@codeaurora.org>
Link: https://lore.kernel.org/r/1617961246-4502-2-git-send-email-hangl@codeaurora.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 005169157448ca41eff8716d79dc1b8f158229d2
git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git char-misc-next)
Change-Id: Ib12e3f1dc1a389c9b4d5e9f60bd740d269dadf94
Signed-off-by: Hang Lu <hangl@codeaurora.org>
Currently cgroup freezer is used to freeze the application threads, and
BINDER_FREEZE is used to freeze the corresponding binder interface.
There's already a mechanism in ioctl(BINDER_FREEZE) to wait for any
existing transactions to drain out before actually freezing the binder
interface.
But freezing an app requires 2 steps, freezing the binder interface with
ioctl(BINDER_FREEZE) and then freezing the application main threads with
cgroupfs. This is not an atomic operation. The following race issue
might happen.
1) Binder interface is frozen by ioctl(BINDER_FREEZE);
2) Main thread A initiates a new sync binder transaction to process B;
3) Main thread A is frozen by "echo 1 > cgroup.freeze";
4) The response from process B reaches the frozen thread, which will
unexpectedly fail.
This patch provides a mechanism to check if there's any new pending
transaction happening between ioctl(BINDER_FREEZE) and freezing the
main thread. If there's any, the main thread freezing operation can
be rolled back to finish the pending transaction.
Furthermore, the response might reach the binder driver before the
rollback actually happens, which would still cause a failed transaction.
Since the other process doesn't wait for a response to the response, the
failure can be fixed by treating the response transaction like a
oneway/async one, allowing it to reach the frozen thread. It will then be
consumed when the thread gets unfrozen later.
NOTE: This patch reuses the existing definition of struct
binder_frozen_status_info but expands the bit assignments of __u32
member sync_recv.
To ensure backward compatibility, bit 0 of sync_recv still indicates
there's an outstanding sync binder transaction. This patch adds new
information to bit 1 of sync_recv, indicating the binder transaction
happens exactly when there's a race.
If an existing userspace app runs on a new kernel, a sync binder call
will set bit 0 of sync_recv, so ioctl(BINDER_GET_FROZEN_INFO) still
returns the expected value (true). The app intentionally doesn't check
bit 1, so it has no way to tell whether there was a race. This behavior
is aligned with what happens on an old kernel, which doesn't set bit 1 at
all.
A new userspace app can 1) check bit 0 to know whether a sync binder
transaction happened while being frozen - same as before; and 2) check
bit 1 to know whether that sync binder transaction happened exactly when
there was a race - new information for the rollback decision.
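A rough illustration of how userspace can consume the two bits described
above (the literal masks stand in for proper uapi constants; error
handling is omitted):

    struct binder_frozen_status_info info = { .pid = target_pid };

    if (ioctl(binder_fd, BINDER_GET_FROZEN_INFO, &info) == 0) {
            /* bit 0: a sync transaction was received (as before) */
            /* bit 1: that transaction raced with the freeze */
            if (info.sync_recv & 0x2) {
                    /* roll back the cgroup freeze and retry later */
            }
    }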
Fixes: 432ff1e91694 ("binder: BINDER_FREEZE ioctl")
Acked-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Li Li <dualli@google.com>
Test: stress test with apps being frozen and initiating binder calls at
the same time, confirmed the pending transactions succeeded.
Link: https://lore.kernel.org/r/20210910164210.2282716-2-dualli@chromium.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 198493121
(cherry picked from commit b564171ade70570b7f335fa8ed17adb28409e3ac
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git
char-misc-linus)
Change-Id: I488ba75056f18bb3094ba5007027b76b5caebec9
Add a per-transaction flag to indicate that the buffer
must be cleared when the transaction is complete to
prevent copies of sensitive data from being preserved
in memory.
Signed-off-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20201120233743.3617529-1-tkjos@google.com
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 171501513
Change-Id: Ic9338c85cbe3b11ab6f2bda55dce9964bb48447a
(cherry picked from commit 0f966cba95c78029f491b433ea95ff38f414a761)
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Releasing the procs lock while freezing a binder context allows other
processes to modify the process list while the scan is still ongoing.
Don't release the process locks during the scan operation; instead, store
matching processes in a dynamic array and process them in a later phase.
Signed-off-by: Marco Ballesio <balejs@google.com>
Bug: 176996063
Test: verified that all contexts are correctly frozen and unfrozen
Change-Id: Iea527e3b9188b04303f8b9b08b404e0c062a0189
binder_wait_for_work used to return -ERESTARTSYS if interrupted by a
signal. This error wasn't logged to avoid spamming the console. After
the return value changed to -EINTR to better conform to the kernel API,
the logging portion wasn't modified.
Filter EINTR from binder error logs.
Test: verified that the console isn't spammed anymore
Bug: 172330837
Signed-off-by: Marco Ballesio <balejs@google.com>
Change-Id: Ie7789bdbf5f0b3b0d55793d4f147c395de2c6641
binder freeze stops at the first context found for a given pid, but
multiple contexts are possible, with the result that a process might end
up with inconsistent context states after freezing or unfreezing its
binder.
Freeze or unfreeze all contexts in a process upon a BINDER_FREEZE
ioctl.
Bug: 176996063
Test: verified that all contexts in a specific process with multiple
binders are frozen or unfrozen.
Signed-off-by: Marco Ballesio <balejs@google.com>
Change-Id: If0822e078e830e9fde10cc17b99e39ec7cf358d5
Signed-off-by: Ruchit <risen@pixelexperience.org>
When interrupted by a signal, binder_wait_for_work currently returns
-ERESTARTSYS. This error code is usually restricted to the kernel.
Replace this instance of -ERESTARTSYS with -EINTR.
Bug: 143717177
Test: built, booted, interrupted a worker thread within
binder_wait_for_work
Signed-off-by: Marco Ballesio <balejs@google.com>
Change-Id: I0bd1be173e0a75c917399b773046e819babb9d4b
Signed-off-by: Ruchit <risen@pixelexperience.org>
User space needs to know if binder transactions occurred to frozen
processes. Introduce a new BINDER_GET_FROZEN_INFO ioctl and keep track of
transactions occurring to frozen processes. Also, allow async
transactions toward frozen processes and improve error handling.
Bug: 143717177
Test: atest testBinderLib
Signed-off-by: Marco Ballesio <balejs@google.com>
Change-Id: I9ee1c2e5fe3d4ab31fc1a137d840bd4cd38a8704
Signed-off-by: Ruchit <risen@pixelexperience.org>
The most common cause of the binder transaction buffer filling up is a
client rapidly firing oneway transactions into a process, before it has
a chance to handle them. Yet the root cause of this is often hard to
debug, because either the system or the app will stop, and by that time
binder debug information we dump in bugreports is no longer relevant.
This change warns as soon as a process dips below 80% of its oneway
space (less than 100kB available in the configuration), when any one
process is responsible for either more than 50 transactions, or more
than 50% of the oneway space.
Signed-off-by: Martijn Coenen <maco@android.com>
Signed-off-by: Martijn Coenen <maco@google.com>
Acked-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20200821122544.1277051-1-maco@android.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 261e7818f06ec51e488e007f787ccd7e77272918
git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git/
char-misc-next)
Bug: 147795659
Change-Id: Idc2b03ddc779880ca4716fdae47a70df43211f25
Signed-off-by: Ruchit <risen@pixelexperience.org>
Frozen tasks can't process binder transactions, so a way is required to
inform transmitting ends of communication failures due to the frozen
state of their receiving counterparts. Additionally, races are possible
between transitions to frozen state and binder transactions enqueued to
a specific process.
Implement BINDER_FREEZE ioctl for user space to inform the binder driver
about the intention to freeze or unfreeze a process. When the ioctl is
called, block the caller until any pending binder transactions toward
the target process are flushed. Return an error to transactions to
processes marked as frozen.
Bug: 180989544
(cherry picked from commit 15949c3cdd97bccdcd45c0c0f6c31058520b6494
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git char-misc-testing)
Co-developed-by: Todd Kjos <tkjos@google.com>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Marco Ballesio <balejs@google.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Li Li <dualli@google.com>
Link: https://lore.kernel.org/r/20210316011630.1121213-2-dualli@chromium.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: Ia1b5951cd99eeb98b59e06c3e27d59062dc725f6
commit c21a80ca0684ec2910344d72556c816cb8940c01 upstream.
This is a partial revert of commit
29bc22ac5e5b ("binder: use euid from cred instead of using task").
Setting sender_euid using proc->cred caused some Android system test
regressions that need further investigation. It is a partial
reversion because subsequent patches rely on proc->cred.
Fixes: 29bc22ac5e5b ("binder: use euid from cred instead of using task")
Cc: stable@vger.kernel.org # 4.4+
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Change-Id: I9b1769a3510fed250bb21859ef8beebabe034c66
Link: https://lore.kernel.org/r/20211112180720.2858135-1-tkjos@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 29bc22ac5e5bc63275e850f0c8fc549e3d0e306b upstream.
Save the 'struct cred' associated with a binder process
at initial open to avoid potential race conditions
when converting to an euid.
Set a transaction's sender_euid from the 'struct cred'
saved at binder_open() instead of looking up the euid
from the binder proc's 'struct task'. This ensures
the euid is associated with the security context of
the task that opened binder.
Cc: stable@vger.kernel.org # 4.4+
Fixes: 457b9a6f09 ("Staging: android: add binder driver")
Signed-off-by: Todd Kjos <tkjos@google.com>
Suggested-by: Stephen Smalley <stephen.smalley.work@gmail.com>
Suggested-by: Jann Horn <jannh@google.com>
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Change-Id: I91922e7f359df5901749f1b09094c3c68d45aed4
Bug: 200688826
Signed-off-by: Todd Kjos <tkjos@google.com>
commit d35d3660e065b69fdb8bf512f3d899f350afce52 upstream.
The binder driver makes the assumption that the proc->context pointer is
invariant after initialization (as documented in the kerneldoc header for
struct proc). However, in commit f0fe2c0f050d ("binder: prevent UAF for
binderfs devices II") proc->context is set to NULL during
binder_deferred_release().
Another proc was in the middle of setting up a transaction to the dying
process and crashed on a NULL pointer deref on "context", which is a
local variable set to &proc->context:
new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
Here's the stack:
[ 5237.855435] Call trace:
[ 5237.855441] binder_get_ref_for_node_olocked+0x100/0x2ec
[ 5237.855446] binder_inc_ref_for_node+0x140/0x280
[ 5237.855451] binder_translate_binder+0x1d0/0x388
[ 5237.855456] binder_transaction+0x2228/0x3730
[ 5237.855461] binder_thread_write+0x640/0x25bc
[ 5237.855466] binder_ioctl_write_read+0xb0/0x464
[ 5237.855471] binder_ioctl+0x30c/0x96c
[ 5237.855477] do_vfs_ioctl+0x3e0/0x700
[ 5237.855482] __arm64_sys_ioctl+0x78/0xa4
[ 5237.855488] el0_svc_common+0xb4/0x194
[ 5237.855493] el0_svc_handler+0x74/0x98
[ 5237.855497] el0_svc+0x8/0xc
The fix is to move the kfree of the binder_device to binder_free_proc()
so the binder_device is freed when we know there are no references
remaining on the binder_proc.
Fixes: f0fe2c0f050d ("binder: prevent UAF for binderfs devices II")
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20200622200715.114382-1-tkjos@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Change-Id: I933c938ea85889f77fb634bbed29a7cd74527dcc
This is a necessary follow-up to the first fix I proposed and we merged
in 2669b8b0c79 ("binder: prevent UAF for binderfs devices"). I had been
overly optimistic that the simple fix I proposed would work. But alas,
ihold() + iput() won't work since the inodes won't survive the
destruction of the superblock.
So all we get with my prior fix is a different race with a tinier race
window, but it doesn't solve the issue. Fwiw, the problem lies with
generic_shutdown_super(). It even has this cozy Al-style comment:
    if (!list_empty(&sb->s_inodes)) {
            printk("VFS: Busy inodes after unmount of %s. "
                   "Self-destruct in 5 seconds. Have a nice day...\n",
                   sb->s_id);
    }
On binder_release(), binder_defer_work(proc, BINDER_DEFERRED_RELEASE) is
called, which punts the actual cleanup operation to a workqueue. At some
point, binder_deferred_func() will be called, which will end up calling
binder_deferred_release(), which will retrieve and clean up the
binder_context attached to this struct binder_proc.
If we trace back where this binder_context is attached to binder_proc we
see that it is set in binder_open() and is taken from the struct
binder_device it is associated with. This obviously assumes that the
struct binder_device that context is attached to is _never_ freed. While
that might be true for devtmpfs binder devices it is most certainly
wrong for binderfs binder devices.
So, assume binder_open() is called on a binderfs binder device. We now
stash away the struct binder_context associated with that struct
binder_device:
    proc->context = &binder_dev->context;

    /* binderfs stashes devices in i_private */
    if (is_binderfs_device(nodp)) {
            binder_dev = nodp->i_private;
            info = nodp->i_sb->s_fs_info;
            binder_binderfs_dir_entry_proc = info->proc_log_dir;
    } else {
            ...

    proc->context = &binder_dev->context;
Now let's assume that the binderfs instance for that binder device is
shut down via umount() and/or the mount namespace associated with it goes
away. As long as there is still an fd open for that binderfs binder
device things are fine. But let's assume we now close the last fd for
that binderfs binder device. Now binder_release() is called and punts to
the workqueue. Assume that the workqueue has quite a bit of stuff to do
and doesn't get to cleaning up the struct binder_proc and the associated
struct binder_context with it for that binderfs binder device right
away. In the meantime, the VFS is killing the super block and is
ultimately calling sb->evict_inode() which means it will call
binderfs_evict_inode() which does:
    static void binderfs_evict_inode(struct inode *inode)
    {
            struct binder_device *device = inode->i_private;
            struct binderfs_info *info = BINDERFS_I(inode);

            clear_inode(inode);

            if (!S_ISCHR(inode->i_mode) || !device)
                    return;

            mutex_lock(&binderfs_minors_mutex);
            --info->device_count;
            ida_free(&binderfs_minors, device->miscdev.minor);
            mutex_unlock(&binderfs_minors_mutex);

            kfree(device->context.name);
            kfree(device);
    }
thereby freeing the struct binder_device including struct
binder_context.
Now the workqueue finally has time to get around to cleaning up struct
binder_proc and is now trying to access the associated struct
binder_context. Since it has already been freed, it will oops.
Fix this by introducing a refcount on binder devices.
This is an alternative fix to 51d8a7eca677 ("binder: prevent UAF read in
print_binder_transaction_log_entry()").
Fixes: 3ad20fe393b3 ("binder: implement binderfs")
Fixes: 2669b8b0c798 ("binder: prevent UAF for binderfs devices")
Fixes: 03e2e07e3814 ("binder: Make transaction_log available in binderfs")
Related: 51d8a7eca677 ("binder: prevent UAF read in print_binder_transaction_log_entry()")
Cc: stable@vger.kernel.org
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20200303164340.670054-1-christian.brauner@ubuntu.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit f0fe2c0f050d31babcad7d65f1d550d462a40064)
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I54a6c910002bf1077ba0c34c48fb96f4ffbf012e
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
On binder_release(), binder_defer_work(proc, BINDER_DEFERRED_RELEASE) is
called, which punts the actual cleanup operation to a workqueue. At some
point, binder_deferred_func() will be called, which will end up calling
binder_deferred_release(), which will retrieve and clean up the
binder_context attached to this struct binder_proc.
If we trace back where this binder_context is attached to binder_proc we
see that it is set in binder_open() and is taken from the struct
binder_device it is associated with. This obviously assumes that the
struct binder_device that context is attached to is _never_ freed. While
that might be true for devtmpfs binder devices it is most certainly
wrong for binderfs binder devices.
So, assume binder_open() is called on a binderfs binder device. We now
stash away the struct binder_context associated with that struct
binder_device:
    proc->context = &binder_dev->context;

    /* binderfs stashes devices in i_private */
    if (is_binderfs_device(nodp)) {
            binder_dev = nodp->i_private;
            info = nodp->i_sb->s_fs_info;
            binder_binderfs_dir_entry_proc = info->proc_log_dir;
    } else {
            ...

    proc->context = &binder_dev->context;
Now let's assume that the binderfs instance for that binder device is
shut down via umount() and/or the mount namespace associated with it goes
away. As long as there is still an fd open for that binderfs binder
device things are fine. But let's assume we now close the last fd for
that binderfs binder device. Now binder_release() is called and punts to
the workqueue. Assume that the workqueue has quite a bit of stuff to do
and doesn't get to cleaning up the struct binder_proc and the associated
struct binder_context with it for that binderfs binder device right
away. In the meantime, the VFS is killing the super block and is
ultimately calling sb->evict_inode() which means it will call
binderfs_evict_inode() which does:
    static void binderfs_evict_inode(struct inode *inode)
    {
            struct binder_device *device = inode->i_private;
            struct binderfs_info *info = BINDERFS_I(inode);

            clear_inode(inode);

            if (!S_ISCHR(inode->i_mode) || !device)
                    return;

            mutex_lock(&binderfs_minors_mutex);
            --info->device_count;
            ida_free(&binderfs_minors, device->miscdev.minor);
            mutex_unlock(&binderfs_minors_mutex);

            kfree(device->context.name);
            kfree(device);
    }
thereby freeing the struct binder_device including struct
binder_context.
Now the workqueue finally has time to get around to cleaning up struct
binder_proc and is now trying to access the associated struct
binder_context. Since it has already been freed, it will oops.
Fix this by holding an additional reference to the inode that is only
released once the workqueue is done cleaning up struct binder_proc. This
is an easy alternative to introducing separate refcounting on struct
binder_device, which we can always do later if it becomes necessary.
This is an alternative fix to 51d8a7eca677 ("binder: prevent UAF read in
print_binder_transaction_log_entry()").
Fixes: 3ad20fe393b3 ("binder: implement binderfs")
Fixes: 03e2e07e3814 ("binder: Make transaction_log available in binderfs")
Related: 51d8a7eca677 ("binder: prevent UAF read in print_binder_transaction_log_entry()")
Cc: stable@vger.kernel.org
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 2669b8b0c798fbe1a31d49e07aa33233d469ad9b)
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I047a1e360b4146872bbc1d206dce7a864bb4588b
commit adb9743d6a08778b78d62d16b4230346d3508986 upstream.
In binder_init(), the work done by binder_alloc_shrinker_init() is not
undone in the error path, which will cause memory leaks. So this commit
introduces binder_alloc_shrinker_exit() and calls it in the error path to
fix that.
Change-Id: I688fc93203ef375724b6d665171bc48178460da9
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Acked-by: Carlos Llamas <cmllamas@google.com>
Fixes: f2517eb76f ("android: binder: Add global lru shrinker to binder")
Cc: stable <stable@kernel.org>
Link: https://lore.kernel.org/r/20230625154937.64316-1-qi.zheng@linux.dev
[cmllamas: resolved trivial merge conflicts]
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
A transaction of type BINDER_TYPE_WEAK_HANDLE can fail to increment the
reference for a node. In this case, the target proc normally releases
the failed reference upon close as expected. However, if the target is
dying in parallel the call will race with binder_deferred_release(), so
the target could have released all of its references by now leaving the
cleanup of the new failed reference unhandled.
The transaction then ends and the target proc gets released making the
ref->proc now a dangling pointer. Later on, ref->node is closed and we
attempt to take spin_lock(&ref->proc->inner_lock), which leads to the
use-after-free bug reported below. Let's fix this by cleaning up the
failed reference on the spot instead of relying on the target to do so.
==================================================================
BUG: KASAN: use-after-free in _raw_spin_lock+0xa8/0x150
Write of size 4 at addr ffff5ca207094238 by task kworker/1:0/590
CPU: 1 PID: 590 Comm: kworker/1:0 Not tainted 5.19.0-rc8 #10
Hardware name: linux,dummy-virt (DT)
Workqueue: events binder_deferred_func
Call trace:
dump_backtrace.part.0+0x1d0/0x1e0
show_stack+0x18/0x70
dump_stack_lvl+0x68/0x84
print_report+0x2e4/0x61c
kasan_report+0xa4/0x110
kasan_check_range+0xfc/0x1a4
__kasan_check_write+0x3c/0x50
_raw_spin_lock+0xa8/0x150
binder_deferred_func+0x5e0/0x9b0
process_one_work+0x38c/0x5f0
worker_thread+0x9c/0x694
kthread+0x188/0x190
ret_from_fork+0x10/0x20
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Acked-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Bug: 239630375
Link: https://lore.kernel.org/all/20220801182511.3371447-1-cmllamas@google.com/
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Change-Id: I5085dd0dc805a780a64c057e5819f82dd8f02868
(cherry picked from commit ae3fa5d16a02ba7c7b170e0e1ab56d6f0ba33964)
commit cfd0d84ba28c18b531648c9d4a35ecca89ad9901 upstream.
In 4.13, commit 74310e06be ("android: binder: Move buffer out of area shared with user space")
fixed a kernel structure visibility issue. As part of that patch,
sizeof(void *) was used as the buffer size for 0-length data payloads so
the driver could detect abusive clients sending 0-length asynchronous
transactions to a server by enforcing limits on async_free_size.
Unfortunately, on the "free" side, the accounting of async_free_space
did not add the sizeof(void *) back. The result was that up to 8 bytes of
async_free_space were leaked on every async transaction of 8 bytes or
less. These small transactions are uncommon, so this accounting issue
has gone undetected for several years.
The fix is to use "buffer_size" (the allocated buffer size) instead of
"size" (the logical buffer size) when updating the async_free_space
during the free operation. These are the same except for this
corner case of asynchronous transactions with payloads < 8 bytes.
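In diff form, the accounting fix is essentially (sketch; surrounding code
in the free path omitted):

    -       alloc->free_async_space += size + sizeof(struct binder_buffer);
    +       alloc->free_async_space += buffer_size + sizeof(struct binder_buffer);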
Fixes: 74310e06be ("android: binder: Move buffer out of area shared with user space")
Signed-off-by: Todd Kjos <tkjos@google.com>
Cc: stable@vger.kernel.org # 4.14+
Link: https://lore.kernel.org/r/20211220190150.2107077-1-tkjos@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a880b28a71e39013e357fd3adccd1d8a31bc69a8 upstream.
wake_up_poll() uses nr_exclusive=1, so it's not guaranteed to wake up
all exclusive waiters. Yet, POLLFREE *must* wake up all waiters. epoll
and aio poll are fortunately not affected by this, but it's very
fragile. Thus, the new function wake_up_pollfree() has been introduced.
Convert binder to use wake_up_pollfree().
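The conversion itself boils down to a change of the following form in the
thread teardown path (sketch):

    -       wake_up_poll(&thread->wait, EPOLLHUP | POLLFREE);
    +       wake_up_pollfree(&thread->wait);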
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: f5cb779ba163 ("ANDROID: binder: remove waitqueue when thread exits.")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20211209010455.42744-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This is a partial revert of commit
29bc22ac5e5b ("binder: use euid from cred instead of using task").
Setting sender_euid using proc->cred caused some Android system test
regressions that need further investigation. It is a partial
reversion because subsequent patches rely on proc->cred.
Fixes: 29bc22ac5e5b ("binder: use euid from cred instead of using task")
Cc: stable@vger.kernel.org # 4.4+
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Change-Id: I9b1769a3510fed250bb21859ef8beebabe034c66
Link: https://lore.kernel.org/r/20211112180720.2858135-1-tkjos@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bug: 200688826
(cherry picked from commit c21a80ca0684ec2910344d72556c816cb8940c01
git: //git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git char-misc-linus)
Signed-off-by: Todd Kjos <tkjos@google.com>
commit 52f88693378a58094c538662ba652aff0253c4fe upstream.
Since binder was integrated with selinux, it has passed
'struct task_struct' associated with the binder_proc
to represent the source and target of transactions.
The conversion of task to SID was then done in the hook
implementations. It turns out that there are race conditions
which can result in an incorrect security context being used.
Fix by using the 'struct cred' saved during binder_open() and passing
it to the selinux subsystem.
Cc: stable@vger.kernel.org # 5.14 (need backport for earlier stables)
Fixes: 79af73079d ("Add security hooks to binder and implement the hooks for SELinux.")
Suggested-by: Jann Horn <jannh@google.com>
Signed-off-by: Todd Kjos <tkjos@google.com>
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Change-Id: Id7157515d2b08f11683aeb8ad9b8f1da075d34e7
Bug: 200688826
[ tkjos@ fixed minor conflict ]
Signed-off-by: Todd Kjos <tkjos@google.com>
commit 29bc22ac5e5bc63275e850f0c8fc549e3d0e306b upstream.
Save the 'struct cred' associated with a binder process
at initial open to avoid potential race conditions
when converting to an euid.
Set a transaction's sender_euid from the 'struct cred'
saved at binder_open() instead of looking up the euid
from the binder proc's 'struct task'. This ensures
the euid is associated with the security context of
the task that opened binder.
Cc: stable@vger.kernel.org # 4.4+
Fixes: 457b9a6f09 ("Staging: android: add binder driver")
Signed-off-by: Todd Kjos <tkjos@google.com>
Suggested-by: Stephen Smalley <stephen.smalley.work@gmail.com>
Suggested-by: Jann Horn <jannh@google.com>
Acked-by: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Change-Id: I91922e7f359df5901749f1b09094c3c68d45aed4
Bug: 200688826
Signed-off-by: Todd Kjos <tkjos@google.com>
When releasing a thread's todo list while tearing down
a binder_proc, the following race was possible, which
could result in a use-after-free:
1. Thread 1: enter binder_release_work from binder_thread_release
2. Thread 2: binder_update_ref_for_handle() -> binder_dec_node_ilocked()
3. Thread 2: dec nodeA --> 0 (will free node)
4. Thread 1: ACQ inner_proc_lock
5. Thread 2: block on inner_proc_lock
6. Thread 1: dequeue work (BINDER_WORK_NODE, part of nodeA)
7. Thread 1: REL inner_proc_lock
8. Thread 2: ACQ inner_proc_lock
9. Thread 2: todo list cleanup, but work was already dequeued
10. Thread 2: free node
11. Thread 2: REL inner_proc_lock
12. Thread 1: deref w->type (UAF)
The problem was that for a BINDER_WORK_NODE, the binder_work element
must not be accessed after releasing the inner_proc_lock while
processing the todo list elements since another thread might be
handling a deref on the node containing the binder_work element
leading to the node being freed.
Signed-off-by: Todd Kjos <tkjos@google.com>
Link: https://lore.kernel.org/r/20201009232455.4054810-1-tkjos@google.com
Cc: <stable@vger.kernel.org> # 4.14, 4.19, 5.4, 5.8
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit f3277cbfba763cd2826396521b9296de67cf1bbc)
Change-Id: I7c1bf0b74824f272664e76206c5dc3b66b9eeaff
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
This reverts commit c5665cafbedd2e2a523fe933e452391a02d3adb3.
This patch was causing display hangs for Qualcomm after the 5.4.58
merge.
Bug: 166779391
Change-Id: Iaf22ede68247422709b00f059e5c4d517f219adf
Signed-off-by: Todd Kjos <tkjos@google.com>
commit 4b836a1426cb0f1ef2a6e211d7e553221594f8fc upstream.
Binder is designed such that a binder_proc never has references to
itself. If this rule is violated, memory corruption can occur when a
process sends a transaction to itself; see e.g.
<https://syzkaller.appspot.com/bug?extid=09e05aba06723a94d43d>.
There is a remaining edge case through which such a transaction-to-self
can still occur from the context of a task with BINDER_SET_CONTEXT_MGR
access:
- task A opens /dev/binder twice, creating binder_proc instances P1
and P2
- P1 becomes context manager
- P2 calls ACQUIRE on the magic handle 0, allocating index 0 in its
handle table
- P1 dies (by closing the /dev/binder fd and waiting a bit)
- P2 becomes context manager
- P2 calls ACQUIRE on the magic handle 0, allocating index 1 in its
handle table
[this triggers a warning: "binder: 1974:1974 tried to acquire
reference to desc 0, got 1 instead"]
- task B opens /dev/binder once, creating binder_proc instance P3
- P3 calls P2 (via magic handle 0) with (void*)1 as argument (two-way
transaction)
- P2 receives the handle and uses it to call P3 (two-way transaction)
- P3 calls P2 (via magic handle 0) (two-way transaction)
- P2 calls P2 (via handle 1) (two-way transaction)
And then, if P2 does *NOT* accept the incoming transaction work, but
instead closes the binder fd, we get a crash.
Solve it by preventing the context manager from using ACQUIRE on ref 0.
There shouldn't be any legitimate reason for the context manager to do
that.
Additionally, print a warning if someone manages to find another way to
trigger a transaction-to-self bug in the future.
Cc: stable@vger.kernel.org
Fixes: 457b9a6f09 ("Staging: android: add binder driver")
Acked-by: Todd Kjos <tkjos@google.com>
Signed-off-by: Jann Horn <jannh@google.com>
Reviewed-by: Martijn Coenen <maco@android.com>
Link: https://lore.kernel.org/r/20200727120424.1627555-1-jannh@google.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
When FUSE passthrough is used, the lower file system file is manipulated
directly, but the mtime, atime, and ctime of the referencing FUSE file
are not updated.
Fix by updating the file times when passthrough operations are
performed.
Bug: 200779468
Bug: 201730208
Reported-by: Fengnan Chang <changfengnan@vivo.com>
Reported-by: Ed Tsai <ed.tsai@mediatek.com>
Signed-off-by: Alessio Balsini <balsini@google.com>
Change-Id: I35b72196b2cc1d79a9f62ddb32e2cfa934c3b6d3
With commit f8425c939663 ("fuse: 32-bit user space ioctl compat for fuse
device") the matching constraints for the FUSE_DEV_IOC_CLONE ioctl command
are relaxed, limited to the testing of command type and number. As Arnd
noticed, this is wrong as it wouldn't ensure the correctness of the data
size or direction for the received FUSE device ioctl.
Fix by bringing back the comparison of the ioctl received by the FUSE
device to the originally generated FUSE_DEV_IOC_CLONE.
Fixes: f8425c939663 ("fuse: 32-bit user space ioctl compat for fuse device")
Reported-by: Arnd Bergmann <arnd@kernel.org>
Signed-off-by: Alessio Balsini <balsini@android.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: I372d8399db6d603ba20ef50528acf6645e4d3c66
(cherry picked from commit 6076f5f341e612152879bfda99f0b76c1953bf0b)
The initial FUSE passthrough interface has the issue of introducing an
ioctl which receives as a parameter a data structure containing a
pointer. Depending on the architecture, the size of this struct changes,
and especially for 32-bit userspace running on a 64-bit kernel, the size
mismatch results in a single ioctl whose behavior depends on the data
that is passed (e.g., with an enum). This is just poor ioctl design, as
mentioned by Arnd Bergmann [1].
Introduce the new FUSE_PASSTHROUGH_OPEN ioctl which only gets the fd of
the lower file system, which is a fixed-size __u32, dropping the
confusing fuse_passthrough_out data structure.
[1] https://lore.kernel.org/lkml/CAK8P3a2K2FzPvqBYL9W=Yut58SFXyetXwU4Fz50G5O3TsS0pPQ@mail.gmail.com/
Bug: 175195837
Signed-off-by: Alessio Balsini <balsini@google.com>
Change-Id: I486d71cbe20f3c0c87544fa75da4e2704fe57c7c
If the system doesn't have enough memory when fuse_passthrough_read_iter
is invoked for asynchronous IO, an error is returned directly without
restoring the caller's credentials.
Fix by always ensuring the credentials are restored.
Fixes: aa29f32988c1f84c96e2457b049dea437601f2cc ("FROMLIST: fuse: Use daemon creds in passthrough mode")
Link: https://lore.kernel.org/lkml/YB0qPHVORq7bJy6G@google.com/
Reported-by: Peng Tao <bergwolf@gmail.com>
Signed-off-by: Alessio Balsini <balsini@android.com>
Signed-off-by: Alessio Balsini <balsini@google.com>
Change-Id: I4aff43f5dd8ddab2cc8871cd9f81438963ead5b6
Enabling FUSE passthrough for mmap-ed operations not only affects
performance, but has also been shown to be mandatory for the correct
functioning of FUSE passthrough.
yanwu noticed [1] that a FUSE file with passthrough enabled may suffer
data inconsistencies if the same file is also accessed with mmap. What
happens is that read/write operations are applied directly to the lower
file system (and its cache), while mmap-ed operations affect the FUSE
cache.
Extend the FUSE passthrough implementation to also handle memory-mapped
FUSE files, to both fix the cache inconsistencies and extend the
passthrough performance benefits to mmap-ed operations.
[1] https://lore.kernel.org/lkml/20210119110654.11817-1-wu-yan@tcl.com/
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-9-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: Ifad4698b0380f6e004c487940ac6907b9a9f2964
Signed-off-by: Alessio Balsini <balsini@google.com>
When using FUSE passthrough, read/write operations are directly
forwarded to the lower file system file through VFS, but there is no
guarantee that the process that is triggering the request has the right
permissions to access the lower file system. This would cause the
read/write access to fail.
In passthrough file systems, where the FUSE daemon is responsible for
the enforcement of the lower file system access policies, it often
happens that the process dealing with the FUSE file system doesn't have
access to the lower file system.
Since the FUSE daemon is in charge of implementing the FUSE file
operations - which in the case of read/write operations usually simply
amounts to copying memory buffers from/to the lower file system - these
operations are executed with the FUSE daemon's privileges.
This patch adds a reference to the FUSE daemon's credentials, taken at
FUSE_DEV_IOC_PASSTHROUGH_OPEN ioctl() time, so that they can be used
to temporarily raise the caller's credentials when accessing lower file
system files in passthrough.
The process accessing the FUSE file with passthrough enabled temporarily
receives the privileges of the FUSE daemon while performing read/write
operations. Similar behavior is implemented in overlayfs.
These privileges will be reverted as soon as the IO operation completes.
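A rough sketch of the credential switch around a passthrough read
(kernel-style; the ff->passthrough naming and the surrounding call are
assumptions for illustration):

    const struct cred *old_cred;

    /* perform the lower file system I/O with the daemon's credentials */
    old_cred = override_creds(ff->passthrough.cred);
    ret = vfs_iter_read(lower_file, iter, &pos, flags);
    revert_creds(old_cred); /* restore the caller's credentials */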
This feature does not provide any higher security privileges to those
processes accessing the FUSE file system with passthrough enabled. This
is because the FUSE daemon is still responsible for deciding whether to
enable the passthrough feature at file open time, and should enable it
only after the appropriate access policy checks.
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-8-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: Idb4f03a2ce7c536691e5eaf8fadadfcf002e1677
Signed-off-by: Alessio Balsini <balsini@google.com>
Extend the passthrough feature by handling asynchronous IO both for read
and write operations.
When an AIO request is received, if the request targets a FUSE file with
the passthrough functionality enabled, a new identical AIO request is
created. The new request targets the lower file system file and gets
assigned a special FUSE passthrough AIO completion callback.
When the lower file system AIO request is completed, the FUSE
passthrough AIO completion callback is executed and propagates the
completion signal to the FUSE AIO request by triggering its completion
callback as well.
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-7-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: I47671ef36211102da6dd3ee8b2f226d1e6cd9d5c
Signed-off-by: Alessio Balsini <balsini@google.com>
All the read and write operations performed on fuse_files which have the
passthrough feature enabled are forwarded to the associated lower file
system file via VFS.
Sending the request directly to the lower file system avoids the
userspace round-trip which, because of possible context switches and
additional operations, might reduce the overall performance, especially
in those cases where caching doesn't help, for example reads at
random offsets.
Whether a fuse_file has an associated lower file system file can be
verified by checking the validity of its passthrough_filp pointer.
This pointer is non-NULL only if passthrough has been successfully
enabled via the appropriate ioctl().
When a read/write operation is requested for a FUSE file with
passthrough enabled, a new equivalent VFS request is generated, which
instead targets the lower file system file.
The VFS layer performs additional checks that allow for safer operations
but may cause the operation to fail if the process accessing the FUSE
file system does not have access to the lower file system.
This change only implements synchronous requests in passthrough,
returning an error in the case of asynchronous operations, yet covering
the majority of the use cases.
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-6-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: Ifbe6a247fe7338f87d078fde923f0252eeaeb668
Signed-off-by: Alessio Balsini <balsini@google.com>
Implement the FUSE passthrough ioctl that associates the lower
(passthrough) file system file with the fuse_file.
The file descriptor passed to the ioctl by the FUSE daemon is used to
access the corresponding file pointer, which is then stored in the
fuse_file data structure to consolidate the link between the FUSE and
lower file systems.
To enable the passthrough mode, user space triggers the
FUSE_DEV_IOC_PASSTHROUGH_OPEN ioctl and, if the call succeeds, receives
back an identifier that will be used at open/create response time in the
fuse_open_out field to associate the FUSE file to the lower file system
file.
The value returned by the ioctl to user space can be:
- > 0: success, the identifier can be used as part of an open/create
reply.
- <= 0: an error occurred.
The value 0 represents an error to preserve backward compatibility: the
fuse_open_out field that is used to pass the passthrough_fh back to the
kernel uses bits that previously served as struct padding, and is
commonly zero-initialized (e.g., in the libfuse implementation).
Excluding 0 from the valid values removes the ambiguity between a real
passthrough_fh, a missing implementation of FUSE passthrough, and a
request for a normal FUSE file, simplifying the user space
implementation.
For the passthrough mode to be successfully activated, the lower file
system file must implement both the read_iter and write_iter file
operations. This extra check prevents special pseudo files from being
targeted for this feature.
Passthrough comes with another limitation: no further file system
stacking is allowed for those FUSE file systems using passthrough.
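Illustrative daemon-side flow (uapi names as introduced by these patches,
but abbreviated and with error handling omitted):

    /* while handling FUSE_OPEN for a file in the daemon */
    int lower_fd = open("/lower/path/file", O_RDWR);
    __u32 fd_arg = lower_fd;
    int id = ioctl(fuse_dev_fd, FUSE_DEV_IOC_PASSTHROUGH_OPEN, &fd_arg);

    if (id > 0)
            open_out.passthrough_fh = id;   /* sent back in the open reply */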
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-5-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: I4d8290012302fb4547bce9bb261a03cc4f66b5aa
Signed-off-by: Alessio Balsini <balsini@google.com>
Expose the FUSE_PASSTHROUGH interface to user space and declare all the
basic data structures and functions as the skeleton on top of which the
FUSE passthrough functionality will be built.
As part of this, introduce the new FUSE passthrough ioctl, which allows
the FUSE daemon to specify a direct connection between a FUSE file and a
lower file system file. Such ioctl requires user space to pass the file
descriptor of one of its opened files through the fuse_passthrough_out
data structure introduced in this patch. This structure includes extra
fields for possible future extensions.
Also, add the passthrough functions for the set-up and tear-down of the
data structures and locks that will be used both when fuse_conns and
fuse_files are created/deleted.
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-4-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: I732532581348adadda5b5048a9346c2b0868d539
Signed-off-by: Alessio Balsini <balsini@google.com>
With a 64-bit kernel build the FUSE device cannot handle ioctl requests
coming from 32-bit user space.
This is due to the ioctl command translation, which generates different
command identifiers that therefore cannot be used for direct comparisons
without proper manipulation.
Explicitly extract type and number from the ioctl command to enable
32-bit user space compatibility on 64-bit kernel builds.
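A sketch of the extraction described above (this relaxed matching was
later tightened again; see the FUSE_DEV_IOC_CLONE fix earlier in this
series):

    /* compare only type and number, ignoring the size/direction bits
     * that differ between 32-bit and 64-bit userspace */
    if (_IOC_TYPE(cmd) == _IOC_TYPE(FUSE_DEV_IOC_CLONE) &&
        _IOC_NR(cmd)  == _IOC_NR(FUSE_DEV_IOC_CLONE)) {
            /* handle the clone request */
    }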
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-3-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: I595517c54d551be70e83c7fcb4b62397a3615004
Signed-off-by: Alessio Balsini <balsini@google.com>
OverlayFS implements its own function to translate iocb flags into rw
flags, so that they can be passed into another vfs call.
With commit ce71bfea207b4 ("fs: align IOCB_* flags with RWF_* flags")
Jens created a 1:1 matching between the iocb flags and rw flags,
simplifying the conversion.
Reduce the OverlayFS code by making the flag conversion function generic
and reusable.
Bug: 179164095
Link: https://lore.kernel.org/lkml/20210125153057.3623715-2-balsini@android.com/
Signed-off-by: Alessio Balsini <balsini@android.com>
Change-Id: I74aefeafd6ebbda2fbabee9024474dfe4cc6c2a7
Signed-off-by: Alessio Balsini <balsini@google.com>
We have a set of flags that are shared between the two and inherited
in kiocb_set_rw_flags(), but we check and set these individually.
Reorder the IOCB flags so that the bottom part of the space is synced
with the RWF flag space, and then we can do them all in one mask and
set operation.
The only exception is RWF_SYNC, which needs to mark IOCB_SYNC and
IOCB_DSYNC. Do that one separately.
This shaves 15 bytes of text from kiocb_set_rw_flags() for me.
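The resulting shape of kiocb_set_rw_flags() is roughly the following (a
sketch assuming the aligned flag layout; surrounding validation omitted):
  kiocb_flags |= (__force int)(flags & RWF_SUPPORTED); /* one mask-and-set */
  if (flags & RWF_SYNC)
          kiocb_flags |= IOCB_DSYNC;  /* masking gave IOCB_SYNC; add DSYNC */
  ki->ki_flags |= kiocb_flags;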
(cherry picked from commit ce71bfea207b4d7c21d36f24ec37618ffcea1da8)
Suggested-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Change-Id: Ib6316ae5cb3f8a14fabef5492e79783c9e6d3c4d
Signed-off-by: Alessio Balsini <balsini@google.com>
This is the per-I/O equivalent of O_APPEND to support atomic append
operations on any open file.
If a file is opened with O_APPEND, pwrite() ignores the offset and
always appends data to the end of the file. RWF_APPEND enables atomic
append and pwrite() with offset on a single file descriptor.
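For example, user space can mix appends and positional writes on the same
descriptor via pwritev2(); buf, len and fd are assumed to be set up
elsewhere:
  struct iovec iov = { .iov_base = buf, .iov_len = len };
  /* Always appends, regardless of the offset argument and of O_APPEND. */
  ssize_t ret = pwritev2(fd, &iov, 1, 0, RWF_APPEND);
  /* A plain pwrite()/pwritev() on the same fd still writes at its offset. */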
Change-Id: I18cb7fa871e6b55bfe7890a633a4014135bf361e
Signed-off-by: Jürg Billeter <j@bitron.ch>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
We don't currently allow lookups into a devmap from eBPF, because the map
lookup returns a pointer directly to the dev->ifindex, which shouldn't be
modifiable from eBPF.
However, being able to do lookups in devmaps is useful to know (e.g.)
whether forwarding to a specific interface is enabled. Currently, programs
work around this by keeping a shadow map of another type which indicates
whether a map index is valid.
Since we now have a flag to make maps read-only from the eBPF side, we can
simply lift the lookup restriction if we make sure this flag is always set.
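With the restriction lifted, an XDP program can consult the devmap directly
instead of keeping a shadow map. A sketch using the modern libbpf BTF-style
map definition (the map here is keyed and valued by plain ifindex entries):
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_DEVMAP);
          __uint(max_entries, 64);
          __type(key, __u32);
          __type(value, __u32);
  } fwd_map SEC(".maps");

  SEC("xdp")
  int fwd_if_enabled(struct xdp_md *ctx)
  {
          __u32 key = ctx->ingress_ifindex;

          /* Read-only lookup: a non-NULL entry means forwarding for this
           * slot is configured. */
          if (bpf_map_lookup_elem(&fwd_map, &key))
                  return bpf_redirect_map(&fwd_map, key, 0);
          return XDP_PASS;
  }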
Change-Id: I42b1430605c6837710fd903a0c8abf2c7dc13f16
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
A common pattern when using xdp_redirect_map() is to create a device map
where the lookup key is simply ifindex. Because device maps are arrays,
this leaves holes in the map, and the map has to be sized to fit the
largest ifindex, regardless of how many devices are actually needed in
the map.
This patch adds a second type of device map where the key is looked up
using a hashmap, instead of being used as an array index. This allows maps
to be densely packed, so they can be smaller.
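A sketch of the hash-based variant keyed directly by ifindex (same BTF-style
definition as the earlier devmap example; bpf_redirect_map() and lookups
work unchanged on it):
  struct {
          __uint(type, BPF_MAP_TYPE_DEVMAP_HASH);
          __uint(max_entries, 8);    /* ~number of devices, not max ifindex */
          __type(key, __u32);        /* ifindex */
          __type(value, __u32);      /* ifindex to redirect to */
  } tx_port SEC(".maps");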
Change-Id: I6155de499a47fb45bac1a39319f0ad979032fd6d
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
"Post-hooks" are hooks that are called right before returning from
sys_bind. At this time IP and port are already allocated and no further
changes to `struct sock` can happen before returning from sys_bind, but a
BPF program has a chance to inspect the socket and change the sys_bind
result.
Specifically it can e.g. inspect what port was allocated and if it
doesn't satisfy some policy, the BPF program can force sys_bind to fail
and return EPERM to the user.
Another example of usage is recording the IP:port pair to some map to
use it in later calls to sys_connect. E.g. if some TCP server inside
cgroup was bound to some IP:port_n, it can be recorded to a map. And
later when some TCP client inside same cgroup is trying to connect to
127.0.0.1:port_n, BPF hook for sys_connect can override the destination
and connect application to IP:port_n instead of 127.0.0.1:port_n. That
helps force all applications inside a cgroup to use the desired IP and
not break those applications if they e.g. use localhost to communicate
with each other.
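A sketch of such a port policy program; the libbpf section name and the
field access follow common usage and are assumptions here. Returning 0
rejects the bind (sys_bind fails with EPERM), returning 1 allows it:
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("cgroup/post_bind4")
  int post_bind_policy(struct bpf_sock *sk)
  {
          /* The port is already allocated at this point; src_port is in
           * host byte order in struct bpf_sock. */
          if (sk->src_port < 1024)
                  return 0;       /* force sys_bind to fail with -EPERM */
          return 1;               /* accept the bind */
  }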
== Implementation details ==
Post-hooks are implemented as two new attach types
`BPF_CGROUP_INET4_POST_BIND` and `BPF_CGROUP_INET6_POST_BIND` for
existing prog type `BPF_PROG_TYPE_CGROUP_SOCK`.
Separate attach types for IPv4 and IPv6 are introduced to avoid access
to IPv6 field in `struct sock` from `inet_bind()` and to IPv4 field from
`inet6_bind()` since those fields might not make sense in such cases.
Change-Id: Ibef21eed069c37684321b2401e5bb52f689ab8e7
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
== The problem ==
See description of the problem in the initial patch of this patch set.
== The solution ==
The patch provides a much more reliable in-kernel solution for the 2nd
part of the problem: making an outgoing connection from a desired IP.
It adds new attach types `BPF_CGROUP_INET4_CONNECT` and
`BPF_CGROUP_INET6_CONNECT` for program type
`BPF_PROG_TYPE_CGROUP_SOCK_ADDR` that can be used to override both
source and destination of a connection at connect(2) time.
The local end of a connection can be bound to a desired IP using the
newly introduced BPF helper `bpf_bind()`. It only allows binding to an
IP, though, and doesn't support binding to a port, i.e. it leverages the
`IP_BIND_ADDRESS_NO_PORT` socket option. There are two reasons for this:
* looking for a free port is expensive and can affect performance
significantly;
* there is no use-case for port.
As for remote end (`struct sockaddr *` passed by user), both parts of it
can be overridden, remote IP and remote port. It's useful if an
application inside cgroup wants to connect to another application inside
same cgroup or to itself, but knows nothing about IP assigned to the
cgroup.
Support is added for IPv4 and IPv6, for TCP and UDP.
IPv4 and IPv6 have separate attach types for the same reason as the
sys_bind hooks, i.e. to prevent reading from / writing to e.g. user_ip6
fields when the user passes a sockaddr_in, since it'd be out-of-bounds.
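A sketch of a connect hook that pins the source IP with bpf_bind() and
optionally rewrites the destination; the addresses are example values only:
  #include <linux/bpf.h>
  #include <linux/in.h>
  #include <sys/socket.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_endian.h>

  SEC("cgroup/connect4")
  int connect_v4(struct bpf_sock_addr *ctx)
  {
          struct sockaddr_in sa = {
                  .sin_family = AF_INET,
                  .sin_addr.s_addr = bpf_htonl(0x0a000001), /* 10.0.0.1 */
                  /* sin_port left 0: bpf_bind() binds the IP only
                   * (IP_BIND_ADDRESS_NO_PORT semantics) */
          };

          /* Pin the local end of the connection to the cgroup's IP. */
          if (bpf_bind(ctx, (struct sockaddr *)&sa, sizeof(sa)))
                  return 0;               /* fail connect() with -EPERM */

          /* The remote end can be rewritten as well, e.g. redirect
           * localhost to the IP recorded for this cgroup. */
          if (ctx->user_ip4 == bpf_htonl(0x7f000001))
                  ctx->user_ip4 = bpf_htonl(0x0a000001);

          return 1;
  }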
== Implementation notes ==
The patch introduces new field in `struct proto`: `pre_connect` that is
a pointer to a function with same signature as `connect` but is called
before it. The reason is in some cases BPF hooks should be called way
before control is passed to `sk->sk_prot->connect`. Specifically
`inet_dgram_connect` autobinds socket before calling
`sk->sk_prot->connect` and there is no way to call `bpf_bind()` from
hooks from e.g. `ip4_datagram_connect` or `ip6_datagram_connect` since
it'd cause double-bind. On the other hand `proto.pre_connect` provides a
flexible way to add BPF hooks for connect only for necessary `proto` and
call them at the desired time before `connect`. Since `bpf_bind()` is
allowed to bind only to an IP and autobind in `inet_dgram_connect` binds
only the port, there is no chance of a double bind.
bpf_bind() sets `force_bind_address_no_port` to bind only the IP,
regardless of the value of the `bind_address_no_port` socket field.
bpf_bind() sets `with_lock` to `false` when calling __inet_bind() and
__inet6_bind(), since all call sites where bpf_bind() is called already
hold the socket lock.
Change-Id: I03eb513369c630b203466621d1fbdb9b29c8333c
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Refactor `bind()` code to make it ready to be called from BPF helper
function `bpf_bind()` (will be added soon). Implementation of
`inet_bind()` and `inet6_bind()` is separated into `__inet_bind()` and
`__inet6_bind()` correspondingly. These functions can be used from both
`sk_prot->bind` and `bpf_bind()` contexts.
New functions have two additional arguments.
`force_bind_address_no_port` forces binding to the IP only, w/o checking
the `inet_sock.bind_address_no_port` field. It'll allow binding the local
end of a connection to a desired IP in `bpf_bind()` w/o changing the
`bind_address_no_port` field of a socket. It's useful since `bpf_bind()`
can return an error and we'd need to restore the original value of
`bind_address_no_port` in that case if we had changed it before calling
the helper.
`with_lock` specifies whether to lock socket when working with `struct
sk` or not. The argument is set to `true` for `sk_prot->bind`, i.e. old
behavior is preserved. But it will be set to `false` for `bpf_bind()`
use-case. The reason is all call-sites, where `bpf_bind()` will be
called, already hold that socket lock.
Change-Id: I3cd102acdb2b3c14946ef8452fd7afb763e8215f
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
== The problem ==
There is a use-case when all processes inside a cgroup should use one
single IP address on a host that has multiple IPs configured. Those
processes should use the IP for both ingress and egress, for TCP and UDP
traffic. So TCP/UDP servers should be bound to that IP to accept
incoming connections on it, and TCP/UDP clients should make outgoing
connections from that IP. It should not require changing application
code since it's often not possible.
Currently it's solved by intercepting glibc wrappers around syscalls
such as `bind(2)` and `connect(2)`. It's done by a shared library that
is preloaded for every process in a cgroup so that whenever TCP/UDP
server calls `bind(2)`, the library replaces IP in sockaddr before
passing arguments to syscall. When application calls `connect(2)` the
library transparently binds the local end of connection to that IP
(`bind(2)` with `IP_BIND_ADDRESS_NO_PORT` to avoid performance penalty).
Shared library approach is fragile though, e.g.:
* some applications clear env vars (incl. `LD_PRELOAD`);
* `/etc/ld.so.preload` doesn't help since some applications are linked
with option `-z nodefaultlib`;
* other applications don't use glibc and there is nothing to intercept.
== The solution ==
The patch provides a much more reliable in-kernel solution for the 1st
part of the problem: binding TCP/UDP servers to a desired IP. It does not
depend on application environment and implementation details (whether
glibc is used or not).
It adds new eBPF program type `BPF_PROG_TYPE_CGROUP_SOCK_ADDR` and
attach types `BPF_CGROUP_INET4_BIND` and `BPF_CGROUP_INET6_BIND`
(similar to already existing `BPF_CGROUP_INET_SOCK_CREATE`).
The new program type is intended to be used with sockets (`struct sock`)
in a cgroup and the user-provided `struct sockaddr`. Pointers to both of
them are part of the context passed to programs of the newly added type.
The new attach types provide hooks in the `bind(2)` system call for both
IPv4 and IPv6 so that one can write a program to override the IP
addresses and ports a user program tries to bind to, and apply such a
program to a whole cgroup.
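A sketch of such a bind hook, analogous to the connect4 sketch shown
earlier (same includes assumed; the address is an example value only):
  SEC("cgroup/bind4")
  int bind_v4(struct bpf_sock_addr *ctx)
  {
          /* Rewrite whatever IP the application asked for to the IP
           * assigned to the cgroup. */
          ctx->user_ip4 = bpf_htonl(0x0a000001);  /* 10.0.0.1 */
          return 1;       /* 0 would reject the bind() with -EPERM */
  }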
== Implementation notes ==
[1]
Separate attach types for `AF_INET` and `AF_INET6` are added
intentionally to prevent reading/writing to offsets that don't make
sense for corresponding socket family. E.g. if user passes `sockaddr_in`
it doesn't make sense to read from / write to `user_ip6[]` context
fields.
[2]
The write access to `struct bpf_sock_addr_kern` is implemented using
special field as an additional "register".
There are just two registers in `sock_addr_convert_ctx_access`: `src`
with value to write and `dst` with pointer to context that can't be
changed not to break later instructions. But the fields that are allowed
to be written to are not available directly, and to access them the
address of the corresponding pointer has to be loaded first. To get an
additional register, the first one not used by `src` and `dst` is taken,
its content is saved to `bpf_sock_addr_kern.tmp_reg`, then the register
is used to load the address of the pointer field, and finally the
register's content is restored from the temporary field after writing
the `src` value.
Change-Id: I47b4cd565cb7cd3bcf3ecf80ddf2586ee81868fb
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Introduce the BPF_PROG_QUERY command to retrieve either the set of
programs attached to a given cgroup or the set of effective programs
that will execute for events within that cgroup.
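User space can issue the query through libbpf's wrapper; a sketch,
assuming cgroup_fd is an already-open cgroup directory fd:
  #include <bpf/bpf.h>

  __u32 prog_ids[64], cnt = 64, attach_flags = 0;

  /* Directly attached programs; pass BPF_F_QUERY_EFFECTIVE as the
   * query_flags argument to get the effective set instead. */
  int err = bpf_prog_query(cgroup_fd, BPF_CGROUP_INET_INGRESS, 0,
                           &attach_flags, prog_ids, &cnt);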
Change-Id: I05e0ed5f6eddc30f4a18216d4541448816fd1ae5
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
for cgroup bits
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
== The problem ==
There are use-cases when a program of some type can be attached to
multiple attach points and those attach points must have different
permissions to access context or to call helpers.
E.g. context structure may have fields for both IPv4 and IPv6 but it
doesn't make sense to read from / write to IPv6 field when attach point
is somewhere in IPv4 stack.
The same applies to BPF helpers: it may make sense to call some helper
from one attach point, but not from another, for the same prog type.
== The solution ==
Introduce an `expected_attach_type` field in `struct bpf_attr` for
`BPF_PROG_LOAD` command. If scenario described in "The problem" section
is the case for some prog type, the field will be checked twice:
1) At load time prog type is checked to see if attach type for it must
be known to validate program permissions correctly. Prog will be
rejected with EINVAL if it's the case and `expected_attach_type` is
not specified or has invalid value.
2) At attach time `attach_type` is compared with `expected_attach_type`,
if prog type requires to have one, and, if they differ, attach will
be rejected with EINVAL.
The `expected_attach_type` is now available as part of `struct bpf_prog`
in both `bpf_verifier_ops->is_valid_access()` and
`bpf_verifier_ops->get_func_proto()` and can be used to check context
accesses and calls to helpers correspondingly.
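From user space the field is simply part of the load attributes; a sketch
of a raw BPF_PROG_LOAD, assuming insns/insn_cnt hold the program and the
usual <linux/bpf.h>, <sys/syscall.h> and <unistd.h> includes:
  union bpf_attr attr = {};

  attr.prog_type            = BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
  attr.expected_attach_type = BPF_CGROUP_INET4_BIND; /* checked at load time */
  attr.insns                = (__u64)(unsigned long)insns;
  attr.insn_cnt             = insn_cnt;
  attr.license              = (__u64)(unsigned long)"GPL";

  int prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
  /* A later BPF_PROG_ATTACH with attach_type != BPF_CGROUP_INET4_BIND
   * is rejected with -EINVAL. */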
Initially the idea was discussed by Alexei Starovoitov <ast@fb.com> and
Daniel Borkmann <daniel@iogearbox.net> here:
https://marc.info/?l=linux-netdev&m=152107378717201&w=2
Change-Id: Idead9c9cb4251bf5bd843b68bcb83072d5746226
Signed-off-by: Andrey Ignatov <rdna@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
bpf_target_prog seems long and clunky, rename it to prog_ifindex.
We don't want to call this field just ifindex, because maps
may need a similar field in the future and bpf_attr members for
programs and maps are unnamed.
Change-Id: I5473ea6721193bcf616ac3a1056c808446af9c8d
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The fact that we don't know which device the program is going
to be used on is quite limiting in current eBPF infrastructure.
We have to reverse or limit the changes which kernel makes to
the loaded bytecode if we want it to be offloaded to a networking
device. We also have to invent new APIs for debugging and
troubleshooting support.
Make it possible to load programs for a specific netdev. This
helps us to bring the debug information closer to the core
eBPF infrastructure (e.g. we will be able to reuse the verifier
log in device JIT). It allows device JITs to perform translation
on the original bytecode.
__bpf_prog_get() when called to get a reference for an attachment
point will now refuse to give it if program has a device assigned.
Following patches will add a version of that function which passes
the expected netdev in. @type argument in __bpf_prog_get() is
renamed to attach_type to make it clearer that it's only set on
attachment.
All calls to ndo_bpf are protected by rtnl, only verifier callbacks
are not. We need a wait queue to make sure netdev doesn't get
destroyed while verifier is still running and calling its driver.
Change-Id: Iba7b96574abc005ad3351d6db2528eb534e47561
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
ndo_xdp is a control path callback for setting up XDP in the
driver. We can reuse it for other forms of communication
between the eBPF stack and the drivers. Rename the callback
and associated structures and definitions.
Change-Id: I08c456c9afa712ce0b7a98c24b6f46545e69f3cc
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Simon Horman <simon.horman@netronome.com>
Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[ Upstream commit a37a32583e282d8d815e22add29bc1e91e19951a ]
When trying to finish resolving a struct member, btf_struct_resolve
saves the member type id in a u16 temporary variable. This truncates
the 32 bit type id value if it exceeds UINT16_MAX.
As a result, structs that have members with type ids > UINT16_MAX and
which need resolution will fail with a message like this:
[67414] STRUCT ff_device size=120 vlen=12
effect_owners type_id=67434 bits_offset=960 Member exceeds struct_size
Fix this by changing the type of last_member_type_id to u32.
Fixes: a0791f0df7d2 ("bpf: fix BTF limits")
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Change-Id: I3a3db7bd5dc8836dd2aa2ba572169aa2a0629eca
Signed-off-by: Lorenz Bauer <oss@lmb.io>
Link: https://lore.kernel.org/r/20220910110120.339242-1-oss@lmb.io
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
commit da6c7faeb103c493e505e87643272f70be586635 upstream.
btf_enum_check_member() unconditionally treated the size of "enum" type
members in a struct/union as the size of "int", even when the enum was
packed to a smaller size.
This patch fixes BTF enum verification to use the correct size
of member in BPF programs.
Fixes: 179cde8cef7e ("bpf: btf: Check members of struct/union")
Change-Id: Idd2f710477e7abbc6cf541fe3d9fecfe0d4ce594
Signed-off-by: Yoshiki Komachi <komachi.yoshiki@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1583825550-18606-2-git-send-email-komachi.yoshiki@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit a0791f0df7d212c245761538b17a9ea93607b667 ]
vmlinux BTF has more than 64k types.
Its string section is also at the offset larger than 64k.
Adjust both limits to make in-kernel BTF verifier successfully parse in-kernel BTF.
Fixes: 69b693f0aefa ("bpf: btf: Introduce BPF Type Format (BTF)")
Change-Id: I921037306001847bb0afac797a5b33f625bf65d8
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 4a6998aff82a20a1aece86a186d8e5263f8b2315 ]
Wenwen Wang reported:
In btf_parse(), the header of the user-space btf data 'btf_data'
is firstly parsed and verified through btf_parse_hdr().
In btf_parse_hdr(), the header is copied from user-space 'btf_data'
to kernel-space 'btf->hdr' and then verified. If no error happens
during the verification process, the whole data of 'btf_data',
including the header, is then copied to 'data' in btf_parse(). It
is obvious that the header is copied twice here. More importantly,
no check is enforced after the second copy to make sure the headers
obtained in these two copies are same. Given that 'btf_data' resides
in the user space, a malicious user can race to modify the header
between these two copies. By doing so, the user can inject
inconsistent data, which can cause undefined behavior of the
kernel and introduce potential security risk.
This issue is similar to the one fixed in commit 8af03d1ae2e1 ("bpf:
btf: Fix a missing check bug"). To fix it, this patch copies the user
'btf_data' *before* parsing / verifying the BTF header.
Fixes: 69b693f0aefa ("bpf: btf: Introduce BPF Type Format (BTF)")
Change-Id: I36ea252fc676d49cbb51e71c1c72ca7ebdd179cd
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Co-developed-by: Wenwen Wang <wang6495@umn.edu>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
[ Upstream commit 8af03d1ae2e154a8be3631e8694b87007e1bdbc2 ]
In btf_parse_hdr(), the length of the btf data header is firstly copied
from the user space to 'hdr_len' and checked to see whether it is larger
than 'btf_data_size'. If yes, an error code EINVAL is returned. Otherwise,
the whole header is copied again from the user space to 'btf->hdr'.
However, after the second copy, there is no check between
'btf->hdr->hdr_len' and 'hdr_len' to confirm that the two copies get the
same value. Given that the btf data is in the user space, a malicious user
can race to change the data between the two copies. By doing so, the user
can provide malicious data to the kernel and cause undefined behavior.
This patch adds a necessary check after the second copy, to make sure
'btf->hdr->hdr_len' has the same value as 'hdr_len'. Otherwise, an error
code EINVAL will be returned.
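Roughly, the added check has this shape (a sketch of the idea, not the
exact hunk):
  /* Re-validate the header length obtained by the second copy against
   * the value used for the first check. */
  if (btf->hdr.hdr_len != hdr_len)
          return -EINVAL;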
Change-Id: I8b01374ab866f1917b3f21486eae42403b520711
Signed-off-by: Wenwen Wang <wang6495@umn.edu>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
The end boundary math for type section is incorrect in
btf_check_all_metas(). It just happens that hdr->type_off
is always 0 for now because there are only two sections
(type and string) and string section must be at the end (ensured
in btf_parse_str_sec).
However, type_off may not be 0 if a new section would be added later.
This patch fixes it.
Fixes: f80442a4cd18 ("bpf: btf: Change how section is supported in btf_header")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Change-Id: Ic748f3764714643b4002f1459a63a34a3af9317b
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The check len > skb_headlen(skb) cannot be used as an upper bound
for the packet length since it does not have any relation to the full
linear packet length when filtering is used from upper layers (e.g.
in case of reuseport BPF programs) as by then skb->data, skb->len
already got mangled through __skb_pull() and others.
Fixes: 4e1ec56cdc59 ("bpf: add skb_load_bytes_relative helper")
Change-Id: Ic72959d61a393dc411f7654697d39b5fabc56604
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
This patch ensures the member->offset values of a struct are in the
correct order (i.e. a later member's offset cannot go backward).
The current "pahole -J" BTF encoder does not generate something
like this. However, checking this ensures that a future encoder
will not violate it.
Fixes: 69b693f0aefa ("bpf: btf: Introduce BPF Type Format (BTF)")
Change-Id: I07772d4be8072c45c751389a804431032c535358
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
This patch shrinks the BTF_INT_BITS() mask. The current
btf_int_check_meta() ensures the nr_bits of an integer
cannot exceed 64. Hence, it is mostly an uapi cleanup.
The actual btf usage (i.e. seq_show()) is also modified
to use u8 instead of u16. The verification (e.g. btf_int_check_meta())
path stays as is to deal with invalid BTF situation.
Fixes: 69b693f0aefa ("bpf: btf: Introduce BPF Type Format (BTF)")
Change-Id: I870f4579152bf26f29925382e44e397e35ebb344
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
When extracting bitfield from a number, btf_int_bits_seq_show() builds
a mask and accesses least significant byte of the number in a way
specific to little-endian. This patch fixes that by checking endianness
of the machine and then shifting left and right the unneeded bits.
Thanks to Martin Lau for the help in navigating potential pitfalls when
dealing with endianness and for the final solution.
Fixes: b00b8daec828 ("bpf: btf: Add pretty print capability for data with BTF type info")
Change-Id: Ib5586230ad33de1e0af301b4c3790355426450af
Signed-off-by: Okash Khawaja <osk@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
The t->type in BTF_KIND_FWD is not used. It must be 0.
This patch ensures that and also adds a test case in test_btf.c
Change-Id: I3a12680100b4379cc69989e9d0e48a9142d1e6e6
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
This patch ensures array's t->size is 0.
The array size is decided by its individual elem's size and the
number of elements. Hence, t->size is not used and
it must be 0.
A test case is added to test_btf.c
Change-Id: I5f1c299322dbd2172b24439ebd62473f245a513c
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
gcc warns about a noreturn function possibly returning in
some configurations:
kernel/bpf/btf.c: In function 'env_type_is_resolve_sink':
kernel/bpf/btf.c:729:1: error: control reaches end of non-void function [-Werror=return-type]
Using BUG() instead of BUG_ON() avoids that warning and otherwise
does the exact same thing.
Fixes: eb3f595dab40 ("bpf: btf: Validate type reference")
Change-Id: I389a395a727a75301b4342e60da6dca10f930ace
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Sparse warning:
kernel/bpf/btf.c:1985:34: warning: Variable length array is used.
This patch directly uses ARRAY_SIZE().
Fixes: f80442a4cd18 ("bpf: btf: Change how section is supported in btf_header")
Change-Id: I0457ce16acef2df07f61cff0303f7cf1cba8c8c5
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
This patch does the following:
1. Limit BTF_MAX_TYPES and BTF_MAX_NAME_OFFSET to 64k. We can
raise it later.
2. Remove the BTF_TYPE_PARENT and BTF_STR_TBL_ELF_ID. They are
currently encoded at the highest bit of a u32.
This is because the current use case does not require supporting a
parent type (i.e. a type_id referring to a type in another BTF file).
It also does not support referring to a string in ELF.
The BTF_TYPE_PARENT and BTF_STR_TBL_ELF_ID checks are replaced
by BTF_TYPE_ID_CHECK and BTF_STR_OFFSET_CHECK which are
defined in btf.c instead of uapi/linux/btf.h.
3. Limit the BTF_INFO_KIND from 5 bits to 4 bits which is enough.
This leaves unused bit headroom if we ever need it later.
4. The root bit in BTF_INFO is also removed because it is not
used in the current use case.
5. Remove BTF_INT_VARARGS since func type is not supported now.
The BTF_INT_ENCODING is limited to 4 bits instead of 8 bits.
The above can be added back later because the verifier
ensures the unused bits are zeros.
Change-Id: I1046de7b41054f007572fec5ca7fc62c3fd66440
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Instead of ignoring the array->index_type field, enforce that it must be
a BTF_KIND_INT of size 1/2/4/8 bytes.
Change-Id: Ibfcef45f9df9ed1149eb7c521bece3f333ea0007
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
There are currently unused section descriptions in the btf_header. Those
sections are here to support future BTF use cases. For example, the
func section (func_off) is to support function signature (e.g. the BPF
prog function signature).
Instead of spelling out all potential sections up-front in the
btf_header, this patch makes changes to btf_header such that extending
it (e.g. adding a section) is possible later. The unused ones can be
removed for now and
they can be added back later.
This patch:
1. adds a hdr_len to the btf_header. It will allow adding
sections (and other info like parent_label and parent_name)
later. The check is similar to the existing bpf_attr.
If a user passes in a longer hdr_len, the kernel
ensures the extra trailing bytes are 0.
2. allows the section order in the BTF object to be
different from its sec_off order in btf_header.
3. each sec_off is followed by a sec_len. There must be no gaps or
overlaps among the sections.
The string section is ensured to be at the end due to the 4 bytes
alignment requirement of the type section.
The above changes will allow enough flexibility to
add new sections (and other info) to the btf_header later.
This patch also removes an unnecessary !err check
at the end of btf_parse().
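The resulting extensible layout, assuming it matches what ended up in
include/uapi/linux/btf.h, looks roughly like this:
  struct btf_header {
          __u16   magic;
          __u8    version;
          __u8    flags;
          __u32   hdr_len;   /* allows adding sections/info later */

          /* all offsets are in bytes relative to the end of this header */
          __u32   type_off;  /* offset of type section */
          __u32   type_len;  /* length of type section */
          __u32   str_off;   /* offset of string section */
          __u32   str_len;   /* length of string section */
  };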
Change-Id: I8e7d8673d7c4cc6a5f5a0bccc64492de5f64a30a
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
This patch uses u64_to_user_ptr() to cast info.map_ids to a userspace ptr.
It also tags the user_map_ids with '__user' for sparse check.
Fixes: cb4d2b3f03d8 ("bpf: Add name, load_time, uid and map_ids to bpf_prog_info")
Change-Id: I18907e003d20295d9e375eb1493c848d795a7a16
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Upstream calls bpf_verifier_vlog() directly and calling
bpf_verifier_log_write() here can sometimes break format args
and cause kernel panics
Change-Id: I5f7dde9e83b8ef5a2bd1d2739bc08dd2ce69c41d
The following compilation issues are observed when CONFIG_SCHED_WALT is disabled.
1. kernel/sched/cpufreq_schedutil.c:408:23: \
error: implicit declaration of function 'boosted_cpu_util'
2. kernel/sched/core_ctl.c:1291:2: \
error: implicit declaration of function 'for_each_sched_cluster'
Fix these compilation issues by adding/updating proper checks
and dependencies as needed.
Change-Id: I59d3714a9fca0ff58758ec974f50eb5f3f00ae98
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
Make sure the compiler optimises away conditions that are always false
since commit b91319892e (cpufreq: schedutil: Don't jump to max frequency for RT tasks).
Change-Id: I7a108ff1a4ba09f2cb82ea8a82bd15967e724709
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Currently, the raw cache will be reset when next_f is changed after get_next_freq for correctness. However, it may introduce more cycles in those cases. This patch changes it to maintain the cached value instead of dropping it.
Bug: 159936782
Bug: 158863204
Signed-off-by: Wei Wang <wvw@google.com>
[dereference23: Backport to 4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I519ca02dd2e6038e3966e1f68fee641628827c82
The cpufreq_schedutil governor keeps a cache of the last
raw frequency that was mapped to a supported device frequency.
If the next request for a frequency matches the cached
value, the policy's next_freq value is reused. But there
are paths that can update the raw cached value without
updating the next_freq value, and there are paths that
can set the next_freq value without setting the raw
cached value. On those paths, the cached value
must be reset.
The case that has been observed is when a frequency request
reaches sugov_update_commit but is then rejected by
the sugov_up_down_rate_limit check.
Bug: 116279565
Change-Id: I7c585339a04ff1732054d6e5b36a57e2d41266aa
Signed-off-by: John Dias <joaodias@google.com>
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This is tuned to match energy model characteristics and scheduler
efficiency enhancements.
Change-Id: Ia60e1ea888457fa1c0c0273cdd4b0180f0a87abf
Co-authored-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
ttwu_remote() unconditionally locks the task's runqueue lock, which implies
a full barrier across the lock and unlock, so the acquire barrier after the
control dependency is only needed when the task isn't on the runqueue.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: Ibb988fe341ba3109d381682a8725fbd2e7a648e3
Scheduler code is very hot and every little optimization counts. Instead
of constantly checking sched_numa_balancing when NUMA is disabled,
compile it out.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I7334594fbe835f615a199cfe02ee526135abab06
wake_affine_idle() prefers to move a task to the current CPU if the
wakeup is due to an interrupt. The expectation is that the interrupt
data is cache hot and relevant to the waking task as well as avoiding
a search. However, there is no way to determine if there was cache hot
data on the previous CPU that may exceed the interrupt data. Furthermore,
round-robin delivery of interrupts can migrate tasks around a socket where
each CPU is under-utilised. This can interact badly with cpufreq which
makes decisions based on per-cpu data. It has been observed on machines
with HWP that p-states are not boosted to their maximum levels even though
the workload is latency and throughput sensitive.
This patch uses the previous CPU for the task if it's idle and cache-affine
with the current CPU even if the current CPU is idle due to the wakeup
being related to the interrupt. This reduces migrations at the cost of
the interrupt data not being cache hot when the task wakes.
A variety of workloads were tested on various machines and no adverse
impact was noticed that was outside noise. dbench on ext4 on UMA showed
roughly 10% reduction in the number of CPU migrations and it is a case
where interrupts are frequent for IO completions. In most cases, the
difference in performance is quite small but variability is often
reduced. For example, this is the result for pgbench running on a UMA
machine with different numbers of clients.
4.15.0-rc9 4.15.0-rc9
baseline waprev-v1
Hmean 1 22096.28 ( 0.00%) 22734.86 ( 2.89%)
Hmean 4 74633.42 ( 0.00%) 75496.77 ( 1.16%)
Hmean 7 115017.50 ( 0.00%) 113030.81 ( -1.73%)
Hmean 12 126209.63 ( 0.00%) 126613.40 ( 0.32%)
Hmean 16 131886.91 ( 0.00%) 130844.35 ( -0.79%)
Stddev 1 636.38 ( 0.00%) 417.11 ( 34.46%)
Stddev 4 614.64 ( 0.00%) 583.24 ( 5.11%)
Stddev 7 542.46 ( 0.00%) 435.45 ( 19.73%)
Stddev 12 173.93 ( 0.00%) 171.50 ( 1.40%)
Stddev 16 671.42 ( 0.00%) 680.30 ( -1.32%)
CoeffVar 1 2.88 ( 0.00%) 1.83 ( 36.26%)
Note that the difference in performance is marginal but for low utilisation,
there is less variability.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180130104555.4125-4-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I28ccbec4a55ff8114aa7e8ce92e5e2c48806361d
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
If waking from an idle CPU due to an interrupt then it's possible that
the waker task will be pulled to wake on the current CPU. Unfortunately,
depending on the type of interrupt and IRQ configuration, there may not
be a strong relationship between the CPU an interrupt was delivered on
and the CPU a task was running on. For example, the interrupts could all
be delivered to CPUs on one particular node due to the machine topology
or IRQ affinity configuration. Another example is an interrupt for an IO
completion which can be delivered to any CPU where there is no guarantee
the data is either cache hot or even local.
This patch was motivated by the observation that an IO workload was
being pulled cross-node on a frequent basis when IO completed. From a
wakeup latency perspective, it's still useful to know that an idle CPU is
immediately available for use but let's only consider an automatic migration
if the CPUs share cache to limit damage due to NUMA migrations. Migrations
may still occur if wake_affine_weight determines it's appropriate.
These are the throughput results for dbench running on ext4 comparing
4.15-rc3 and this patch on a 2-socket machine where interrupts due to IO
completions can happen on any CPU.
4.15.0-rc3 4.15.0-rc3
vanilla lessmigrate
Hmean 1 854.64 ( 0.00%) 865.01 ( 1.21%)
Hmean 2 1229.60 ( 0.00%) 1274.44 ( 3.65%)
Hmean 4 1591.81 ( 0.00%) 1628.08 ( 2.28%)
Hmean 8 1845.04 ( 0.00%) 1831.80 ( -0.72%)
Hmean 16 2038.61 ( 0.00%) 2091.44 ( 2.59%)
Hmean 32 2327.19 ( 0.00%) 2430.29 ( 4.43%)
Hmean 64 2570.61 ( 0.00%) 2568.54 ( -0.08%)
Hmean 128 2481.89 ( 0.00%) 2499.28 ( 0.70%)
Stddev 1 14.31 ( 0.00%) 5.35 ( 62.65%)
Stddev 2 21.29 ( 0.00%) 11.09 ( 47.92%)
Stddev 4 7.22 ( 0.00%) 6.80 ( 5.92%)
Stddev 8 26.70 ( 0.00%) 9.41 ( 64.76%)
Stddev 16 22.40 ( 0.00%) 20.01 ( 10.70%)
Stddev 32 45.13 ( 0.00%) 44.74 ( 0.85%)
Stddev 64 93.10 ( 0.00%) 93.18 ( -0.09%)
Stddev 128 184.28 ( 0.00%) 177.85 ( 3.49%)
Note the small increase in throughput for low thread counts but also
note that the standard deviation for each sample during the test run is
lower. The throughput figures for dbench can be misleading so the benchmark
is actually modified to time the latency of the processing of one load
file with many samples taken. The difference in latency is
4.15.0-rc3 4.15.0-rc3
vanilla lessmigrate
Amean 1 21.71 ( 0.00%) 21.47 ( 1.08%)
Amean 2 30.89 ( 0.00%) 29.58 ( 4.26%)
Amean 4 47.54 ( 0.00%) 46.61 ( 1.97%)
Amean 8 82.71 ( 0.00%) 82.81 ( -0.12%)
Amean 16 149.45 ( 0.00%) 145.01 ( 2.97%)
Amean 32 265.49 ( 0.00%) 248.43 ( 6.42%)
Amean 64 463.23 ( 0.00%) 463.55 ( -0.07%)
Amean 128 933.97 ( 0.00%) 935.50 ( -0.16%)
Stddev 1 1.58 ( 0.00%) 1.54 ( 2.26%)
Stddev 2 2.84 ( 0.00%) 2.95 ( -4.15%)
Stddev 4 6.78 ( 0.00%) 6.85 ( -0.99%)
Stddev 8 16.85 ( 0.00%) 16.37 ( 2.85%)
Stddev 16 41.59 ( 0.00%) 41.04 ( 1.32%)
Stddev 32 111.05 ( 0.00%) 105.11 ( 5.35%)
Stddev 64 285.94 ( 0.00%) 288.01 ( -0.72%)
Stddev 128 803.39 ( 0.00%) 809.73 ( -0.79%)
It's a small improvement which is not surprising given that migrations
to a different node are not that common. However, it is noticeable
in the CPU migration statistics which are reduced by 24%.
There was a query for v1 of this patch about NAS so here are the results
for C-class using MPI for parallelisation on the same machine
nas-mpi
4.15.0-rc3 4.15.0-rc3
vanilla noirq
Time cg.C 24.25 ( 0.00%) 23.17 ( 4.45%)
Time ep.C 8.22 ( 0.00%) 8.29 ( -0.85%)
Time ft.C 22.67 ( 0.00%) 20.34 ( 10.28%)
Time is.C 1.42 ( 0.00%) 1.47 ( -3.52%)
Time lu.C 55.62 ( 0.00%) 54.81 ( 1.46%)
Time mg.C 7.93 ( 0.00%) 7.91 ( 0.25%)
4.15.0-rc3 4.15.0-rc3
vanilla noirq-v1r1
User 3799.96 3748.34
System 672.10 626.15
Elapsed 91.91 79.49
lu.C sees a small gain, ft.C a large gain and ep.C and is.C see small
regressions but in terms of absolute time, the difference is small and
likely within run-to-run variance. System CPU usage is slightly reduced.
schbench from Facebook was also requested. This is a bit of a mixed bag but
it's important to note that this workload should not be heavily impacted
by wakeups from interrupt context.
4.15.0-rc3 4.15.0-rc3
vanilla noirq-v1r1
Lat 50.00th-qrtle-1 41.00 ( 0.00%) 41.00 ( 0.00%)
Lat 75.00th-qrtle-1 42.00 ( 0.00%) 42.00 ( 0.00%)
Lat 90.00th-qrtle-1 43.00 ( 0.00%) 44.00 ( -2.33%)
Lat 95.00th-qrtle-1 44.00 ( 0.00%) 46.00 ( -4.55%)
Lat 99.00th-qrtle-1 57.00 ( 0.00%) 58.00 ( -1.75%)
Lat 99.50th-qrtle-1 59.00 ( 0.00%) 59.00 ( 0.00%)
Lat 99.90th-qrtle-1 67.00 ( 0.00%) 78.00 ( -16.42%)
Lat 50.00th-qrtle-2 40.00 ( 0.00%) 51.00 ( -27.50%)
Lat 75.00th-qrtle-2 45.00 ( 0.00%) 56.00 ( -24.44%)
Lat 90.00th-qrtle-2 53.00 ( 0.00%) 59.00 ( -11.32%)
Lat 95.00th-qrtle-2 57.00 ( 0.00%) 61.00 ( -7.02%)
Lat 99.00th-qrtle-2 67.00 ( 0.00%) 71.00 ( -5.97%)
Lat 99.50th-qrtle-2 69.00 ( 0.00%) 74.00 ( -7.25%)
Lat 99.90th-qrtle-2 83.00 ( 0.00%) 77.00 ( 7.23%)
Lat 50.00th-qrtle-4 51.00 ( 0.00%) 51.00 ( 0.00%)
Lat 75.00th-qrtle-4 57.00 ( 0.00%) 56.00 ( 1.75%)
Lat 90.00th-qrtle-4 60.00 ( 0.00%) 59.00 ( 1.67%)
Lat 95.00th-qrtle-4 62.00 ( 0.00%) 62.00 ( 0.00%)
Lat 99.00th-qrtle-4 73.00 ( 0.00%) 72.00 ( 1.37%)
Lat 99.50th-qrtle-4 76.00 ( 0.00%) 74.00 ( 2.63%)
Lat 99.90th-qrtle-4 85.00 ( 0.00%) 78.00 ( 8.24%)
Lat 50.00th-qrtle-8 54.00 ( 0.00%) 58.00 ( -7.41%)
Lat 75.00th-qrtle-8 59.00 ( 0.00%) 62.00 ( -5.08%)
Lat 90.00th-qrtle-8 65.00 ( 0.00%) 66.00 ( -1.54%)
Lat 95.00th-qrtle-8 67.00 ( 0.00%) 70.00 ( -4.48%)
Lat 99.00th-qrtle-8 78.00 ( 0.00%) 79.00 ( -1.28%)
Lat 99.50th-qrtle-8 81.00 ( 0.00%) 80.00 ( 1.23%)
Lat 99.90th-qrtle-8 116.00 ( 0.00%) 83.00 ( 28.45%)
Lat 50.00th-qrtle-16 65.00 ( 0.00%) 64.00 ( 1.54%)
Lat 75.00th-qrtle-16 77.00 ( 0.00%) 71.00 ( 7.79%)
Lat 90.00th-qrtle-16 83.00 ( 0.00%) 82.00 ( 1.20%)
Lat 95.00th-qrtle-16 87.00 ( 0.00%) 87.00 ( 0.00%)
Lat 99.00th-qrtle-16 95.00 ( 0.00%) 96.00 ( -1.05%)
Lat 99.50th-qrtle-16 99.00 ( 0.00%) 103.00 ( -4.04%)
Lat 99.90th-qrtle-16 104.00 ( 0.00%) 122.00 ( -17.31%)
Lat 50.00th-qrtle-32 71.00 ( 0.00%) 73.00 ( -2.82%)
Lat 75.00th-qrtle-32 91.00 ( 0.00%) 92.00 ( -1.10%)
Lat 90.00th-qrtle-32 108.00 ( 0.00%) 107.00 ( 0.93%)
Lat 95.00th-qrtle-32 118.00 ( 0.00%) 115.00 ( 2.54%)
Lat 99.00th-qrtle-32 134.00 ( 0.00%) 129.00 ( 3.73%)
Lat 99.50th-qrtle-32 138.00 ( 0.00%) 133.00 ( 3.62%)
Lat 99.90th-qrtle-32 149.00 ( 0.00%) 146.00 ( 2.01%)
Lat 50.00th-qrtle-39 83.00 ( 0.00%) 81.00 ( 2.41%)
Lat 75.00th-qrtle-39 105.00 ( 0.00%) 102.00 ( 2.86%)
Lat 90.00th-qrtle-39 120.00 ( 0.00%) 119.00 ( 0.83%)
Lat 95.00th-qrtle-39 129.00 ( 0.00%) 128.00 ( 0.78%)
Lat 99.00th-qrtle-39 153.00 ( 0.00%) 149.00 ( 2.61%)
Lat 99.50th-qrtle-39 166.00 ( 0.00%) 156.00 ( 6.02%)
Lat 99.90th-qrtle-39 12304.00 ( 0.00%) 12848.00 ( -4.42%)
When heavily loaded (e.g. 99.50th-qrtle-39 indicates 39 threads), there
are small gains in many cases. Otherwise it depends on the quartile used
where it can be bad -- e.g. 75.00th-qrtle-2. However, even these results
are probably a co-incidence. For this workload, much depends on what node
the threads get placed on and their relative locality and not wakeups from
interrupt context. A larger component on how it behaves would be automatic
NUMA balancing where a fault incurred to measure locality would be a much
larger contributer to latency than the wakeup path.
This is the results from an almost identical machine that happened to run
the same test. They only differ in terms of storage which is irrelevant
for this test.
4.15.0-rc3 4.15.0-rc3
vanilla noirq-v1r1
Lat 50.00th-qrtle-1 41.00 ( 0.00%) 41.00 ( 0.00%)
Lat 75.00th-qrtle-1 42.00 ( 0.00%) 42.00 ( 0.00%)
Lat 90.00th-qrtle-1 44.00 ( 0.00%) 43.00 ( 2.27%)
Lat 95.00th-qrtle-1 53.00 ( 0.00%) 45.00 ( 15.09%)
Lat 99.00th-qrtle-1 59.00 ( 0.00%) 58.00 ( 1.69%)
Lat 99.50th-qrtle-1 60.00 ( 0.00%) 59.00 ( 1.67%)
Lat 99.90th-qrtle-1 86.00 ( 0.00%) 61.00 ( 29.07%)
Lat 50.00th-qrtle-2 52.00 ( 0.00%) 41.00 ( 21.15%)
Lat 75.00th-qrtle-2 57.00 ( 0.00%) 46.00 ( 19.30%)
Lat 90.00th-qrtle-2 60.00 ( 0.00%) 53.00 ( 11.67%)
Lat 95.00th-qrtle-2 62.00 ( 0.00%) 57.00 ( 8.06%)
Lat 99.00th-qrtle-2 73.00 ( 0.00%) 68.00 ( 6.85%)
Lat 99.50th-qrtle-2 74.00 ( 0.00%) 71.00 ( 4.05%)
Lat 99.90th-qrtle-2 90.00 ( 0.00%) 75.00 ( 16.67%)
Lat 50.00th-qrtle-4 57.00 ( 0.00%) 52.00 ( 8.77%)
Lat 75.00th-qrtle-4 60.00 ( 0.00%) 58.00 ( 3.33%)
Lat 90.00th-qrtle-4 62.00 ( 0.00%) 62.00 ( 0.00%)
Lat 95.00th-qrtle-4 65.00 ( 0.00%) 65.00 ( 0.00%)
Lat 99.00th-qrtle-4 76.00 ( 0.00%) 75.00 ( 1.32%)
Lat 99.50th-qrtle-4 77.00 ( 0.00%) 77.00 ( 0.00%)
Lat 99.90th-qrtle-4 87.00 ( 0.00%) 81.00 ( 6.90%)
Lat 50.00th-qrtle-8 59.00 ( 0.00%) 57.00 ( 3.39%)
Lat 75.00th-qrtle-8 63.00 ( 0.00%) 62.00 ( 1.59%)
Lat 90.00th-qrtle-8 66.00 ( 0.00%) 67.00 ( -1.52%)
Lat 95.00th-qrtle-8 68.00 ( 0.00%) 70.00 ( -2.94%)
Lat 99.00th-qrtle-8 79.00 ( 0.00%) 80.00 ( -1.27%)
Lat 99.50th-qrtle-8 80.00 ( 0.00%) 84.00 ( -5.00%)
Lat 99.90th-qrtle-8 84.00 ( 0.00%) 90.00 ( -7.14%)
Lat 50.00th-qrtle-16 65.00 ( 0.00%) 65.00 ( 0.00%)
Lat 75.00th-qrtle-16 77.00 ( 0.00%) 75.00 ( 2.60%)
Lat 90.00th-qrtle-16 84.00 ( 0.00%) 83.00 ( 1.19%)
Lat 95.00th-qrtle-16 88.00 ( 0.00%) 87.00 ( 1.14%)
Lat 99.00th-qrtle-16 97.00 ( 0.00%) 96.00 ( 1.03%)
Lat 99.50th-qrtle-16 100.00 ( 0.00%) 104.00 ( -4.00%)
Lat 99.90th-qrtle-16 110.00 ( 0.00%) 126.00 ( -14.55%)
Lat 50.00th-qrtle-32 70.00 ( 0.00%) 71.00 ( -1.43%)
Lat 75.00th-qrtle-32 92.00 ( 0.00%) 94.00 ( -2.17%)
Lat 90.00th-qrtle-32 110.00 ( 0.00%) 110.00 ( 0.00%)
Lat 95.00th-qrtle-32 121.00 ( 0.00%) 118.00 ( 2.48%)
Lat 99.00th-qrtle-32 135.00 ( 0.00%) 137.00 ( -1.48%)
Lat 99.50th-qrtle-32 140.00 ( 0.00%) 146.00 ( -4.29%)
Lat 99.90th-qrtle-32 150.00 ( 0.00%) 160.00 ( -6.67%)
Lat 50.00th-qrtle-39 80.00 ( 0.00%) 71.00 ( 11.25%)
Lat 75.00th-qrtle-39 102.00 ( 0.00%) 91.00 ( 10.78%)
Lat 90.00th-qrtle-39 118.00 ( 0.00%) 108.00 ( 8.47%)
Lat 95.00th-qrtle-39 128.00 ( 0.00%) 117.00 ( 8.59%)
Lat 99.00th-qrtle-39 149.00 ( 0.00%) 133.00 ( 10.74%)
Lat 99.50th-qrtle-39 160.00 ( 0.00%) 139.00 ( 13.12%)
Lat 99.90th-qrtle-39 13808.00 ( 0.00%) 4920.00 ( 64.37%)
Despite being nearly identical, it showed a variety of major gains so
I'm not convinced that heavy emphasis should be placed on this particular
workload in terms of evaluating this particular patch. Further evidence of
this is the fact that testing on a UMA machine showed small gains/losses
even though the patch should be a no-op on UMA.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171219085947.13136-2-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: Icf371d0575177f50614b01b11c8bfb4d6a34526c
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This is a preparation patch that has wake_affine*() return a CPU ID instead of
a boolean. The intent is to allow the wake_affine() helpers to be avoided
if a decision is already made. This patch has no functional change.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180130104555.4125-3-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I71255e78c067ec35b81f6a20e8c161982ce34672
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
wake_affine_idle() takes parameters it never uses so clean it up.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180130104555.4125-2-mgorman@techsingularity.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I800046ec2ecafdcc6120a78b17a8d37511ef50b1
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
As a first step this patch makes the cfs_tasks list an MRU one.
It means, that when a next task is picked to run on physical
CPU it is moved to the front of the list.
Therefore, the cfs_tasks list is more or less sorted (except
woken tasks) starting from recently given CPU time tasks toward
tasks with max wait time in a run-queue, i.e. MRU list.
Second, as part of the load balance operation, this approach
starts detach_tasks()/detach_one_task() from the tail of the
queue instead of the head, giving some advantages:
- tends to pick a task with highest wait time;
- tasks located in the tail are less likely cache-hot,
therefore the can_migrate_task() check is more likely to pass.
hackbench illustrates slightly better performance. For example
doing 1000 samples and 40 groups on i5-3320M CPU, it shows below
figures:
default: 0.657 avg
patched: 0.646 avg
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Kirill Tkhai <tkhai@yandex.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Paul Turner <pjt@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Link: http://lkml.kernel.org/r/20170913102430.8985-2-urezki@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: Id44ee1af4d88a01db1994e642518b7e3cc58f937
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
  schedule()                                ttwu()
    deactivate_task();                        if (p->on_rq && ...) // false
                                              atomic_dec(&task_rq(p)->nr_iowait);
    if (prev->in_iowait)
      atomic_inc(&rq->nr_iowait);
Allows nr_iowait to be decremented before it gets incremented,
resulting in more dodgy IO-wait numbers than usual.
Note that because we can now do ttwu_queue_wakelist() before
p->on_cpu==0, we lose the natural ordering and have to further delay
the decrement.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Link: https://lkml.kernel.org/r/20201117093829.GD3121429@hirez.programming.kicks-ass.net
Change-Id: Iee2ed007cbdbe9cb1ca8e028d928d263d85e1f2b
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Mel reported that on some ARM64 platforms loadavg goes bananas and
Will tracked it down to the following race:
  CPU0                                      CPU1
  schedule()
    prev->sched_contributes_to_load = X;
    deactivate_task(prev);
                                            try_to_wake_up()
                                              if (p->on_rq &&) // false
                                              if (smp_load_acquire(&p->on_cpu) && // true
                                                  ttwu_queue_wakelist())
                                                    p->sched_remote_wakeup = Y;
    smp_store_release(prev->on_cpu, 0);
where both p->sched_contributes_to_load and p->sched_remote_wakeup are
in the same word, and thus the stores X and Y race (and can clobber
one another's data).
Whereas prior to commit c6e7bd7afaeb ("sched/core: Optimize ttwu()
spinning on p->on_cpu") the p->on_cpu handoff serialized access to
p->sched_remote_wakeup (just as it still does with
p->sched_contributes_to_load) that commit broke that by calling
ttwu_queue_wakelist() with p->on_cpu != 0.
However, due to
  p->XXX = X                        ttwu()
  schedule()                          if (p->on_rq && ...) // false
    smp_mb__after_spinlock()          if (smp_load_acquire(&p->on_cpu) &&
    deactivate_task()                     ttwu_queue_wakelist())
      p->on_rq = 0;                         p->sched_remote_wakeup = Y;
We can be sure any 'current' store is complete and 'current' is
guaranteed asleep. Therefore we can move p->sched_remote_wakeup into
the current flags word.
Note: while the observed failure was loadavg accounting gone wrong due
to ttwu() clobbering p->sched_contributes_to_load, the reverse problem
is also possible where schedule() clobbers p->sched_remote_wakeup,
this could result in enqueue_entity() wrecking ->vruntime and causing
scheduling artifacts.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Mel Gorman <mgorman@techsingularity.net>
Debugged-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20201117083016.GK3121392@hirez.programming.kicks-ass.net
Change-Id: I9a2e7ef7bfd4e3c1c8bdd49ecb4793634924b6c8
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
There is apparently one site that violates the rule that only current
and ttwu() will modify task->state, namely ptrace_{,un}freeze_traced()
will change task->state for a remote task.
Oleg explains:
"TASK_TRACED/TASK_STOPPED was always protected by siglock. In
particular, ttwu(__TASK_TRACED) must be always called with siglock
held. That is why ptrace_freeze_traced() assumes it can safely do
s/TASK_TRACED/__TASK_TRACED/ under spin_lock(siglock)."
This breaks the ordering scheme introduced by commit:
dbfb089d360b ("sched: Fix loadavg accounting race")
Specifically, the reload not matching no longer implies we don't have
to block.
Simplify things by noting that what we need is a LOAD->STORE ordering
and this can be provided by a control dependency.
So replace:
  prev_state = prev->state;
  raw_spin_lock(&rq->lock);
  smp_mb__after_spinlock(); /* SMP-MB */
  if (... && prev_state && prev_state == prev->state)
          deactivate_task();
with:
  prev_state = prev->state;
  if (... && prev_state) /* CTRL-DEP */
          deactivate_task();
Since that already implies the 'prev->state' load must be complete
before allowing the 'prev->on_rq = 0' store to become visible.
Fixes: dbfb089d360b ("sched: Fix loadavg accounting race")
Reported-by: Jiri Slaby <jirislaby@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Tested-by: Christian Brauner <christian.brauner@ubuntu.com>
Change-Id: Iccb651f3757ed543e8f104bc16cded57674caf78
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The recent commit:
c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
moved these lines in ttwu():
  p->sched_contributes_to_load = !!task_contributes_to_load(p);
  p->state = TASK_WAKING;
up before:
  smp_cond_load_acquire(&p->on_cpu, !VAL);
into the 'p->on_rq == 0' block, with the thinking that once we hit
schedule() the current task cannot change its ->state anymore. And
while this is true, it is both incorrect and flawed.
It is incorrect in that we need at least an ACQUIRE on 'p->on_rq == 0'
to avoid weak hardware from re-ordering things for us. This can fairly
easily be achieved by relying on the control-dependency already in
place.
The second problem, which makes the flaw in the original argument, is
that while schedule() will not change prev->state, it will read it a
number of times (arguably too many times since it's marked volatile).
The previous condition 'p->on_cpu == 0' was sufficient because that
indicates schedule() has completed, and will no longer read
prev->state. So now the trick is to make this same true for the (much)
earlier 'prev->on_rq == 0' case.
Furthermore, in order to make the ordering stick, the 'prev->on_rq = 0'
assignment needs to be a RELEASE, but adding additional ordering to
schedule() is an unwelcome proposition at the best of times, doubly so
for mere accounting.
Luckily we can push the prev->state load up before rq->lock, with the
only caveat that we then have to re-read the state after. However, we
know that if it changed, we no longer have to worry about the blocking
path. This gives us the required ordering, if we block, we did the
prev->state load before an (effective) smp_mb() and the p->on_rq store
needs not change.
With this we end up with the effective ordering:
  LOAD p->state                     LOAD-ACQUIRE p->on_rq == 0
  MB
  STORE p->on_rq, 0                 STORE p->state, TASK_WAKING
which ensures the TASK_WAKING store happens after the prev->state
load, and all is well again.
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Dave Jones <davej@codemonkey.org.uk>
Tested-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Link: https://lkml.kernel.org/r/20200707102957.GN117543@hirez.programming.kicks-ass.net
Change-Id: Ica231076ceae2507f9a59d09e2b339133074b315
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Paul reported rcutorture occasionally hitting a NULL deref:
sched_ttwu_pending()
ttwu_do_wakeup()
check_preempt_curr() := check_preempt_wakeup()
find_matching_se()
is_same_group()
if (se->cfs_rq == pse->cfs_rq) <-- *BOOM*
Debugging showed that this only appears to happen when we take the new
code-path from commit:
2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")
and only when @cpu == smp_processor_id(). Something which should not
be possible, because p->on_cpu can only be true for remote tasks.
Similarly, without the new code-path from commit:
c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
this would've unconditionally hit:
smp_cond_load_acquire(&p->on_cpu, !VAL);
and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this
would result in an instant live-lock (with IRQs disabled), something
that hasn't been reported.
The NULL deref can be explained, however, if the task_cpu(p) load at the
beginning of try_to_wake_up() returns an old value, and this old value
happens to be smp_processor_id(). Further assume that the p->on_cpu
load accurately returns 1: it really is still running, just not here.
Then, when we enqueue the task locally, we can crash in exactly the
observed manner because p->se.cfs_rq != rq->cfs_rq: p's cfs_rq is from
the wrong CPU, so we'll iterate into the non-existent parents and NULL
deref.
The closest semi-plausible scenario I've managed to contrive is
somewhat elaborate (then again, actual reproduction takes many CPU
hours of rcutorture, so it can't be anything obvious):
X->cpu = 1
rq(1)->curr = X
CPU0                            CPU1                            CPU2
                                // switch away from X
                                LOCK rq(1)->lock
                                smp_mb__after_spinlock
                                dequeue_task(X)
                                X->on_rq = 9
                                switch_to(Z)
                                X->on_cpu = 0
                                UNLOCK rq(1)->lock
                                                                // migrate X to cpu 0
                                                                LOCK rq(1)->lock
                                                                dequeue_task(X)
                                                                set_task_cpu(X, 0)
                                                                X->cpu = 0
                                                                UNLOCK rq(1)->lock
                                                                LOCK rq(0)->lock
                                                                enqueue_task(X)
                                                                X->on_rq = 1
                                                                UNLOCK rq(0)->lock
// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
X->on_cpu = 1
UNLOCK rq(0)->lock
// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb();                       // wake X
                                ttwu()
                                LOCK X->pi_lock
                                smp_mb__after_spinlock
                                if (p->state)
                                  cpu = X->cpu; // =? 1
                                smp_rmb()
// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
X->on_rq = 0
                                if (p->on_rq)
                                smp_rmb();
                                if (p->on_cpu && ttwu_queue_wakelist(..)) [*]
                                smp_cond_load_acquire(&p->on_cpu, !VAL)
                                cpu = select_task_rq(X, X->wake_cpu, ...)
                                if (X->cpu != cpu)
switch_to(Y)
X->on_cpu = 0
UNLOCK rq(0)->lock
However I'm having trouble convincing myself that's actually possible
on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu
observes ->state != RUNNING, it must also observe ->cpu != 1.
(Most of the previous ttwu() races were found on very large PowerPC)
Nevertheless, this fully explains the observed failure case.
Fix it by ordering the task_cpu(p) load after the p->on_cpu load,
which is easy since nothing actually uses @cpu before this.
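For illustration, a hedged sketch of the resulting read order in
try_to_wake_up() (helper names as in mainline around that time; the point
is only that task_cpu(p) is sampled after p->on_cpu):
if (READ_ONCE(p->on_rq) && ttwu_remote(p, wake_flags))
        goto unlock;
smp_rmb();
if (smp_load_acquire(&p->on_cpu) &&
    ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
        goto unlock;
smp_cond_load_acquire(&p->on_cpu, !VAL);
cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags);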
Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")
Reported-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20200622125649.GC576871@hirez.programming.kicks-ass.net
Change-Id: Idd54334615da4c78698ca8b3b12b514ae9d8360f
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Just like the ttwu_queue_remote() IPI, make use of _TIF_POLLING_NRFLAG
to avoid sending IPIs to idle CPUs.
[ mingo: Fix UP build bug. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20200526161907.953304789@infradead.org
Change-Id: Ic3d00f973db6962613740f1d4cfb0f09464b697a
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The previous commit:
c6e7bd7afaeb: ("sched/core: Optimize ttwu() spinning on p->on_cpu")
avoids spinning on p->on_rq when the task is descheduling, but only if the
wakee is on a CPU that does not share cache with the waker.
This patch offloads the activation of the wakee to the CPU that is about to
go idle if the task is the only one on the runqueue. This potentially allows
the waker task to continue making progress when the wakeup is not strictly
synchronous.
This is very obvious with netperf UDP_STREAM running on localhost. The
waker is sending packets as quickly as possible without waiting for any
reply. It frequently wakes the server for the processing of packets and
when netserver is using local memory, it quickly completes the processing
and goes back to idle. The waker often observes that netserver is on_rq
and spins excessively leading to a drop in throughput.
This is a comparison of 5.7-rc6 against "sched: Optimize ttwu() spinning
on p->on_cpu" and against this patch labeled vanilla, optttwu-v1r1 and
localwakelist-v1r2 respectively.
5.7.0-rc6 5.7.0-rc6 5.7.0-rc6
vanilla optttwu-v1r1 localwakelist-v1r2
Hmean send-64 251.49 ( 0.00%) 258.05 * 2.61%* 305.59 * 21.51%*
Hmean send-128 497.86 ( 0.00%) 519.89 * 4.43%* 600.25 * 20.57%*
Hmean send-256 944.90 ( 0.00%) 997.45 * 5.56%* 1140.19 * 20.67%*
Hmean send-1024 3779.03 ( 0.00%) 3859.18 * 2.12%* 4518.19 * 19.56%*
Hmean send-2048 7030.81 ( 0.00%) 7315.99 * 4.06%* 8683.01 * 23.50%*
Hmean send-3312 10847.44 ( 0.00%) 11149.43 * 2.78%* 12896.71 * 18.89%*
Hmean send-4096 13436.19 ( 0.00%) 13614.09 ( 1.32%) 15041.09 * 11.94%*
Hmean send-8192 22624.49 ( 0.00%) 23265.32 * 2.83%* 24534.96 * 8.44%*
Hmean send-16384 34441.87 ( 0.00%) 36457.15 * 5.85%* 35986.21 * 4.48%*
Note that this benefit is not universal to all wakeups, it only applies
to the case where the waker often spins on p->on_rq.
The impact can be seen from a "perf sched latency" report generated from
a single iteration of one packet size:
-----------------------------------------------------------------------------------------------------------------
Task | Runtime ms | Switches | Average delay ms | Maximum delay ms | Maximum delay at |
-----------------------------------------------------------------------------------------------------------------
vanilla
netperf:4337 | 21709.193 ms | 2932 | avg: 0.002 ms | max: 0.041 ms | max at: 112.154512 s
netserver:4338 | 14629.459 ms | 5146990 | avg: 0.001 ms | max: 1615.864 ms | max at: 140.134496 s
localwakelist-v1r2
netperf:4339 | 29789.717 ms | 2460 | avg: 0.002 ms | max: 0.059 ms | max at: 138.205389 s
netserver:4340 | 18858.767 ms | 7279005 | avg: 0.001 ms | max: 0.362 ms | max at: 135.709683 s
-----------------------------------------------------------------------------------------------------------------
Note that the average wakeup delay is quite small on both the vanilla
kernel and with the two patches applied. However, there are significant
outliers with the vanilla kernel with the maximum one measured as 1615
milliseconds with a vanilla kernel but never worse than 0.362 ms with
both patches applied and a much higher rate of context switching.
Similarly a separate profile of cycles showed that 2.83% of all cycles
were spent in try_to_wake_up() with almost half of the cycles spent
on spinning on p->on_rq. With the two patches, the percentage of cycles
spent in try_to_wake_up() drops to 1.13%.
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jirka Hladky <jhladky@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: valentin.schneider@arm.com
Cc: Hillf Danton <hdanton@sina.com>
Cc: Rik van Riel <riel@surriel.com>
Link: https://lore.kernel.org/r/20200524202956.27665-3-mgorman@techsingularity.net
Change-Id: I680c08132d1789995749a778f350bd77dee422dc
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Both Rik and Mel reported seeing ttwu() spend significant time on:
smp_cond_load_acquire(&p->on_cpu, !VAL);
Attempt to avoid this by queueing the wakeup on the CPU that owns the
p->on_cpu value. This will then allow the ttwu() to complete without
further waiting.
Since we run schedule() with interrupts disabled, the IPI is
guaranteed to happen after p->on_cpu is cleared, this is what makes it
safe to queue early.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Jirka Hladky <jhladky@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: valentin.schneider@arm.com
Cc: Hillf Danton <hdanton@sina.com>
Cc: Rik van Riel <riel@surriel.com>
Link: https://lore.kernel.org/r/20200524202956.27665-2-mgorman@techsingularity.net
Change-Id: I5787935224793e065de57cf9763e08a5e6d40979
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
John reported a DEBUG_PREEMPT warning caused by commit:
aacedf26fb76 ("sched/core: Optimize try_to_wake_up() for local wakeups")
I overlooked that ttwu_stat() requires preemption disabled.
Reported-by: John Stultz <john.stultz@linaro.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: aacedf26fb76 ("sched/core: Optimize try_to_wake_up() for local wakeups")
Link: https://lkml.kernel.org/r/20190710105736.GK3402@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I385c5e0796c36e35761cc4658edd3c00ac908bfb
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Jens reported that significant performance can be had on some block
workloads by special casing local wakeups. That is, wakeups on the
current task before it schedules out.
Given something like the normal wait pattern:
for (;;) {
        set_current_state(TASK_UNINTERRUPTIBLE);
        if (cond)
                break;
        schedule();
}
__set_current_state(TASK_RUNNING);
Any wakeup (on this CPU) after set_current_state() and before
schedule() would benefit from this.
Normal wakeups take p->pi_lock, which serializes wakeups to the same
task. By eliding that we gain concurrency on:
- ttwu_stat(); we already had concurrency on rq stats, this now also
brings it to task stats. -ENOCARE
- tracepoints; it is now possible to get multiple instances of
trace_sched_waking() (and possibly trace_sched_wakeup()) for the
same task. Tracers will have to learn to cope.
Furthermore, p->pi_lock is used by set_special_state(), to order
against TASK_RUNNING stores from other CPUs. But since this is
strictly CPU local, we don't need the lock, and set_special_state()'s
disabling of IRQs is sufficient.
After the normal wakeup takes p->pi_lock it issues
smp_mb__after_spinlock(), in order to ensure the woken task must
observe prior stores before we observe the p->state. If this is CPU
local, this will be satisfied with a compiler barrier, and we rely on
try_to_wake_up() being a function call, which implies such.
Since, when 'p == current', 'p->on_rq' must be true, the normal wakeup
would continue into the ttwu_remote() branch, which normally is
concerned with exactly this wakeup scenario, except from a remote CPU.
IOW we're waking a task that is still running. In this case, we can
trivially avoid taking rq->lock, all that's left from this is to set
p->state.
This then yields an extremely simple and fast path for 'p == current'.
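A hedged sketch of that fast path (simplified; statistics and the success
bookkeeping are elided):
if (p == current) {
        if (!(p->state & state))
                goto out;
        trace_sched_waking(p);
        p->state = TASK_RUNNING;
        trace_sched_wakeup(p);
        goto out;
}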
Reported-by: Jens Axboe <axboe@kernel.dk>
Tested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Qian Cai <cai@lca.pw>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: gkohli@codeaurora.org
Cc: hch@lst.de
Cc: oleg@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I603279a68529f271e74b2bd123fba2ccb09ee0d3
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Previously we used the pure CFS wakeup in the overutilized case. This is a
tweaked version to activate the path only for important tasks.
Bug: 161190988
Bug: 160883639
Test: boot and systrace
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I2a27f241b3ba32a04cf6f88deb483d6636440dcf
With the scheduler placement hint, there could still be several boosted
tasks contending for big cores. On chipsets with fewer big cores, this
might cause problems like jank. To improve it, schedule tasks of prio
>= DEFAULT_PRIO on little cores if they fit there, even for tasks that
prefer high-capacity cpus, since such a prio means they are less
important.
Bug: 158936596
Test: tasks scheduled as expected
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: Ic0cc06461818944e3e97ec0493c0d9c9f1a5e217
[backported to 4.14]
Signed-off-by: Volodymyr Zhdanov <wight554@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Refine some changes from AU90. One is to allow a boosted task to run on a
min capacity cpu if it fits there. The other is to check the fast exit for
prefer-idle tasks first.
Bug: 128477368
Bug: 130576120
Test: task rq selection behavior is as expected
Change-Id: Ied57b37a361ed137d10167f0346f52a149d08cd6
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Currently iowait doesn't distinguish background/foreground tasks, and we
have seen cases where a device runs at high frequency unnecessarily when
running some background I/O. This patch limits the iowait boost to tasks
with prefer_idle only. Specifically, on Pixel, those are foreground and
top-app tasks.
Bug: 130308826
Test: Boot and trace
Change-Id: I2d892beeb4b12b7e8f0fb2848c23982148648a10
Signed-off-by: Wei Wang <wvw@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Previously we skipped the util check on an idle cpu if the task prefers
idle, but we still need to make sure the task fits on that cpu after
considering the capacity margin (on little cores only).
Bug: 147785606
Test: cpu skipped as expected
Signed-off-by: Rick Yiu <rickyiu@google.com>
Change-Id: I7c85768ceda94b44052c7c9428fd50088268edad
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Current cpu util includes the util of runnable tasks plus the recent
utilization of currently non-runnable tasks, so it may return a non-zero
value even when there is no task running on a cpu. When the scheduler is
selecting a cpu for a task, it checks whether the cpu util is over the
cpu's capacity, so it may skip a cpu even though it is idle. Let the
scheduler skip the util check if the task prefers an idle cpu and the cpu
is idle.
Bug: 133284637
Test: cpu selected as expected
Change-Id: I2c15d6b79b1cc83c72e84add70962a8e74c178b8
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Mid capacity cpu was first introduced on kernel 4.14 for floral.
However, other 4.14 platforms may not have mid capacity cpu, such
as sunfish. So, we need to check if it exists before using it.
Bug: 142551658
Test: boot to home
Change-Id: I9b7f5b94b337167b9790def4953854baab96eaa2
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Causes failures on kernel_test so moving the declaration up.
Bug: 123939451
Test: manual - build/build_test.sh
Change-Id: I3865f406ad2363c6a968193052fe421956d94065
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Important threads can get forced to little CPUs when the sync or
prev_bias hints are followed blindly. This patch adds a check to see
whether those paths are forcing the task to a cpu that has less capacity
than other cpus available for the task. If so, we ignore the sync and
prev_bias hints and allow the scheduler to make a free decision.
Bug: 117438867
Change-Id: Ie5a99f9a8b65ba9382a8d0de2ae0aad843e558d1
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
If we ever wanted to stop big tasks from being balanced out to the smaller
cluster, that was already done by checking the tasks in can_migrate_task()
with schedtune.
This commit, however, introduced a bug where light tasks, including
non-big tasks, were not balanced out to the smaller cluster, resulting in
the big cluster occasionally being overloaded.
This reverts commit d89e049987.
Change-Id: I72bbd526c8b4e0aabb7533cbdb5141593016dbb5
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This task migration logic is guarded by a WALT #ifdef even though it has
nothing specific to do with WALT. The result is that, with PELT, boosted
tasks can be migrated to the little cluster, causing visible stutters.
Move the WALT #ifdef so PELT can benefit from this logic as well.
Thanks to Zachariah Kennedy <zkennedy87@gmail.com> and Danny Lin
<danny@kdrag0n.dev> for discovering this issue and creating this fix.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I95aec52f456977b61e7a5b4dc6fed01184c86b09
This improves scheduler decision accuracy, specifically on load balancing
and EAS placement.
Change-Id: I21733de68d9796e2c69605102e0fb5ea60e742b5
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
These pieces of code are only useful for WALT.
Change-Id: I520684e747067deb95fa94ac2cbee607c6fbf482
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
cpu_capacity() returns the maximum capacity when WALT is disabled, hence
we couldn't take advantage of CAF's optimization.
Return the CPU's original capacity instead to make it usable.
Change-Id: I524f9f1872f038c0b77ba404b1caf0ce75321dd8
Signed-off-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
When WALT is disabled, do_pl_notif() always returns false, in which case
this bit of code serves no purpose. As a result, this WALT-specific code
spins on acquiring the rq lock in a hot path, wasting CPU time. Compile
it out when WALT is disabled to eliminate the unnecessary overhead.
Change-Id: I94ec3f2ce0faad049c0bf4974b2b4442883311a4
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
CONFIG_LOCK_STAT shows warnings in move_queued_task() for releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them if
not building with SCHED_WALT.
Bug: 123720375
Change-Id: I8bff8550c4f79ca535556f6ec626f17ff5fce637
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
CONFIG_LOCK_STAT shows warnings in detach_task() for releasing a
pinned lock. The warnings are due to the calls to
double_unlock_balance() added to snapshot WALT. Let's disable them if
not building with SCHED_WALT.
Bug: 123720375
Change-Id: Ibfa28b1434fa6006fa0117fd2df1a3eadb321568
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Currently, when calculating boosted util for a cpu, a fixed value of 1024
is used. So when top-app tasks are moved to LC, which has much lower
capacity than BC, the calculated freq will be high even when the cpu util
is low. This results in higher power consumption, especially on an arch
which has more little cores than big cores. Replacing the fixed value of
1024 with the actual cpu capacity reduces the freq calculated on LC.
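Roughly, the margin calculation becomes the following (a hedged sketch,
not the exact schedtune code):
margin  = capacity_orig_of(cpu) - util; /* was: SCHED_CAPACITY_SCALE - util */
margin *= boost;
margin /= 100;
boosted_util = util + margin;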
Bug: 152925197
Test: boosted util reduced on little cores
Signed-off-by: Rick Yiu <rickyiu@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I80cdd08a2c7fa5e674c43bfc132584d85c14622b
PELT doesn't account for real-time task utilization in cpu_util(). As a
result, a CPU busy running an RT task is considered low-utilization by
the scheduler. Fix this by taking the real-time load into account.
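A hedged sketch of the idea; the helper name is illustrative and the RT
signal depends on the PELT variant in this tree:
static unsigned long cpu_util_cfs_plus_rt(int cpu)
{
        unsigned long util = cpu_util(cpu);
        /* add the rq's RT utilization, if the PELT RT signal exists */
        util += READ_ONCE(cpu_rq(cpu)->avg_rt.util_avg);
        return min(util, capacity_orig_of(cpu));
}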
Bug: 147385228
Test: boot to home and run audio test
Change-Id: Ie4412b186608b9a618f0d35cee9a7310db481f7c
Signed-off-by: Kyle Lin <kylelin@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Commit 20017f3383 ("sched/fair: Only kick nohz balance when runqueue
has more than 1 task") disabled the nohz kick for LB when a rq has a
misfit task. The assumption is that this would be addressed in the
forced up-migration path. However, this path is WALT-specific, so
disabling the nohz kick breaks PELT.
Fix it by re-enabling the nohz_kick when there is a misfit task on the
rq.
Bug: 143472450
Test: 10/10 iterations of eas_small_to_big ended up up-migrating
Fixes: 20017f3383 ("sched/fair: Only kick nohz balance when runqueue
has more than 1 task")
Signed-off-by: Quentin Perret <qperret@google.com>
Change-Id: I9f708eb7661a9e82afdd4e99b878995c33703a45
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
update_cpu_capacity() will update cpu_capacity_orig capped with
thermal_cap; in the non-WALT case, thermal_cap is the previous
cpu_capacity_orig. This caused cpu_capacity_orig to be capped
incorrectly.
Test: Build
Bug: 144143594
Change-Id: I1ff9d9c87554c2d2395d46b215276b7ab50585c0
Signed-off-by: Wei Wang <wvw@google.com>
(cherry picked from commit dac65a5a494f8d0c80101acc5d482d94cda6f158)
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
None of these functions does what its name implies when
CONFIG_SCHED_WALT=n. While all are currently unused, future patches
could introduce subtle bugs by calling any of them from non WALT
specific code. Delete the functions so it's obvious if new callers are
added.
Test: build kernel
Change-Id: Ib7552afb5668b48fe2ae56307016e98716e00e63
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
With CONFIG_SCHED_WALT disabled, is_min_capacity_cpu() is defined to
always return true, which breaks the intended behavior of
task_fits_max(). Revise is_min_capacity_cpu() to return correct
results.
An earlier version of this patch failed to handle the case when
min_cap_orig_cpu == -1 while sched domains are being updated due to
hotplug. Add a check for this case.
Test: trace shows increased top-app placement on medium cores
Bug: 117499098
Bug: 128477368
Bug: 130756111
Change-Id: Ia2b41aa7c57f071c997bcd0e9cdfd0808f6a2bf9
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The estimated utilization for a task:
util_est = max(util_avg, est.enqueued, est.ewma)
is defined based on:
- util_avg: the PELT defined utilization
- est.enqueued: the util_avg at the end of the last activation
- est.ewma: an exponential moving average on the est.enqueued
samples
According to this definition, when a task suddenly changes its bandwidth
requirements from small to big, the EWMA will need to collect multiple
samples before converging up to track the new big utilization.
This slow convergence towards bigger utilization values is not
aligned with the default scheduler behavior, which is to optimize for
performance. Moreover, the est.ewma component fails to compensate for
temporary utilization drops which span just a few est.enqueued samples.
To let util_est do a better job in the scenario depicted above, change
its definition by making util_est directly follow upward motion and
only decay est.ewma on the downward side.
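In pseudo-C the new filter is roughly (a hedged sketch; the weight shift
matches the mainline util_est code):
if (enqueued >= ewma) {
        /* follow utilization increases immediately */
        ewma = enqueued;
} else {
        /* decay only on the way down: ewma -= (ewma - enqueued) / 4 */
        ewma -= (ewma - enqueued) >> UTIL_EST_WEIGHT_SHIFT;
}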
Signed-off-by: Patrick Bellasi <patrick.bellasi@matbug.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
(am from https://lkml.org/lkml/2019/10/23/1071)
Change-Id: Ifbde836af2e903815904b1dbf44c782b7b66f9ce
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Because mark_reserved() is used for WALT and is called by load_balance(),
it leads to build breakage when WALT is disabled. Execute the function
only if CONFIG_SCHED_WALT is enabled.
Bug: 144142283
Test: Build and boot to home
Change-Id: I5cc3e3ece6a28c6cdabbe6964f6a6032ff2ea809
Signed-off-by: Kyle Lin <kylelin@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: If25bddeb70670d0fcaf93088ebf55ab3dc80b4e3
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I65d0d1ae7b633969a88e20a39750fff6279db460
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The RT_RUNTIME_SHARE sched feature enables the sharing of rt_runtime
between CPUs, allowing a CPU to run a real-time task up to 100% of the
time while leaving more space for non-real-time tasks to run on the CPUs
that lent rt_runtime.
The problem is that a CPU can easily borrow enough rt_runtime to allow
a spinning rt-task to run forever, starving per-cpu tasks like kworkers,
which are non-real-time by design.
This patch disables RT_RUNTIME_SHARE by default, avoiding this problem.
The feature will still be present for users that want to enable it,
though.
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Wei Wang <wvw@google.com>
Link: https://lkml.kernel.org/r/b776ab46817e3db5d8ef79175fa0d71073c051c7.1600697903.git.bristot@redhat.com
(cherry picked from commit 2586af1ac187f6b3a50930a4e33497074e81762d)
Change-Id: Ibb1b185d512130783ac9f0a29f0e20e9828c86fd
Bug: 169673278
Test: build, boot and check the trace with RT task
Signed-off-by: Kyle Lin <kylelin@google.com>
Change-Id: Iffede8107863b02ad4a0cb902fc8119416931bdb
This reverts commit 6f58caae21.
It's not present in newer CAF kernels and Google removed it on their
4.14 devices as well.
Change-Id: I3675cbfe4a37ae9ed31bf3659a545965a0d59c6f
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The previous definitions, as well as the creation of this, are locked
behind CONFIG_ZRAM_LRU_WRITEBACK as well.
Change-Id: I869b5595f69cc481e93ca6862b460594762d9b25
drivers/cpuidle/lpm-levels.o: In function `lpm_suspend_prepare':
/home/risen/android/ascendia/out/../drivers/cpuidle/lpm-levels.c:1750: undefined reference to `debug_masterstats_show'
/home/risen/android/ascendia/out/../drivers/cpuidle/lpm-levels.c:1751: undefined reference to `debug_rpmstats_show'
drivers/cpuidle/lpm-levels.o: In function `lpm_suspend_wake':
/home/risen/android/ascendia/out/../drivers/cpuidle/lpm-levels.c:1773: undefined reference to `debug_rpmstats_show'
/home/risen/android/ascendia/out/../drivers/cpuidle/lpm-levels.c:1774: undefined reference to `debug_masterstats_show'
make[1]: *** [/home/risen/android/ascendia/Makefile:1190: vmlinux] Error 1
- Return proper values when write wrappers aren't bypassed
- Revise Kconfig description
- Improve overall code style
- Don't write colocate and sched_boost_no_override values when WALT is
disabled
- Mark static data as static
- Improve readability of log messages
- Propagate cftype struct in write wrappers
- Use task_is_booster helper rather than hard-coded "init" check
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
[0ctobot: Squash kdrag0n/proton_zf6@12d005c with
kdrag0n/proton_zf6@eb73f2f]
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
This implements a mechanism by which default SchedTune parameters
can be configured in-kernel, circumventing userspace, and
mitigating reliance on ramdisk modification in the context of
custom kernels.
[2.5V]: This version adds proper protection
from userspace (mainly init) trying to write lame
boost values and gives full control to developer
and user (sh is not blocked).
[V3.0]: Use a struct to store all the values.
[0ctobot: Update for msm-4.9 and improve coding style]
[YaroST12: Update for msm-4.14]
Co-authored-by: Adam W. Willis <return.of.octobot@gmail.com>
Co-authored-by: Yaroslav Furman <yaro330@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Change-Id: I70b676014d580b7df0f2962a989579376e261d49
It is wasteful to do a tight loop on every packet add. With the hotspot
turned on, I noticed that the htt_htc_misc_pkt_list_trim() function
consumes at least 5% of CPU time. Cache the head of the packet queue and
free multiple packets at once to reduce CPU consumption.
Signed-off-by: Julian Liu <wlootlxt123@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
POPP constantly attempts to lower the GPU's frequency behind the
governor's back in order to save power; however, the GPU governor in use
(msm-adreno-tz) is very good at determining the GPU's load and selecting
an appropriate frequency to run the GPU at.
POPP was created long ago, perhaps when msm-adreno-tz didn't exist or
didn't work so well, so it is clearly deprecated. Remove it.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Currently, the kgsl worker thread is erroneously ranked right below
Android's audio threads in terms of priority.
The kgsl worker thread is in the critical path for rendering frames to
the display, so increase its priority to match the priority of the
display commit threads.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
In order to prevent redundant entry creation by racing against itself,
mb_cache_entry_create scans through a large hash-list of all current
entries in order to see if another allocation for the requested new
entry has been made. Furthermore, it allocates memory for a new entry
before scanning through this hash-list, which results in that allocated
memory being discarded when the requested new entry is already present.
This happens more than half the time.
Speed up cache entry creation by keeping a small linked list of
requested new entries in progress, and scanning through that first
instead of the large hash-list. Additionally, don't bother allocating
memory for a new entry until it's known that the allocated memory will
be used.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
There is noticeable scheduling latency and heavy zone lock contention
stemming from rmqueue_bulk's single hold of the zone lock while doing
its work, as seen with the preemptoff tracer. There's no actual need for
rmqueue_bulk() to hold the zone lock the entire time; it only does so
for supposed efficiency. As such, we can relax the zone lock and even
reschedule when IRQs are enabled in order to keep the scheduling delays
and zone lock contention at bay. Forward progress is still guaranteed,
as the zone lock can only be relaxed after page removal.
With this change, rmqueue_bulk() no longer appears as a serious offender
in the preemptoff tracer, and system latency is noticeably improved.
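A hedged sketch of the approach; the batch size and placement are
illustrative, not the actual patch:
spin_lock(&zone->lock);
for (i = 0; i < count; ++i) {
        struct page *page = __rmqueue(zone, order, migratetype);
        if (!page)
                break;
        list_add_tail(&page->lru, list);
        /* relax the lock periodically; a page was already removed */
        if (i && !(i % 8)) {
                spin_unlock(&zone->lock);
                if (!irqs_disabled())
                        cond_resched();
                spin_lock(&zone->lock);
        }
}
spin_unlock(&zone->lock);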
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Boost the DDR bus for 58 ms when requested in order to improve jitter.
The 3879 frequency step was determined empirically to be the minimum
needed to sustain acceptably low jitter in UIBench.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This driver boosts enumerated devfreq devices upon input, and allows for
boosting specific devfreq devices on other custom events. The boost
frequencies for this driver should be set so that frame drops are
near-zero at the boosted frequencies and power consumption is minimized
at said frequencies. The goal of this driver is to provide an interface
to achieve optimal device performance by requesting boosts on key
events, such as when a frame is ready to rendered to the display.
Currently, support is only present for boosting the cpu-llcc-ddr-bw
devfreq device, but the driver is structured in a way that makes it
easy to add support for new boostable devfreq devices in the future.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
An IRQ affinity notifier getting overwritten can point to some annoying
issues which need to be resolved, like multiple pm_qos objects being
registered to the same IRQ. Print out a warning when this happens to aid
debugging.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
On ARM, IRQs are executed on the first CPU inside the affinity mask, so
setting an affinity mask with more than one CPU set is deceptive and
causes issues with pm_qos. To fix this, only set the CPU0 bit inside the
affinity mask, since that's where IRQs will run by default.
This is a follow-up to "kernel: Don't allow IRQ affinity masks to have
more than one CPU".
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Even with an affinity mask that has multiple CPUs set, IRQs always run
on the first CPU in their affinity mask. Drivers that register an IRQ
affinity notifier (such as pm_qos) will therefore have an incorrect
assumption of where an IRQ is affined.
Fix the IRQ affinity mask deception by forcing it to only contain one
set CPU.
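A minimal sketch of the idea (the helper name is illustrative):
static void irq_restrict_affinity(struct cpumask *mask)
{
        unsigned int first = cpumask_first(mask);
        cpumask_clear(mask);
        cpumask_set_cpu(first, mask);
}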
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Giving userspace intimate control over CPU latency requirements is
nonsense. Userspace can't even stop itself from being preempted, so
there's no reason for it to have access to a mechanism primarily used to
eliminate CPU delays on the order of microseconds.
Remove userspace's ability to send pm_qos requests so that it can't hurt
power consumption.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
It isn't guaranteed a CPU will idle upon calling lpm_cpuidle_enter(),
since it could abort early at the need_resched() check. In this case,
it's possible for an IPI to be sent to this "idle" CPU needlessly, thus
wasting power. For the same reason, it's also wasteful to keep a CPU
marked idle even after it's woken up.
Reduce the window that CPUs are marked idle to as small as it can be in
order to improve power consumption.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
The pm_qos callback currently suffers from a number of pitfalls: it
sends IPIs to CPUs that may not be idle, waits for those IPIs to finish
propagating while preemption is disabled (resulting in a long busy wait
for the pm_qos_update_target() caller), and needlessly calls a no-op
function when the IPIs are processed.
Optimize the pm_qos notifier by only sending IPIs to CPUs that are
idle, and by using arch_send_wakeup_ipi_mask() instead of
smp_call_function_many(). Using IPI_WAKEUP instead of IPI_CALL_FUNC,
which is what smp_call_function_many() uses behind the scenes, has the
benefit of doing zero work upon receipt of the IPI; IPI_WAKEUP is
designed purely for sending an IPI without a payload, whereas
IPI_CALL_FUNC does unwanted extra work just to run the empty
smp_callback() function.
Determining which CPUs are idle is done efficiently with an atomic
bitmask instead of using the wake_up_if_idle() API, which checks the
CPU's runqueue in an RCU read-side critical section and under a spin
lock. Not very efficient in comparison to a simple, atomic bitwise
operation. A cpumask isn't needed for this because NR_CPUS is
guaranteed to fit within a word.
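A hedged sketch of the bookkeeping; names are illustrative and it assumes
NR_CPUS <= BITS_PER_LONG as stated above:
static unsigned long idle_cpu_bits;
void lpm_mark_cpu_idle(int cpu, bool idle)
{
        if (idle)
                set_bit(cpu, &idle_cpu_bits);   /* atomic bitop */
        else
                clear_bit(cpu, &idle_cpu_bits);
}
static void wake_affected_idle_cpus(const struct cpumask *affected)
{
        struct cpumask mask;
        cpumask_and(&mask, affected, to_cpumask(&idle_cpu_bits));
        if (!cpumask_empty(&mask))
                arch_send_wakeup_ipi_mask(&mask);
}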
Change-Id: Ic4dd7e4781172bb8e3b6eb13417a814256d44cf0
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
An empty IPI is useful for cpuidle to wake sleeping CPUs without causing
them to do unnecessary work upon receipt of the IPI. IPI_WAKEUP fills
this use-case nicely, so let it be used outside of the ACPI parking
protocol.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
This allows pm_qos votes of, say, 100 us to select power levels with
exit latencies equal to 100 us. The extra microsecond of
exit latency doesn't hurt.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Currently most of the assembly files that use architecture extensions
enable them using the .arch directive but crc32.S uses .cpu instead. Move
that over to .arch for consistency.
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20200414182843.31664-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
The upcoming GCC 9 release extends the -Wmissing-attributes warnings
(enabled by -Wall) to C and aliases: it warns when particular function
attributes are missing in the aliases but not in their target.
In particular, it triggers here because crc32_le_base/__crc32c_le_base
aren't __pure while their target crc32_le/__crc32c_le are.
These aliases are used by architectures as a fallback in accelerated
versions of CRC32. See commit 9784d82db3eb ("lib/crc32: make core crc32()
routines weak so they can be overridden").
Therefore, being fallbacks, it is likely that even if the aliases
were called from C, there wouldn't be any optimizations possible.
Currently, the only user is arm64, which calls this from asm.
Still, marking the aliases as __pure makes sense and is a good idea
for documentation purposes and possible future optimizations,
which also silences the warning.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Allow architectures to drop in accelerated CRC32 routines by making
the crc32_le/__crc32c_le entry points weak, and exposing non-weak
aliases for them that may be used by the accelerated versions as
fallbacks in case the instructions they rely upon are not available.
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Improve the performance of the crc32() asm routines by getting rid of
most of the branches and small sized loads on the common path.
Instead, use a branchless code path involving overlapping 16 byte
loads to process the first (length % 32) bytes, and process the
remainder using a loop that processes 32 bytes at a time.
Tested using the following test program:
#include <stdlib.h>
extern void crc32_le(unsigned short, char const*, int);
int main(void)
{
        static const char buf[4096];
        srand(20181126);
        for (int i = 0; i < 100 * 1000 * 1000; i++)
                crc32_le(0, buf, rand() % 1024);
        return 0;
}
On Cortex-A53 and Cortex-A57, the performance regresses but only very
slightly. On Cortex-A72 however, the performance improves from
$ time ./crc32
real 0m10.149s
user 0m10.149s
sys 0m0.000s
to
$ time ./crc32
real 0m7.915s
user 0m7.915s
sys 0m0.000s
Cc: Rui Sun <sunrui26@huawei.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Unlike crc32c(), which is wired up to the crypto API internally so the
optimal driver is selected based on the platform's capabilities,
crc32_le() is implemented as a library function using a slice-by-8 table
based C implementation. Even though few of the call sites may be
bottlenecks, calling a time variant implementation with a non-negligible
D-cache footprint is a bit of a waste, given that ARMv8.1 and up mandates
support for the CRC32 instructions that were optional in ARMv8.0, but are
already widely available, even on the Cortex-A53 based Raspberry Pi.
So implement routines that use these instructions if available, and fall
back to the existing generic routines otherwise. The selection is based
on alternatives patching.
Note that this unconditionally selects CONFIG_CRC32 as a builtin. Since
CRC32 is relied upon by core functionality such as CONFIG_OF_FLATTREE,
this just codifies the status quo.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
The current delay implementation uses the yield instruction, which is a
hint that it is beneficial to schedule another thread. As this is a hint,
it may be implemented as a NOP, causing all delays to be busy loops. This
is the case for many existing CPUs.
Taking advantage of the generic timer sending periodic events to all
cores, we can use WFE during delays to reduce power consumption. This is
beneficial only for delays longer than the period of the timer event
stream.
If timer event stream is not enabled, delays will behave as yield/busy
loops.
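A hedged sketch of the resulting delay loop (not the actual arm64
__delay() code; whether the event stream is enabled is assumed to be
tracked by the arch timer code):
cycles_t start = get_cycles();
if (event_stream_enabled) {
        /* each wfe returns at least once per event-stream period */
        while ((get_cycles() - start) < cycles)
                wfe();
} else {
        while ((get_cycles() - start) < cycles)
                cpu_relax();
}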
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The arch timer configuration for a CPU might get reset after suspending
said CPU.
In order to reliably use the event stream in the kernel (e.g. for delays),
we keep track of the state where we can safely consider the event stream as
properly configured. After writing to cntkctl, we issue an ISB to ensure
that subsequent delay loops can rely on the event stream being enabled.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arm64 provides an always-working implementation of
futex_atomic_cmpxchg_inatomic(), so there is no need to check it at
runtime.
Change-Id: Id4b9ba07d979fddbdac9f2aaa5250b1487ff9042
Reported-by: Piyush swami <Piyush.swami@arm.com>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: atndko <z1281552865@gmail.com>
Signed-off-by: Panchajanya1999 <panchajanya@azure-dev.live>
It is probably safe to assume that all Armv8-A implementations have a
multiplier whose efficiency is comparable or better than a sequence of
three or so register-dependent arithmetic instructions. Select
ARCH_HAS_FAST_MULTIPLIER to get ever-so-slightly nicer codegen in the
few dusty old corners which care.
In a contrived benchmark calling hweight64() in a loop, this does indeed
turn out to be a small win overall, with no measurable impact on
Cortex-A57 but about 5% performance improvement on Cortex-A53.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Throwing our __uint128_t idioms at csum_ipv6_magic() makes it
about 1.3x-2x faster across a range of microarchitecture/compiler
combinations. Not much in absolute terms, but every little helps.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
In validating the checksumming results of the new routine, I sadly
neglected to test its not-checksumming results. Thus it slipped through
that the one case where @buff is already dword-aligned and @len = 0
manages to defeat the tail-masking logic and behave as if @len = 8.
For a zero length it doesn't make much sense to dereference @buff anyway,
so just add an early return (which has essentially zero impact on
performance).
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Apparently there exist certain workloads which rely heavily on software
checksumming, for which the generic do_csum() implementation becomes a
significant bottleneck. Therefore let's give arm64 its own optimised
version - for ease of maintenance this foregoes assembly or intrinsics,
and is thus not actually arm64-specific, but does rely heavily on C
idioms that translate well to the A64 ISA and the typical load/store
capabilities of most ARMv8 CPU cores.
The resulting increase in checksum throughput scales nicely with buffer
size, tending towards 4x for a small in-order core (Cortex-A53), and up
to 6x or more for an aggressive big core (Ampere eMAG).
Reported-by: Lingyan Huang <huanglingyan2@huawei.com>
Tested-by: Lingyan Huang <huanglingyan2@huawei.com>
Change-Id: I42f718428ee872541006b3932dc010dd3f8b0f28
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
A lot of CPU time is wasted on allocating, populating, and copying
debug names back and forth with userspace when they're not actually
needed. We can't just remove the name buffers from the various sync data
structures though because we must preserve ABI compatibility with
userspace, but instead we can just pretend the name fields of the
user-shared structs aren't there. This massively reduces the sizes of
memory allocated for these data structures and the amount of data passed
between userspace, as well as eliminates a kzalloc() entirely from
sync_file_ioctl_fence_info(), thus improving graphics performance.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
This kernel won't be used on devices with 7nm, 14nm, or 28nm PLLs nor
will it be used with the 10nm DisplayPort PLL since our only display is
connected via DSI.
Don't compile support for PLLs we won't use.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
[dereference23: Adapted for atoll]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This kernel will not be used on devices with other GPUs.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I205e625dc1520c8baef622bc6fd303f714c6f7d6
This unused debug print wastes CPU time when writing to registers,
resulting in perf top reporting a decent chunk of time spent inside
sde_reg_write(). Removing the debug print gets sde_reg_write() off perf
top's radar.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
These debug logs are everywhere and not only bloat the driver, but add
latency everywhere they're used because they're not compiled out. Since
they serve no purpose for us as we're not debugging SDE, compile them
out.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
A measurably significant amount of CPU time is spent in these routines
while the camera is open. These are also responsible for a grotesque
amount of dmesg spam, so let's nuke them.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
While Android userspace (e.g. storaged) does use iostats via
/proc/diskstats, init will explicitly enable iostats for the devices on
which it is primarily used - sda and sdf. Avoid the 0.5-1% overhead for
block devices that do not need it.
Signed-off-by: kdrag0n <dragon@khronodragon.com>
CRC errors on the SPI bus usually mean there is something wrong with the
hardware (unstable voltage, wiring, etc).
Disable SPI CRC in favor of improving performance, as the cost of
detecting hardware errors is too high and not all that useful.
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Change-Id: I5d7ef9dedbddf8d7f4c4911788051a7753eb67d8
Binder code is very hot, so checking frequently to see if a debug
message should be printed is a waste of cycles. We're not debugging
binder, so just stub out the debug prints to compile them out entirely.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Sometimes, it may be desirable to use CPU frequency tables different
from the ones in the hardware's OSM LUTs. This commit adds support for
overriding each CPU's frequency table with a list of allowed frequencies
defined in the OSM driver's DT node.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Joel Fernandes found that the synchronize_rcu_tasks() was taking a
significant amount of time. He demonstrated it with the following test:
# cd /sys/kernel/tracing
# while [ 1 ]; do x=1; done &
# echo '__schedule_bug:traceon' > set_ftrace_filter
# time echo '!__schedule_bug:traceon' > set_ftrace_filter;
real 0m1.064s
user 0m0.000s
sys 0m0.004s
Where it takes a little over a second to perform the synchronize,
because there's a loop that waits 1 second at a time for tasks to get
through their quiescent points when there's a task that must be waited
for.
After discussion we came up with a simple way to wait for holdouts but
increase the time for each iteration of the loop but no more than a
full second.
With the new patch we have:
# time echo '!__schedule_bug:traceon' > set_ftrace_filter;
real 0m0.131s
user 0m0.000s
sys 0m0.004s
Which drops it down to 13% of what the original wait time was.
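The backoff itself is simple; a hedged sketch (list handling and holdout
re-checking are elided, names illustrative):
int fract = 10;                         /* first wait is HZ/10 */
while (!list_empty(&rcu_tasks_holdouts)) {
        schedule_timeout_interruptible(HZ / fract);
        if (fract > 1)
                fract--;                /* back off towards a full 1 s wait */
        /* re-scan the holdout list here, as the kthread does */
}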
Link: http://lkml.kernel.org/r/20180523063815.198302-2-joel@joelfernandes.org
Reported-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Suggested-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: celtare21 <celtare21@gmail.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Make sure the compiler optimises away conditions that are always false
since commit b91319892e (cpufreq: schedutil: Don't jump to max frequency for RT tasks).
Change-Id: I7a108ff1a4ba09f2cb82ea8a82bd15967e724709
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Currently, the raw cache is reset when next_f is changed after
get_next_freq() for correctness. However, this may introduce more cycles
in those cases. This patch changes it to maintain the cached value
instead of dropping it.
Bug: 159936782
Bug: 158863204
Signed-off-by: Wei Wang <wvw@google.com>
[dereference23: Backport to 4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I519ca02dd2e6038e3966e1f68fee641628827c82
The cpufreq_schedutil governor keeps a cache of the last
raw frequency that was mapped to a supported device frequency.
If the next request for a frequency matches the cached
value, the policy's next_freq value is reused. But there
are paths that can update the raw cached value without
updating the next_freq value, and there are paths that
can set the next_freq value without setting the raw
cached value. On those paths, the cached value
must be reset.
The case that has been observed is when a frequency request
reaches sugov_update_commit but is then rejected by
the sugov_up_down_rate_limit check.
Bug: 116279565
Change-Id: I7c585339a04ff1732054d6e5b36a57e2d41266aa
Signed-off-by: John Dias <joaodias@google.com>
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Part of the fix from commit d86ab9cff8 ("cpufreq: schedutil: use now
as reference when aggregating shared policy requests") is reversed in
commit 05d2ca2420 ("cpufreq: schedutil: Ignore CPU load older than
WALT window size") due to a porting mistake. Restore it while keeping
the relevant change from the latter patch.
Bug: 117438867
Test: build & boot
Change-Id: I21399be760d7c8e2fff6c158368a285dc6261647
Signed-off-by: Connor O'Brien <connoro@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Bug: 120438505
Change-Id: I59e3675a320ce71c3c90be3904756b125300ba6b
Signed-off-by: Miguel de Dios <migueldedios@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This reverts commit b8b6f565c0.
CAF's hispeed boost and predicted load features aren't any good. Remove
them entirely to prevent userspace from trying to enable them
(specifically pl) and to reduce useless overhead in schedutil, since it
runs *very* often.
Change-Id: I0446b49a59e5dce8e1b7712bdb654c9a5e6ff0ed
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This is tuned to match energy model characteristics and scheduler
efficiency enhancements.
Change-Id: Ia60e1ea888457fa1c0c0273cdd4b0180f0a87abf
Co-authored-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Qualcomm's LLCC controller does not have an error IRQ line on lito and
instead polls to check memory banks for errors every 5 seconds, which is
inefficient and will add to system jitter.
The generic Kryo CPU cache controller does have error IRQ lines so it
doesn't need to use polling, but EDAC in general is fairly useless in
its current state anyway because Google disabled the option to panic on
uncorrectable error. Let's follow their decision and just disable EDAC
entirely, as well as its placeholder RAS dependency.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
__rmqueue(), __rmqueue_fallback(), __rmqueue_smallest() and
__rmqueue_cma_fallback() are all in page allocator's hot path and better
be finished as soon as possible. One way to make them faster is by making
them inline. But as Andrew Morton and Andi Kleen pointed out:
https://lkml.org/lkml/2017/10/10/1252
https://lkml.org/lkml/2017/10/10/1279
To make sure they are inlined, we should use __always_inline for them.
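The change itself amounts to swapping the attribute on each helper, e.g.
(signature shown for illustration only):
static __always_inline
struct page *__rmqueue(struct zone *zone, unsigned int order, int migratetype);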
With the will-it-scale/page_fault1/process benchmark, when using nr_cpu
processes to stress buddy, the results for will-it-scale.processes with
and without the patch are:
On a 2-sockets Intel-Skylake machine:
compiler base head
gcc-4.4.7 6496131 6911823 +6.4%
gcc-4.9.4 7225110 7731072 +7.0%
gcc-5.4.1 7054224 7688146 +9.0%
gcc-6.2.0 7059794 7651675 +8.4%
On a 4-sockets Intel-Skylake machine:
compiler base head
gcc-4.4.7 13162890 13508193 +2.6%
gcc-4.9.4 14997463 15484353 +3.2%
gcc-5.4.1 14708711 15449805 +5.0%
gcc-6.2.0 14574099 15349204 +5.3%
The above 4 compilers are used because I've done the tests through
Intel's Linux Kernel Performance(LKP) infrastructure and they are the
available compilers there.
The benefit being less on 4 sockets machine is due to the lock
contention there(perf-profile/native_queued_spin_lock_slowpath=81%) is
less severe than on the 2 sockets machine(85%).
What the benchmark does is: it forks nr_cpu processes and then each
process does the following:
1 mmap() 128M anonymous space;
2 writes to each page there to trigger actual page allocation;
3 munmap() it.
in a loop.
https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault1.c
Binary size wise, I have locally built them with different compilers:
[aaron@aaronlu obj]$ size */*/mm/page_alloc.o
text data bss dec hex filename
37409 9904 8524 55837 da1d gcc-4.9.4/base/mm/page_alloc.o
38273 9904 8524 56701 dd7d gcc-4.9.4/head/mm/page_alloc.o
37465 9840 8428 55733 d9b5 gcc-5.5.0/base/mm/page_alloc.o
38169 9840 8428 56437 dc75 gcc-5.5.0/head/mm/page_alloc.o
37573 9840 8428 55841 da21 gcc-6.4.0/base/mm/page_alloc.o
38261 9840 8428 56529 dcd1 gcc-6.4.0/head/mm/page_alloc.o
36863 9840 8428 55131 d75b gcc-7.2.0/base/mm/page_alloc.o
37711 9840 8428 55979 daab gcc-7.2.0/head/mm/page_alloc.o
Text size increased about 800 bytes for mm/page_alloc.o.
[aaron@aaronlu obj]$ size */*/vmlinux
text data bss dec hex filename
10342757 5903208 17723392 33969357 20654cd gcc-4.9.4/base/vmlinux
10342757 5903208 17723392 33969357 20654cd gcc-4.9.4/head/vmlinux
10332448 5836608 17715200 33884256 2050860 gcc-5.5.0/base/vmlinux
10332448 5836608 17715200 33884256 2050860 gcc-5.5.0/head/vmlinux
10094546 5836696 17715200 33646442 201676a gcc-6.4.0/base/vmlinux
10094546 5836696 17715200 33646442 201676a gcc-6.4.0/head/vmlinux
10018775 5828732 17715200 33562707 2002053 gcc-7.2.0/base/vmlinux
10018775 5828732 17715200 33562707 2002053 gcc-7.2.0/head/vmlinux
Text size for vmlinux has no change though, probably due to function
alignment.
Link: http://lkml.kernel.org/r/20171013063111.GA26032@intel.com
Signed-off-by: Aaron Lu <aaron.lu@intel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Kemi Wang <kemi.wang@intel.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
Commit 60a3cdd063 ("x86: add optimized inlining") introduced
CONFIG_OPTIMIZE_INLINING, but it has been available only for x86.
The idea is obviously arch-agnostic. This commit moves the config entry
from arch/x86/Kconfig.debug to lib/Kconfig.debug so that all
architectures can benefit from it.
This can make a huge difference in kernel image size especially when
CONFIG_OPTIMIZE_FOR_SIZE is enabled.
For example, I got 3.5% smaller arm64 kernel for v5.1-rc1.
dec file
18983424 arch/arm64/boot/Image.before
18321920 arch/arm64/boot/Image.after
This also slightly improves the "Kernel hacking" Kconfig menu as
e61aca5158 ("Merge branch 'kconfig-diet' from Dave Hansen") suggested;
this config option would be a good fit in the "compiler option" menu.
Link: http://lkml.kernel.org/r/20190423034959.13525-12-yamada.masahiro@socionext.com
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Brezillon <bbrezillon@kernel.org>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Marek Vasut <marek.vasut@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Malaterre <malat@debian.org>
Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Weinberger <richard@nod.at>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Stefan Agner <stefan@agner.ch>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[kdrag0n: Backported to k4.14]
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
-O3 is much more stable with modern compilers these days than it was a
decade ago. Using -O3 on the kernel results in significantly improved
hackbench performance, which is a sign that overall performance in the
kernel is improved. It works especially well in conjunction with LTO.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
There's plenty of room on the stack for a few more inlined bytes here
and there. The measured stack usage at runtime is still safe without
this, and performance is surely improved at a microscopic level, so
remove it.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
SD720G doesn't have a 2400000 frequency step; it only goes up to 2323200.
Signed-off-by: Yaroslav Furman <yaro330@gmail.com>
[dereference23: Adapted for atoll]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Now that pstore_register() can correctly pass max_reason to the kmesg
dump facility, introduce a new "max_reason" module parameter and
"max-reason" Device Tree field.
The "dump_oops" module parameter and "dump-oops" Device
Tree field are now considered deprecated, but are now automatically
converted to their corresponding max_reason values when present, though
the new max_reason setting has precedence.
For struct ramoops_platform_data, the "dump_oops" member is entirely
replaced by a new "max_reason" member, with the only existing user
updated in place.
Additionally remove the "reason" filter logic from ramoops_pstore_write(),
as that is not specifically needed anymore, though technically
this is a change in behavior for any ramoops users also setting the
printk.always_kmsg_dump boot param, which will cause ramoops to behave as
if max_reason was set to KMSG_DUMP_MAX.
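Schematically, the compatibility behavior is something like the sketch below;
the helper name and exact call site are illustrative, not the actual ramoops
code (the enum values come from include/linux/kmsg_dump.h):

    /* Illustrative helper: derive the effective max_reason from both knobs. */
    static int ramoops_effective_max_reason(int max_reason, int dump_oops)
    {
            if (max_reason > 0)             /* the new setting takes precedence */
                    return max_reason;
            if (dump_oops)                  /* deprecated knob: Oopses and Panics */
                    return KMSG_DUMP_OOPS;
            return KMSG_DUMP_PANIC;         /* otherwise store only Panics */
    }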
Co-developed-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/lkml/20200515184434.8470-6-keescook@chromium.org/
Signed-off-by: Kees Cook <keescook@chromium.org>
Add a new member to struct pstore_info for passing information about
kmesg dump maximum reason. This allows a finer control of what kmesg
dumps are sent to pstore storage backends.
Those backends that do not explicitly set this field (keeping it equal to
0), get the default behavior: store only Oopses and Panics, or everything
if the printk.always_kmsg_dump boot param is set.
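A minimal sketch of that default, assuming a pstore_info pointer psinfo and a
hypothetical boolean mirroring the boot parameter:

    /* Backends leaving max_reason at 0 get the historical behavior. */
    if (psinfo->max_reason <= 0)
            psinfo->max_reason = always_kmsg_dump ? KMSG_DUMP_MAX
                                                  : KMSG_DUMP_OOPS;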
Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
Link: https://lore.kernel.org/lkml/20200515184434.8470-5-keescook@chromium.org/
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Since the header has a fixed small maximum size, just use a stack variable
to avoid memory allocation in the write path.
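Roughly, the write path can then look like this; the buffer size and format
string are illustrative only:

    char hdr[64];           /* bounded: the header has a fixed small maximum */
    size_t len;

    /* Build the record header on the stack instead of kmalloc()ing it. */
    len = scnprintf(hdr, sizeof(hdr), "====%lld.%06lu\n",
                    (long long)record->time.tv_sec,
                    (unsigned long)(record->time.tv_nsec / 1000));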
Signed-off-by: Kees Cook <keescook@chromium.org>
If a zero-length header is produced in ramoops_write_kmsg_hdr(), we will
not be able to read back the dmesg record later, since it will be
treated as an invalid header in ramoops_pstore_read(). So we should not
execute the following code but return an error instead.
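In other words, something along these lines (simplified; the exact function
signature may differ):

    hlen = ramoops_write_kmsg_hdr(prz, record);
    if (!hlen)                      /* zero-length header: record would be unreadable */
            return -ENOMEM;
    /* only now append the dmesg body after the header */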
Signed-off-by: Yue Hu <huyue2@yulong.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Since only a single ramoops area is allowed at a time, other probes
(like device tree) are meaningless and only waste CPU resources.
So let's check whether we are already initialized first.
Signed-off-by: Yue Hu <huyue2@yulong.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Sometimes pstore_console_write() will write records with zero size
to the persistent ram zone, which is unnecessary and only increases
resource consumption. Also adjust ramoops_write_kmsg_hdr() to have the
same logic when memory allocation fails.
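Conceptually the early return is just (sketch only):

    if (!record->size)              /* nothing to persist, skip the ram zone write */
            return 0;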
Signed-off-by: Yue Hu <huyue2@yulong.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
The UFS callback is the most time consuming callback in the dpm_resume
section of kernel resumes, taking around 30 ms. Making it async
improves resume latency by around 20 ms, and helps with decreasing
suspend times as well.
Bug: 134704391
Change-Id: I708c8a7bc8f2250d6b2365971ccc394c7fbf8896
Signed-off-by: Vincent Palomares <paillon@google.com>
Although try_to_freeze_tasks() stops when there's a wakeup, it doesn't
return an error when it successfully freezes everything it wants to freeze.
As a result, the suspend attempt can continue even after a wakeup is
issued. Although the wakeup will be eventually caught later in the suspend
process, kicking the can down the road is suboptimal; when there's a wakeup
detected, suspend should be immediately aborted by returning an error
instead. Make try_to_freeze_tasks() do just that, and also move the wakeup
check above the `todo` check so that we don't miss a wakeup from a process
that successfully froze.
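A simplified sketch of the reordered exit logic, with hypothetical helpers
standing in for the real bookkeeping:

    for (;;) {
            todo = freeze_tasks_once();     /* hypothetical: tasks still left to freeze */
            if (pm_wakeup_pending())
                    return -EBUSY;          /* wakeup seen: abort the suspend right away */
            if (!todo)
                    return 0;               /* everything froze and no wakeup was seen */
            if (time_after(jiffies, end_time))
                    return -EBUSY;          /* freeze timeout reached */
    }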
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I6d0ff54b1e1e143df2679d3848019590725c6351
PM callbacks can be used as an indicator that the system is suspending, and
as such, a wakeup may be issued outside the notifier in order to cancel an
ongoing suspend. Clearing wakeups after running the PM callbacks can thus
cause wakeups to be missed. Fix it by simply clearing wakeups prior to
running PM callbacks.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I49b6f900284ff42b245be77548f6ca756cf585ee
Calling s2idle_wake() once after pm_abort_suspend is incremented is all
that's needed to wake up from s2idle. Avoid multiple unnecessary attempts
to wake from s2idle by only doing the wakeup when pm_abort_suspend hits 1.
The s2idle machinery already provides the synchronization needed to make
this safe.
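A minimal sketch of the idea, assuming pm_abort_suspend is kept as an atomic
counter:

    /* Only the first abort event needs to kick the s2idle loop awake. */
    if (atomic_inc_return(&pm_abort_suspend) == 1)
            s2idle_wake();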
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I789857351353737f7199a8df5c5ba210050352e6
Freezing processes on Android usually takes less than 100 ms, and if it
takes longer than that to the point where the 20 second freeze timeout is
reached, it's because the remaining processes to be frozen are deadlocked
waiting for something from a process which is already frozen. There's no
point in burning power trying to freeze for that long, so reduce the freeze
timeout to a very generous 1 second for Android and don't let anything mess
with it.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
When some CPUs cycle out of s2idle due to non-wakeup IRQs, it's possible
for them to run while the CPU responsible for jiffies updates remains idle.
This can delay the execution of timers indefinitely until the CPU managing
the jiffies updates finally wakes up, by which point everything could be
dead if enough time passes.
Fix it by handing off timekeeping duties when the timekeeping CPU enters
s2idle and freezes its tick. When all CPUs are in s2idle, the first one to
wake up for any reason (either from a wakeup IRQ or non-wakeup IRQ) will
assume responsibility for the timekeeping tick.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: Idbb2e825c489e174d5701e0c315b51a3149bfe49
Replace 'schedule(); try_to_freeze();' with a call to freezable_schedule().
Tasks calling freezable_schedule() set the PF_FREEZER_SKIP flag
before calling schedule(). Unlike tasks calling schedule();
try_to_freeze(), tasks calling freezable_schedule() are not awakened by
try_to_freeze_tasks(). Instead they call try_to_freeze() when they
wake up, if the freeze is still underway.
It is not a problem since sleeping tasks can't do anything which isn't
allowed for a frozen task while sleeping.
The result is a potential performance gain during freeze, since fewer
tasks have to be awakened.
For instance, on a bare Debian VM running a 4.19 stable kernel, the
number of tasks skipped in freeze_task() went up from 12 without the
patch to 32 with the patch (out of 448), an increase of more than 2.5x.
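For reference, freezable_schedule() is roughly the following (simplified from
include/linux/freezer.h):

    static inline void freezable_schedule(void)
    {
            freezer_do_not_count();         /* sets PF_FREEZER_SKIP */
            schedule();
            freezer_count();                /* clears the flag and calls try_to_freeze() */
    }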
Signed-off-by: Hugo Lefeuvre <hle@owl.eu.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190207200352.GA27859@behemoth.owl.eu.com.local
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Change-Id: I1b90daf9ebc8cdfaffc5fe8fe8f1883b9588a6c6
* This is spurious and never gets destroyed, which keeps this
wakelock pending. Remove this code to save power.
* To enable these wakelocks again, pass `-DIPA_WAKELOCKS` as a
cflag.
Signed-off-by: Vaisakh Murali <mvaisakh@statixos.com>
Change-Id: I4a424f7bddbfb793edbdeb3b6ac1b69d2f4a676e
The original dedup code does not handle collisions, based on the
observation that they practically never happen.
For additional peace of mind, use a bigger hash size to reduce the
possibility of a collision even further.
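As a rough back-of-the-envelope illustration (not from the original patch):
with n candidate pages and a b-bit hash, the expected number of colliding
pairs is about n^2 / 2^(b+1). For n = 10^6 pages that is on the order of a
hundred expected collisions with a 32-bit hash, but only about 3e-8 with a
64-bit hash.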
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I4543ce22d69a21b2edcc72a72790d2c55a9e312c
Reference: 59e1a2f4bf83 ("ksm: replace jhash2 with xxhash")
Signed-off-by: Park Ju Hyung <qkrwngud825@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: Icb6221a96d0d59e6bbc1c908ec1cd5e59d4a7117
xxhash's performance depends heavily on compiler optimizations including
inlines. Follow upstream's behavior and inline those helper functions.
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: Ieee439865dc48f21a97032a8009941331c302227
These simply wrap memcpy().
Replace them with macros so that they are naturally inlined.
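For example (the wrapper name is illustrative, not necessarily the exact
helper in this tree):

    /* Before: a tiny function that only forwards to memcpy() and may not be inlined. */
    static void XXH_memcpy(void *dest, const void *src, size_t size)
    {
            memcpy(dest, src, size);
    }

    /* After: a macro, so the memcpy() expands directly at every call site. */
    #define XXH_memcpy(dest, src, size) memcpy((dest), (src), (size))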
Signed-off-by: Juhyung Park <qkrwngud825@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I67887a706816aff46339d43b2f28e97813c1d4af
A measurably significant amount of CPU time is spent on logging events
for debugging purposes in lpm_cpuidle_enter. Kill the useless logging to
reduce overhead.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Change-Id: I9d34ece7e910d92e6f1a55fa85b3db5828fbbb9e
Waking the GPU upon touch wastes power when the screen is being touched
in a way that does not induce animation or any actual need for GPU usage.
Instead of preemptively waking the GPU on touch input, wake it up upon
receiving an IOCTL_KGSL_GPU_COMMAND ioctl, since it is a sign that the GPU
will soon be needed.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I6387083562578b229ea0913b5b2fa6562d4a85e9
When the rotator is actually used (still an unsolved question in
computer science), these PM QoS requests block some CPUs in the LITTLE
cluster from entering deep idle because the driver assumes that display
rotating work occurs on a hardcoded set of CPUs, which is false. We
already have the IRQ PM QoS machinery for display rendering operations
that actually matter, so this cruft is unneeded.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I30542fa3009dd46ff0c21058f14c8923d7fcb892
These are blocking some CPUs in the LITTLE cluster from entering deep
idle because the driver assumes that display rendering work occurs on a
hardcoded set of CPUs, which is false. The scope of this is also quite
large, which increases power consumption.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I0ab629ac45d9d99b74f764030c1e1c2e6823fc24
Qualcomm's implementation used the value from device tree node
qcom,pm-qos-cpu-group-latency-us (67 microseconds for atoll). Set
it manually to match PM QoS requests in other drivers.
Change-Id: I7aa175e90293f369cf4b838c2ff7d6b99177f1a8
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Qualcomm's PM QoS solution suffers from a number of issues: applying
PM QoS to all CPUs, convoluted spaghetti code that wastes CPU cycles,
and keeping PM QoS applied for 10 ms after all requests finish
processing.
This implements a simple IRQ-affined PM QoS mechanism for each UFS
adapter which uses atomics to elide locking, and enqueues a worker to
apply PM QoS to the target CPU as soon as a command request is issued.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I634873deb7baf0916208809c025eb0d3ccdb0fa3
This implementation is completely over the top and wastes lots of CPU
cycles. It's too convoluted to fix, so just scrap it to make way for a
simpler solution. This purges every PM QoS reference in the UFS drivers.
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I6b6a585aa975c10dcec5f1d6a0062cc6073ffdf4
UFS devices from a couple of vendors have a limitation on the
number of power cycles: they can withstand only a certain
number of power cycles.
Disable turning off the link for those devices so that we can
minimize the number of power cycles.
Change-Id: I87030f63c9312eb216b292549238455054efb706
Signed-off-by: Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The scsi_block_reqs_cnt increased in ufshcd_hold() is supposed to be
decreased back in ufshcd_ungate_work() in a paired way. However, if
specific ufshcd_hold/release sequences are met, it is possible that
scsi_block_reqs_cnt is increased twice but only one ungate work is
queued. To make sure scsi_block_reqs_cnt is handled by ufshcd_hold() and
ufshcd_ungate_work() in a paired way, increase it only if queue_work()
returns true.
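Schematically, the pairing then looks like this (field names follow the
mainline driver and are illustrative here):

    /* Bump the block count only if an ungate work was actually queued, so that
     * ufshcd_ungate_work() decrements it exactly once. */
    if (queue_work(hba->clk_gating.clk_gating_workq,
                   &hba->clk_gating.ungate_work))
            ufshcd_scsi_block_requests(hba);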
Change-Id: I35f870e49cc832bb6f4b58763e3240f05b98ec13
Signed-off-by: Can Guo <cang@codeaurora.org>
Signed-off-by: Ziqi Chen <ziqichen@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Currently bits in hba->outstanding_tasks are cleared only after their
corresponding task management commands are successfully done by
__ufshcd_issue_tm_cmd().
If timeout happens in a task management command, its corresponding bit in
hba->outstanding_tasks will not be cleared until next task management
command with the same tag used successfully finishes.
This is wrong and can lead to some issues, like power issues. For example,
ufshcd_release() and ufshcd_gate_work() will do nothing if
hba->outstanding_tasks is not zero even if both UFS host and devices are
actually idle.
Solution is referred from error handling of device commands: bits in
hba->outstanding_tasks shall be cleared regardless of their execution
results.
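In sketch form (locking and names simplified):

    /* Clear the tag whether or not the TM command completed in time. */
    spin_lock_irqsave(hba->host->host_lock, flags);
    __clear_bit(tag, &hba->outstanding_tasks);
    spin_unlock_irqrestore(hba->host->host_lock, flags);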
Change-Id: I92269b69da38f65b23355aa928869ff2f98c9120
Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
Signed-off-by: Chun-Hung Wu <chun-hung.wu@mediatek.com>
Reviewed-by: Avri Altman <avri.altman@wdc.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Git-commit: b557217c8475f40bc765ee20ff6b3b9124c8a4fe
Git-repo: https://android.googlesource.com/kernel/common/
[nitirawa@codeaurora.org: Ported to 4.19 kernel]
Signed-off-by: Nitin Rawat <nitirawa@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Commit 53c12d0ef6 ("scsi: ufs: fix error recovery after the hibern8
exit failure") would leave hba->clk_gating.active_reqs++ and skip
subsequent actions in ufshcd_hold() if error handling is in progress.
It may cause the next ufshcd_release() to queue a new clk gate work via the
gating hrtimer even though the previous clk gate work has not yet finished.
In this corner case, ufshcd_gate_work() may change uic_link_state
to UIC_LINK_HIBERN8_STATE right on the heels of ufshcd_hold() setting the
clk state to CLKS_ON, and then run into the ufshcd_hold dead loop. To fix this
issue, we need to ensure there is no pending or running clk gate
work before changing the clk state to CLKS_ON in ufshcd_hold().
Change-Id: I25fe35f2cad18f8a77fccf40755d856ee670594d
Signed-off-by: Ziqi Chen <ziqichen@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
In some rare scenarios, an inconsistent state between
clk_gating.state and the uic link state is observed
when back-to-back clk_gate_work gets scheduled.
Sequence of events:
1. hibern8 enter fails as part of ufshcd_gate_work.
2. Link recovery marks eh_in_progress and schedules
the error handler to recover from the failure.
3. The error handler finishes its work, clears
eh_in_progress and returns, thus link recovery
succeeds.
4. The next retry of hibern8 succeeds and sets
clk_gating.state to CLKS_OFF.
5. Between steps 2 and 4, while eh_in_progress is still set,
if any async thread (say an IOCTL) calls ufshcd_hold, only
clk_gating.active_reqs increases and it bails out.
6. The async thread then finishes its job and calls ufshcd_release.
If ufshcd_release() is called after eh_in_progress is
cleared, it schedules the
clk_gate_work instead of bailing out.
7. When the 2nd gate_work runs after the 1st clk gate work is over,
since the 1st gate work has already set clk_gating.state
to CLKS_OFF, the current gate work sets
clk_gating.state to CLKS_ON and bails out.
This leaves clk_gating.state as CLKS_ON and the link in
hibern8 state, which causes an inconsistency, blocks i/o
and dead-loops when a new i/o request calls the ufshcd_hold functions
with the sync flag as false and true successively.
Fix this by checking clk_gating.state; if it is already CLKS_OFF,
bail from clk_gate_work().
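In sketch form, the check added at the top of the gate work (simplified):

    spin_lock_irqsave(hba->host->host_lock, flags);
    if (hba->clk_gating.state == CLKS_OFF) {
            /* A previous gate work already turned the clocks off; nothing to do. */
            spin_unlock_irqrestore(hba->host->host_lock, flags);
            return;
    }
    spin_unlock_irqrestore(hba->host->host_lock, flags);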
Change-Id: Ia60a4c872c39636c1e5144584025f8938e712202
Signed-off-by: Can Guo <cang@codeaurora.org>
Signed-off-by: Nitin Rawat <nitirawa@codeaurora.org>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The initial value of pos is zero.
This causes the UFS driver to access err_hist->reg[-1].
Bug: 174649668
Signed-off-by: Randall Huang <huangrandall@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I4f0bc416ae8f52bf4e4f88ba84a8db98513e165b
[1] tried to fix the wrong lrbp=NULL after clear_bit_unlock, which caused a race
condition. But it caused a power regression, since __ufshcd_release was called
before lrb_in_use was released.
Actually, we should have followed the sequence:
1. lrbp->cmd = NULL
2. clear_bit_unlock()
3. __ufshcd_release()
4. __ufshcd_hibern8_release()
Let's add the right fix.
[1] f0e7e5baba0a ("scsi: ufs: Avoid potential lrb race caused by early release of lrb_in_use")
Bug: 157450639
Fixes: 639e1063a57e ("Revert "scsi: ufs: Avoid potential lrb race caused by early release of lrb_in_use"")
Fixes: f0e7e5baba0a ("scsi: ufs: Avoid potential lrb race caused by early release of lrb_in_use")
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Change-Id: If738dc3e8e9caa7eee28c729dea3a47a3ed56b97
[dereference23: Apply to msm-4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
The descriptor access function has a potential issue: it can cause a
buffer overflow and trigger a kernel panic. Fix the boundary check
and return -EINVAL when it gets an invalid input.
log:
Kernel panic - not syncing: stack-protector: Kernel stack is corrupted
in: ufs_sysfs_read_desc_param+0x1a4/0x1a4
Call trace:
dump_backtrace+0x0/0x1a0
dump_stack+0xbc/0xf8
panic+0x150/0x2d4
clear_warn_once_fops_open+0x0/0x30
lun_write_protect_show+0x0/0x74
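A sketch of the kind of boundary check described above (parameter names are
illustrative):

    /* Reject out-of-range requests instead of reading past the descriptor buffer. */
    if (param_offset >= buff_len || param_size > buff_len - param_offset)
            return -EINVAL;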
Bug: 153344835
Test: adb shell cat /sys/devices/platform/soc/1d84000.ufshc/*_descriptor*/*
Change-Id: Ie57cfacc6f7b32f68e1b54bb1cf059d60e6d17c6
Signed-off-by: Leo Liou <leoliou@google.com>
[dereference23: Apply to msm-4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Matching Pixel-4.19, we can disable clocks and the link if autohibern8 is on.
The patch fixes a power regression, 10~15 mW, caused by the two patches below.
Bug: 151181812
Fixes: 92967c0c2ecb ("scsi: ufs: set autohibern8 timer regardless of hibern8_on_idle")
Fixes: 5a57ae46dd6f ("scsi: ufs: disable hibern8_on_idle")
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I00fe1add360922559d7e227ef630f34d04a42958
We observed several failures when probing hba.
Since there is no error recovery design, the device does
not boot into Android.
Log as shown below:
pwr ctrl cmd 0x2 failed, host upmcrs:0x4
...
ufshcd_change_power_mode: power mode change failed 4
ufshcd_probe_hba: Failed setting power mode, err = 4
Because we do not support removable UFS cards,
we can introduce a retry design in async_scan().
The UFS device can be initialized after re-probing hba.
Log as shown below:
ufshcd_async_scan failed. Err = 4. Retry 3
qcom_ice 1d90000.ufsice: QC ICE 3.1.79 device found @0xffffff800d9a0000
ufshcd_print_pwr_info:[RX, TX]: gear=[1, 1], lane[1, 1], pwr[SLOWAUTO_MODE, SLOWAUTO_MODE], rate = 0
ufshcd_print_pwr_info:[RX, TX]: gear=[3, 3], lane[2, 2], pwr[FAST MODE, FAST MODE], rate = 2
Bug: 151080047
Test: inject failure and boot to Android
Change-Id: I76af5c89f91c72aecb1b5824fcada66dadbc5c4a
Signed-off-by: Randall Huang <huangrandall@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This reverts commit 28652f3b71d2eeb780d61085907c8a04d5ced116.
Change log from v1:
- add is_binary_power_count to toggle the power_count under phy mutex
- no functional changes on other components
= v1 description:
This can contribute to a race condition between is_phy_pwr_on and phy_power_on/off.
Instead, we must rely on the mutex with power_count in phy_power_on/off more
precisely. And, in order to fix the original issue caused by multiple power
on/off, we must use power_count as a flag.
So, the final approach is same as moving "is_phy_pwr_on" under mutex.
Bug: 139262967
Change-Id: Ia2ca0a1187b97e0f5b9e46d8e8465ac0b28d2862
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Must have WQ_MEM_RECLAIM
``WQ_MEM_RECLAIM``
All wq which might be used in the memory reclaim paths **MUST**
have this flag set. The wq is guaranteed to have at least one
execution context regardless of memory pressure.
Bug: 158050260
Bug: 155410470
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Link: https://lore.kernel.org/linux-scsi/20200915204532.1672300-3-jaegeuk@kernel.org/T/#u
Change-Id: I65f2608650fa3436503581a60ac539f85273a21e
(cherry picked from commit ceed5e02518f5cedc799f2f37bd30211243a7959)
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
When enabling/disabling hibern8, we will call flush_work
to wait for the task to finish. Use an async operation
to avoid a long waiting time.
Bug: 138085490
Test: Boot, cat /sys/kernel/debug/ufshcd0/show_hba and
check hibern8_exit_cnt
Signed-off-by: Martin Liu <liumartin@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Change-Id: I427cd8e40cbc6674a145142babbd92a1062dc511
This merged the following fix:
6a317b49c98c ("scsi: ufs: revise commit ecd2676bd513 ("disallow SECURITY_PROTOCOL_IN without _OUT")")
If we allow this, Hynix will give a timeout due to a spec violation.
The latest Hynix controller gives an error instead of a timeout.
Bug: 113580864
Bug: 79898356
Bug: 109850759
Bug: 117682499
Bug: 112560467
Change-Id: Ie7820a9604e4c7bc4cc530acf41bb5bb72f33d5b
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Randall Huang <huangrandall@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
We should call up_write() in the error case.
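That is, the error path needs the same unlock as the success path; the names
below are illustrative:

    down_write(&sem);
    err = do_the_work();    /* hypothetical */
    up_write(&sem);         /* previously skipped when err != 0 */
    return err;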
Bug: 77551464
Change-Id: I2e959a17f2dcafd2a2b8ff78dc4f8fa971929ac1
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
ufshcd_read_desc_param() will allocate a full-size buffer and give the required
data back. Otherwise, it can cause memory buffer corruption.
Bug: 77551464
Change-Id: I84ed0cd1470c0dcc12f0aff9d631433fe16244e4
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
ufshcd_probe_hba() puts the pm_runtime reference all the time. So, during the
reset flow, we need to get one first.
pm_runtime_get_sync (1)
ufshcd_async_scan
pm_runtime_get_sync (2)
ufshcd_hold_all
ufshcd_probe_hba
- pm_runtime_put_sync (1)
- ufshcd_reset_and_restore
- ufshcd_detect_device
- ufshcd_host_reset_and_restore
- ufshcd_probe_hba
- pm_runtime_put_sync (0)
ufshcd_release_all
pm_runtime_put_sync (-1)
Bug: 157744625
Bug: 153043714
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Change-Id: I2d5696d6143842790fa25218beda12b71cfcc1d6
A non-zero return value could come from ufshcd_hba_enable() when the
host fails in the reinit flow. But ufshcd_probe_hba() didn't take the error
value of this case into account. This patch fixes the issue.
Bug: 160290772
Change-Id: Iedceeaeb056fe9fb519370ec42424e0542a73ed2
Signed-off-by: Leo Liou <leoliou@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Bug: 134949663
Bug: 137150088
Bug: 149155051
Test: test with powerhint feature
Change-Id: Ie5002107a69e7d56a889138eec0e593de1bf6a61
Signed-off-by: Jaegeuk Kim <jaegeuk@google.com>
Signed-off-by: Leo Liou <leoliou@google.com>
[dereference23: Apply to msm-4.14]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Scheduler code is very hot and every little optimization counts. Instead
of constantly checking sched_numa_balancing when NUMA is disabled,
compile it out.
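One way to express this, as a sketch (the real flag is a static key; the
wrapper name is illustrative):

    #ifdef CONFIG_NUMA_BALANCING
    #define numa_balancing_enabled()  static_branch_unlikely(&sched_numa_balancing)
    #else
    #define numa_balancing_enabled()  0     /* hot-path checks compile away entirely */
    #endif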
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Change-Id: I7334594fbe835f615a199cfe02ee526135abab06
This is tuned to match energy model characteristics and scheduler
efficiency enhancements.
Change-Id: Ia60e1ea888457fa1c0c0273cdd4b0180f0a87abf
Co-authored-by: Diep Quynh <remilia.1505@gmail.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
It's needed to speed up trace points and other dynamic debugging stuff.
Bug: 145162121
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I811b538bc5280a633c56e0544ba1f54cd6b234f2
Switch to 1 MiB static log buffer in __log_buf[]:
#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);
instead of having the log buffer reallocated at boot by:
setup_log_buf()
log_buf_add_cpu()
log_buf_len_update()
new_log_buf = memblock_virt_alloc_nopanic()
There is no need to do this reallocation for the log buffer.
Change-Id: I8bf00b1fe45e9f6393e332e88642ee0c8a85ad7e
Signed-off-by: Petri Gynther <pgynther@google.com>
As in previous projects, disabling sched autogroup helps
reduce jank in certain workloads.
Bug: 144961955
Bug: 143857245
Test: build and boot to home
Change-Id: I5f7cf53fede9e70aa389eed741bc2f9a624ee39d
Signed-off-by: Chiawei Wang <chiaweiwang@google.com>
HW tracing features shouldn't be enabled in any final product. So
disable them.
Bug: 154966878
Signed-off-by: Saravana Kannan <saravanak@google.com>
Change-Id: I6603e71b0912dd89d653bb0bd36a0a4cb8b504e1
This reserved memory dump region is intended to be used with the memory
dump v2 driver, but we've disabled that and we don't need this memory
dumping functionality. Remove the unused region and associated driver
node to save 36 MiB of memory.
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Change-Id: I5f784d6a88ff00b26a49fc507bd05881a9822965
Disable the MSM watchdog during suspend by removing "qcom,wakeup-enable".
Some watchdog-timeout reset issues showed that watchdog
expirations happened during suspend, especially while waiting for
s2idle_wait_head in s2idle_enter. But because external interrupts are
disabled before this function, the watchdog petting timer might not
work as expected. To avoid introducing a watchdog reset when the
execution of suspend/resume is not actually hung, remove "qcom,wakeup-enable" to
disable the feature.
Bug: 190429220
Change-Id: I7ce0ef57da15925cd024d602039d303c523bfd9b
Merged-In: I7ce0ef57da15925cd024d602039d303c523bfd9b
Signed-off-by: Woody Lin <woodylin@google.com>
(cherry picked from commit da702ade8884424ee578a3db2d4aaa217d8b85d6)
[dereference23: Apply for atoll]
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
This is primarily intended for generic x86 kernels which may run on all
sorts of broken systems. Our kernel runs on known hardware, so this is
unnecessary.
Disable it for a minor IRQ handler overhead reduction.
Suggested-by: Tyler Nijmeh <tylernij@gmail.com>
Signed-off-by: Danny Lin <danny@kdrag0n.dev>
Change-Id: I1992d2c88d3f3b9a9d15748d6c3c6bf6709d8812
Coresight is used for debugging purposes. When the debugging configs are
disabled, having these included causes power regressions due to clks
being left on. So let's disable all the coresight DT entries by default.
Signed-off-by: Will McVicker <willmcvicker@google.com>
Signed-off-by: Alexander Winkowski <dereference23@outlook.com>
Bug: 156429236
Test: compile, verify list of probed devices
Change-Id: I84f9c874f2f5e8720ced23c7b4268d1b536b96a7