Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull scheduler updates from Ingo Molnar:

 - membarrier updates (Mathieu Desnoyers)

 - SMP balancing optimizations (Mel Gorman)

 - stats update optimizations (Peter Zijlstra)

 - RT scheduler race fixes (Steven Rostedt)

 - misc fixes and updates

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/fair: Use a recently used CPU as an idle candidate and the basis for SIS
  sched/fair: Do not migrate if the prev_cpu is idle
  sched/fair: Restructure wake_affine*() to return a CPU id
  sched/fair: Remove unnecessary parameters from wake_affine_idle()
  sched/rt: Make update_curr_rt() more accurate
  sched/rt: Up the root domain ref count when passing it around via IPIs
  sched/rt: Use container_of() to get root domain in rto_push_irq_work_func()
  sched/core: Optimize update_stats_*()
  sched/core: Optimize ttwu_stat()
  membarrier/selftest: Test private expedited sync core command
  membarrier/arm64: Provide core serializing command
  membarrier/x86: Provide core serializing command
  membarrier: Provide core serializing command, *_SYNC_CORE
  lockin/x86: Implement sync_core_before_usermode()
  locking: Introduce sync_core_before_usermode()
  membarrier/selftest: Test global expedited command
  membarrier: Provide GLOBAL_EXPEDITED command
  membarrier: Document scheduler barrier requirements
  powerpc, membarrier: Skip memory barrier in switch_mm()
  membarrier/selftest: Test private expedited command
@@ -31,7 +31,7 @@
  * enum membarrier_cmd - membarrier system call command
  * @MEMBARRIER_CMD_QUERY:   Query the set of supported commands. It returns
  *                          a bitmask of valid commands.
- * @MEMBARRIER_CMD_SHARED:  Execute a memory barrier on all running threads.
+ * @MEMBARRIER_CMD_GLOBAL:  Execute a memory barrier on all running threads.
  *                          Upon return from system call, the caller thread
  *                          is ensured that all running threads have passed
  *                          through a state where all memory accesses to
@@ -40,6 +40,28 @@
  *                          (non-running threads are de facto in such a
  *                          state). This covers threads from all processes
  *                          running on the system. This command returns 0.
+ * @MEMBARRIER_CMD_GLOBAL_EXPEDITED:
+ *                          Execute a memory barrier on all running threads
+ *                          of all processes which previously registered
+ *                          with MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.
+ *                          Upon return from system call, the caller thread
+ *                          is ensured that all running threads have passed
+ *                          through a state where all memory accesses to
+ *                          user-space addresses match program order between
+ *                          entry to and return from the system call
+ *                          (non-running threads are de facto in such a
+ *                          state). This only covers threads from processes
+ *                          which registered with
+ *                          MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED.
+ *                          This command returns 0. Given that
+ *                          registration is about the intent to receive
+ *                          the barriers, it is valid to invoke
+ *                          MEMBARRIER_CMD_GLOBAL_EXPEDITED from a
+ *                          non-registered process.
+ * @MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED:
+ *                          Register the process intent to receive
+ *                          MEMBARRIER_CMD_GLOBAL_EXPEDITED memory
+ *                          barriers. Always returns 0.
  * @MEMBARRIER_CMD_PRIVATE_EXPEDITED:
  *                          Execute a memory barrier on each running
  *                          thread belonging to the same process as the current
@@ -51,7 +73,7 @@
  *                          to and return from the system call
  *                          (non-running threads are de facto in such a
  *                          state). This only covers threads from the
- *                          same processes as the caller thread. This
+ *                          same process as the caller thread. This
  *                          command returns 0 on success. The
  *                          "expedited" commands complete faster than
  *                          the non-expedited ones, they never block,
@@ -64,18 +86,54 @@
  *                          Register the process intent to use
  *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED. Always
  *                          returns 0.
+ * @MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE:
+ *                          In addition to provide memory ordering
+ *                          guarantees described in
+ *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED, ensure
+ *                          the caller thread, upon return from system
+ *                          call, that all its running threads siblings
+ *                          have executed a core serializing
+ *                          instruction. (architectures are required to
+ *                          guarantee that non-running threads issue
+ *                          core serializing instructions before they
+ *                          resume user-space execution). This only
+ *                          covers threads from the same process as the
+ *                          caller thread. This command returns 0 on
+ *                          success. The "expedited" commands complete
+ *                          faster than the non-expedited ones, they
+ *                          never block, but have the downside of
+ *                          causing extra overhead. If this command is
+ *                          not implemented by an architecture, -EINVAL
+ *                          is returned. A process needs to register its
+ *                          intent to use the private expedited sync
+ *                          core command prior to using it, otherwise
+ *                          this command returns -EPERM.
+ * @MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE:
+ *                          Register the process intent to use
+ *                          MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE.
+ *                          If this command is not implemented by an
+ *                          architecture, -EINVAL is returned.
+ *                          Returns 0 on success.
+ * @MEMBARRIER_CMD_SHARED:
+ *                          Alias to MEMBARRIER_CMD_GLOBAL. Provided for
+ *                          header backward compatibility.
  *
  * Command to be passed to the membarrier system call. The commands need to
  * be a single bit each, except for MEMBARRIER_CMD_QUERY which is assigned to
  * the value 0.
  */
 enum membarrier_cmd {
-	MEMBARRIER_CMD_QUERY = 0,
-	MEMBARRIER_CMD_SHARED = (1 << 0),
-	/* reserved for MEMBARRIER_CMD_SHARED_EXPEDITED (1 << 1) */
-	/* reserved for MEMBARRIER_CMD_PRIVATE (1 << 2) */
-	MEMBARRIER_CMD_PRIVATE_EXPEDITED = (1 << 3),
-	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = (1 << 4),
+	MEMBARRIER_CMD_QUERY = 0,
+	MEMBARRIER_CMD_GLOBAL = (1 << 0),
+	MEMBARRIER_CMD_GLOBAL_EXPEDITED = (1 << 1),
+	MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED = (1 << 2),
+	MEMBARRIER_CMD_PRIVATE_EXPEDITED = (1 << 3),
+	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED = (1 << 4),
+	MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 5),
+	MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE = (1 << 6),
+
+	/* Alias for header backward compatibility. */
+	MEMBARRIER_CMD_SHARED = MEMBARRIER_CMD_GLOBAL,
 };
 
 #endif /* _UAPI_LINUX_MEMBARRIER_H */