Merge tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"Again, the majority of changes go into the cpufreq subsystem, but
there are no big features this time. The cpufreq changes that stand
out somewhat are the governor interface rework and improvements
related to the handling of frequency tables. Apart from those, there
are fixes and new device/CPU IDs in drivers, cleanups and an
improvement of the new schedutil governor.

Next, there are some changes in the hibernation core, including a fix
for a nasty problem related to the MONITOR/MWAIT usage by CPU offline
during resume from hibernation, a few core improvements related to
memory management during resume, a couple of additional debug features
and cleanups.

Finally, we have some fixes and cleanups in the devfreq subsystem,
generic power domains framework improvements related to system
suspend/resume, support for some new chips in intel_idle and in the
power capping RAPL driver, a new version of the AnalyzeSuspend utility
and some assorted fixes and cleanups.

Specifics:
- Rework the cpufreq governor interface to make it more
straightforward and modify the conservative governor to avoid using
transition notifications (Rafael Wysocki).
- Rework the handling of frequency tables by the cpufreq core to make
it more efficient (Viresh Kumar).
- Modify the schedutil governor to reduce the number of wakeups it
causes to occur in cases when the CPU frequency doesn't need to be
changed (Steve Muckle, Viresh Kumar).
- Fix some minor issues and clean up code in the cpufreq core and
governors (Rafael Wysocki, Viresh Kumar).
- Add Intel Broxton support to the intel_pstate driver (Srinivas
Pandruvada).
- Fix problems related to the config TDP feature and to the validity
of the MSR_HWP_INTERRUPT register in intel_pstate (Jan Kiszka,
Srinivas Pandruvada).
- Make intel_pstate update the cpu_frequency tracepoint even if the
frequency doesn't change to avoid confusing powertop (Rafael
Wysocki).
- Clean up the usage of __init/__initdata in intel_pstate, mark some
of its internal variables as __read_mostly and drop an unused
structure element from it (Jisheng Zhang, Carsten Emde).
- Clean up the usage of some duplicate MSR symbols in intel_pstate
and turbostat (Srinivas Pandruvada).
- Update/fix the powernv, s3c24xx and mvebu cpufreq drivers (Akshay
Adiga, Viresh Kumar, Ben Dooks).
- Fix a regression (introduced during the 4.5 cycle) in the
pcc-cpufreq driver by reverting the problematic commit (Andreas
Herrmann).
- Add support for Intel Denverton to intel_idle, clean up Broxton
support in it and make it explicitly non-modular (Jacob Pan, Jan
Beulich, Paul Gortmaker).
- Add support for Denverton and Ivy Bridge server to the Intel RAPL
power capping driver and make it more careful about the handling of
MSRs that may not be present (Jacob Pan, Xiaolong Wang).
- Fix resume from hibernation on x86-64 by making the CPU offline
during resume avoid using MONITOR/MWAIT in the "play dead" loop
which may lead to an inadvertent "revival" of a "dead" CPU and a
page fault leading to a kernel crash from it (Rafael Wysocki).
- Make memory management during resume from hibernation more
straightforward (Rafael Wysocki).
- Add debug features that should help to detect problems related to
hibernation and resume from it (Rafael Wysocki, Chen Yu).
- Clean up hibernation core somewhat (Rafael Wysocki).
- Prevent KASAN from instrumenting the hibernation core which leads
to large numbers of false-positives from it (James Morse).
- Prevent PM (hibernate and suspend) notifiers from being called
during the cleanup phase if they have not been called during the
corresponding preparation phase which is possible if one of the
other notifiers returns an error at that time (Lianwei Wang).
- Improve suspend-related debug printout in the tasks freezer and
clean up suspend-related console handling (Roger Lu, Borislav
Petkov).
- Update the AnalyzeSuspend script in the kernel sources to version
4.2 (Todd Brandt).
- Modify the generic power domains framework to make it handle system
suspend/resume better (Ulf Hansson).
- Make the runtime PM framework avoid resuming devices synchronously
when user space changes the runtime PM settings for them and
improve its error reporting (Rafael Wysocki, Linus Walleij).
- Fix error paths in devfreq drivers (exynos, exynos-ppmu,
exynos-bus) and in the core, make some devfreq code explicitly
non-modular and change some of it into tristate (Bartlomiej
Zolnierkiewicz, Peter Chen, Paul Gortmaker).
- Add DT support to the generic PM clocks management code and make it
export some more symbols (Jon Hunter, Paul Gortmaker).
- Make the PCI PM core code slightly more robust against possible
driver errors (Andy Shevchenko).
- Make it possible to change DESTDIR and PREFIX in turbostat (Andy
Shevchenko)"
* tag 'pm-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (89 commits)
Revert "cpufreq: pcc-cpufreq: update default value of cpuinfo_transition_latency"
PM / hibernate: Introduce test_resume mode for hibernation
cpufreq: export cpufreq_driver_resolve_freq()
cpufreq: Disallow ->resolve_freq() for drivers providing ->target_index()
PCI / PM: check all fields in pci_set_platform_pm()
cpufreq: acpi-cpufreq: use cached frequency mapping when possible
cpufreq: schedutil: map raw required frequency to driver frequency
cpufreq: add cpufreq_driver_resolve_freq()
cpufreq: intel_pstate: Check cpuid for MSR_HWP_INTERRUPT
intel_pstate: Update cpu_frequency tracepoint every time
cpufreq: intel_pstate: clean remnant struct element
PM / tools: scripts: AnalyzeSuspend v4.2
x86 / hibernate: Use hlt_play_dead() when resuming from hibernation
cpufreq: powernv: Replacing pstate_id with frequency table index
intel_pstate: Fix MSR_CONFIG_TDP_x addressing in core_get_max_pstate()
PM / hibernate: Image data protection during restoration
PM / hibernate: Add missing braces in __register_nosave_region()
PM / hibernate: Clean up comments in snapshot.c
PM / hibernate: Clean up function headers in snapshot.c
PM / hibernate: Add missing braces in hibernate_setup()
...
@@ -1,6 +1,8 @@
 ccflags-$(CONFIG_PM_DEBUG)	:= -DDEBUG
 
+KASAN_SANITIZE_snapshot.o	:= n
+
 obj-y				+= qos.o
 obj-$(CONFIG_PM)		+= main.o
 obj-$(CONFIG_VT_CONSOLE_SLEEP)	+= console.o
@@ -126,17 +126,17 @@ out:
 	return ret;
 }
 
-int pm_prepare_console(void)
+void pm_prepare_console(void)
 {
 	if (!pm_vt_switch())
-		return 0;
+		return;
 
 	orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1);
 	if (orig_fgconsole < 0)
-		return 1;
+		return;
 
 	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
-	return 0;
+	return;
 }
 
 void pm_restore_console(void)
+68 -33
@@ -52,6 +52,7 @@ enum {
 #ifdef CONFIG_SUSPEND
 	HIBERNATION_SUSPEND,
 #endif
+	HIBERNATION_TEST_RESUME,
 	/* keep last */
 	__HIBERNATION_AFTER_LAST
 };
@@ -409,6 +410,11 @@ int hibernation_snapshot(int platform_mode)
 	goto Close;
 }
 
+int __weak hibernate_resume_nonboot_cpu_disable(void)
+{
+	return disable_nonboot_cpus();
+}
+
 /**
  * resume_target_kernel - Restore system state from a hibernation image.
  * @platform_mode: Whether or not to use the platform driver.
@@ -433,7 +439,7 @@ static int resume_target_kernel(bool platform_mode)
 	if (error)
 		goto Cleanup;
 
-	error = disable_nonboot_cpus();
+	error = hibernate_resume_nonboot_cpu_disable();
 	if (error)
 		goto Enable_cpus;
 
@@ -642,12 +648,39 @@ static void power_down(void)
 		cpu_relax();
 }
 
+static int load_image_and_restore(void)
+{
+	int error;
+	unsigned int flags;
+
+	pr_debug("PM: Loading hibernation image.\n");
+
+	lock_device_hotplug();
+	error = create_basic_memory_bitmaps();
+	if (error)
+		goto Unlock;
+
+	error = swsusp_read(&flags);
+	swsusp_close(FMODE_READ);
+	if (!error)
+		hibernation_restore(flags & SF_PLATFORM_MODE);
+
+	printk(KERN_ERR "PM: Failed to load hibernation image, recovering.\n");
+	swsusp_free();
+	free_basic_memory_bitmaps();
+ Unlock:
+	unlock_device_hotplug();
+
+	return error;
+}
+
 /**
  * hibernate - Carry out system hibernation, including saving the image.
  */
 int hibernate(void)
 {
-	int error;
+	int error, nr_calls = 0;
+	bool snapshot_test = false;
 
 	if (!hibernation_available()) {
 		pr_debug("PM: Hibernation not available.\n");
@@ -662,9 +695,11 @@ int hibernate(void)
 	}
 
 	pm_prepare_console();
-	error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE);
-	if (error)
+	error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
+	if (error) {
+		nr_calls--;
 		goto Exit;
+	}
 
 	printk(KERN_INFO "PM: Syncing filesystems ... ");
 	sys_sync();
@@ -697,8 +732,12 @@ int hibernate(void)
 			pr_debug("PM: writing image.\n");
 			error = swsusp_write(flags);
 			swsusp_free();
-			if (!error)
-				power_down();
+			if (!error) {
+				if (hibernation_mode == HIBERNATION_TEST_RESUME)
+					snapshot_test = true;
+				else
+					power_down();
+			}
 			in_suspend = 0;
 			pm_restore_gfp_mask();
 		} else {
@@ -709,12 +748,18 @@ int hibernate(void)
 	free_basic_memory_bitmaps();
  Thaw:
 	unlock_device_hotplug();
+	if (snapshot_test) {
+		pr_debug("PM: Checking hibernation image\n");
+		error = swsusp_check();
+		if (!error)
+			error = load_image_and_restore();
+	}
 	thaw_processes();
 
 	/* Don't bother checking whether freezer_test_done is true */
 	freezer_test_done = false;
 Exit:
-	pm_notifier_call_chain(PM_POST_HIBERNATION);
+	__pm_notifier_call_chain(PM_POST_HIBERNATION, nr_calls, NULL);
 	pm_restore_console();
 	atomic_inc(&snapshot_device_available);
 Unlock:
@@ -740,8 +785,7 @@ int hibernate(void)
  */
 static int software_resume(void)
 {
-	int error;
-	unsigned int flags;
+	int error, nr_calls = 0;
 
 	/*
 	 * If the user said "noresume".. bail out early.
@@ -827,35 +871,20 @@ static int software_resume(void)
 	}
 
 	pm_prepare_console();
-	error = pm_notifier_call_chain(PM_RESTORE_PREPARE);
-	if (error)
+	error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
+	if (error) {
+		nr_calls--;
 		goto Close_Finish;
+	}
 
 	pr_debug("PM: Preparing processes for restore.\n");
 	error = freeze_processes();
 	if (error)
 		goto Close_Finish;
 
-	pr_debug("PM: Loading hibernation image.\n");
-
-	lock_device_hotplug();
-	error = create_basic_memory_bitmaps();
-	if (error)
-		goto Thaw;
-
-	error = swsusp_read(&flags);
-	swsusp_close(FMODE_READ);
-	if (!error)
-		hibernation_restore(flags & SF_PLATFORM_MODE);
-
-	printk(KERN_ERR "PM: Failed to load hibernation image, recovering.\n");
-	swsusp_free();
-	free_basic_memory_bitmaps();
- Thaw:
-	unlock_device_hotplug();
+	error = load_image_and_restore();
 	thaw_processes();
  Finish:
-	pm_notifier_call_chain(PM_POST_RESTORE);
+	__pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
 	pm_restore_console();
 	atomic_inc(&snapshot_device_available);
 	/* For success case, the suspend path will release the lock */
@@ -878,6 +907,7 @@ static const char * const hibernation_modes[] = {
 #ifdef CONFIG_SUSPEND
 	[HIBERNATION_SUSPEND]	= "suspend",
 #endif
+	[HIBERNATION_TEST_RESUME] = "test_resume",
 };
 
 /*
@@ -924,6 +954,7 @@ static ssize_t disk_show(struct kobject *kobj, struct kobj_attribute *attr,
 #ifdef CONFIG_SUSPEND
 		case HIBERNATION_SUSPEND:
 #endif
+		case HIBERNATION_TEST_RESUME:
 			break;
 		case HIBERNATION_PLATFORM:
 			if (hibernation_ops)
@@ -970,6 +1001,7 @@ static ssize_t disk_store(struct kobject *kobj, struct kobj_attribute *attr,
 #ifdef CONFIG_SUSPEND
 		case HIBERNATION_SUSPEND:
 #endif
+		case HIBERNATION_TEST_RESUME:
 			hibernation_mode = mode;
 			break;
 		case HIBERNATION_PLATFORM:
@@ -1115,13 +1147,16 @@ static int __init resume_offset_setup(char *str)
 
 static int __init hibernate_setup(char *str)
 {
-	if (!strncmp(str, "noresume", 8))
+	if (!strncmp(str, "noresume", 8)) {
 		noresume = 1;
-	else if (!strncmp(str, "nocompress", 10))
+	} else if (!strncmp(str, "nocompress", 10)) {
 		nocompress = 1;
-	else if (!strncmp(str, "no", 2)) {
+	} else if (!strncmp(str, "no", 2)) {
 		noresume = 1;
 		nohibernate = 1;
+	} else if (IS_ENABLED(CONFIG_DEBUG_RODATA)
+		   && !strncmp(str, "protect_image", 13)) {
+		enable_restore_image_protection();
 	}
 	return 1;
 }
+9 -2
@@ -38,12 +38,19 @@ int unregister_pm_notifier(struct notifier_block *nb)
 }
 EXPORT_SYMBOL_GPL(unregister_pm_notifier);
 
-int pm_notifier_call_chain(unsigned long val)
+int __pm_notifier_call_chain(unsigned long val, int nr_to_call, int *nr_calls)
 {
-	int ret = blocking_notifier_call_chain(&pm_chain_head, val, NULL);
+	int ret;
+
+	ret = __blocking_notifier_call_chain(&pm_chain_head, val, NULL,
+					     nr_to_call, nr_calls);
 
 	return notifier_to_errno(ret);
 }
+int pm_notifier_call_chain(unsigned long val)
+{
+	return __pm_notifier_call_chain(val, -1, NULL);
+}
 
 /* If set, devices may be suspended and resumed asynchronously. */
 int pm_async_enabled = 1;
@@ -38,6 +38,8 @@ static inline char *check_image_kernel(struct swsusp_info *info)
 }
 #endif /* CONFIG_ARCH_HIBERNATION_HEADER */
 
+extern int hibernate_resume_nonboot_cpu_disable(void);
+
 /*
  * Keep some memory free so that I/O operations can succeed without paging
  * [Might this be more than 4 MB?]
@@ -59,6 +61,13 @@ extern int hibernation_snapshot(int platform_mode);
 extern int hibernation_restore(int platform_mode);
 extern int hibernation_platform_enter(void);
 
+#ifdef CONFIG_DEBUG_RODATA
+/* kernel/power/snapshot.c */
+extern void enable_restore_image_protection(void);
+#else
+static inline void enable_restore_image_protection(void) {}
+#endif /* CONFIG_DEBUG_RODATA */
+
 #else /* !CONFIG_HIBERNATION */
 
 static inline void hibernate_reserved_size_init(void) {}
@@ -200,6 +209,8 @@ static inline void suspend_test_finish(const char *label) {}
 
 #ifdef CONFIG_PM_SLEEP
 /* kernel/power/main.c */
+extern int __pm_notifier_call_chain(unsigned long val, int nr_to_call,
+				    int *nr_calls);
 extern int pm_notifier_call_chain(unsigned long val);
 #endif
@@ -89,6 +89,9 @@ static int try_to_freeze_tasks(bool user_only)
 		       elapsed_msecs / 1000, elapsed_msecs % 1000,
 		       todo - wq_busy, wq_busy);
 
+		if (wq_busy)
+			show_workqueue_state();
+
 		if (!wakeup) {
 			read_lock(&tasklist_lock);
 			for_each_process_thread(g, p) {
+521 -431
File diff suppressed because it is too large
@@ -266,16 +266,18 @@ static int suspend_test(int level)
  */
 static int suspend_prepare(suspend_state_t state)
 {
-	int error;
+	int error, nr_calls = 0;
 
 	if (!sleep_state_supported(state))
 		return -EPERM;
 
 	pm_prepare_console();
 
-	error = pm_notifier_call_chain(PM_SUSPEND_PREPARE);
-	if (error)
+	error = __pm_notifier_call_chain(PM_SUSPEND_PREPARE, -1, &nr_calls);
+	if (error) {
+		nr_calls--;
 		goto Finish;
+	}
 
 	trace_suspend_resume(TPS("freeze_processes"), 0, true);
 	error = suspend_freeze_processes();
@@ -286,7 +288,7 @@ static int suspend_prepare(suspend_state_t state)
 	suspend_stats.failed_freeze++;
 	dpm_save_failed_step(SUSPEND_FREEZE);
  Finish:
-	pm_notifier_call_chain(PM_POST_SUSPEND);
+	__pm_notifier_call_chain(PM_POST_SUSPEND, nr_calls, NULL);
 	pm_restore_console();
 	return error;
 }
@@ -350,6 +350,12 @@ static int swsusp_swap_check(void)
 	if (res < 0)
 		blkdev_put(hib_resume_bdev, FMODE_WRITE);
 
+	/*
+	 * Update the resume device to the one actually used,
+	 * so the test_resume mode can use it in case it is
+	 * invoked from hibernate() to test the snapshot.
+	 */
+	swsusp_resume_device = hib_resume_bdev->bd_dev;
 	return res;
 }
 
+8 -6
@@ -47,7 +47,7 @@ atomic_t snapshot_device_available = ATOMIC_INIT(1);
 static int snapshot_open(struct inode *inode, struct file *filp)
 {
 	struct snapshot_data *data;
-	int error;
+	int error, nr_calls = 0;
 
 	if (!hibernation_available())
 		return -EPERM;
@@ -74,9 +74,9 @@ static int snapshot_open(struct inode *inode, struct file *filp)
 			swap_type_of(swsusp_resume_device, 0, NULL) : -1;
 		data->mode = O_RDONLY;
 		data->free_bitmaps = false;
-		error = pm_notifier_call_chain(PM_HIBERNATION_PREPARE);
+		error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1, &nr_calls);
 		if (error)
-			pm_notifier_call_chain(PM_POST_HIBERNATION);
+			__pm_notifier_call_chain(PM_POST_HIBERNATION, --nr_calls, NULL);
 	} else {
 		/*
 		 * Resuming.  We may need to wait for the image device to
@@ -86,13 +86,15 @@ static int snapshot_open(struct inode *inode, struct file *filp)
 
 		data->swap = -1;
 		data->mode = O_WRONLY;
-		error = pm_notifier_call_chain(PM_RESTORE_PREPARE);
+		error = __pm_notifier_call_chain(PM_RESTORE_PREPARE, -1, &nr_calls);
 		if (!error) {
 			error = create_basic_memory_bitmaps();
 			data->free_bitmaps = !error;
-		}
+		} else
+			nr_calls--;
+
 		if (error)
-			pm_notifier_call_chain(PM_POST_RESTORE);
+			__pm_notifier_call_chain(PM_POST_RESTORE, nr_calls, NULL);
 	}
 	if (error)
 		atomic_inc(&snapshot_device_available);
@@ -47,6 +47,8 @@ struct sugov_cpu {
 	struct update_util_data update_util;
 	struct sugov_policy *sg_policy;
 
+	unsigned int cached_raw_freq;
+
 	/* The fields below are only needed when sharing a policy. */
 	unsigned long util;
 	unsigned long max;
@@ -106,7 +108,7 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
 
 /**
  * get_next_freq - Compute a new frequency for a given cpufreq policy.
- * @policy: cpufreq policy object to compute the new frequency for.
+ * @sg_cpu: schedutil cpu object to compute the new frequency for.
  * @util: Current CPU utilization.
  * @max: CPU capacity.
  *
@@ -121,14 +123,25 @@ static void sugov_update_commit(struct sugov_policy *sg_policy, u64 time,
  * next_freq = C * curr_freq * util_raw / max
  *
  * Take C = 1.25 for the frequency tipping point at (util / max) = 0.8.
+ *
+ * The lowest driver-supported frequency which is equal or greater than the raw
+ * next_freq (as calculated above) is returned, subject to policy min/max and
+ * cpufreq driver limitations.
  */
-static unsigned int get_next_freq(struct cpufreq_policy *policy,
-				  unsigned long util, unsigned long max)
+static unsigned int get_next_freq(struct sugov_cpu *sg_cpu, unsigned long util,
+				  unsigned long max)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
+	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int freq = arch_scale_freq_invariant() ?
 				policy->cpuinfo.max_freq : policy->cur;
 
-	return (freq + (freq >> 2)) * util / max;
+	freq = (freq + (freq >> 2)) * util / max;
+
+	if (freq == sg_cpu->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
+		return sg_policy->next_freq;
+	sg_cpu->cached_raw_freq = freq;
+	return cpufreq_driver_resolve_freq(policy, freq);
 }
 
 static void sugov_update_single(struct update_util_data *hook, u64 time,
@@ -143,13 +156,14 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 		return;
 
 	next_f = util == ULONG_MAX ? policy->cpuinfo.max_freq :
-			get_next_freq(policy, util, max);
+			get_next_freq(sg_cpu, util, max);
 	sugov_update_commit(sg_policy, time, next_f);
 }
 
-static unsigned int sugov_next_freq_shared(struct sugov_policy *sg_policy,
+static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
 					   unsigned long util, unsigned long max)
 {
+	struct sugov_policy *sg_policy = sg_cpu->sg_policy;
 	struct cpufreq_policy *policy = sg_policy->policy;
 	unsigned int max_f = policy->cpuinfo.max_freq;
 	u64 last_freq_update_time = sg_policy->last_freq_update_time;
@@ -189,7 +203,7 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu,
 		}
 	}
 
-	return get_next_freq(policy, util, max);
+	return get_next_freq(sg_cpu, util, max);
 }
 
 static void sugov_update_shared(struct update_util_data *hook, u64 time,
@@ -206,7 +220,7 @@ static void sugov_update_shared(struct update_util_data *hook, u64 time,
 	sg_cpu->last_update = time;
 
 	if (sugov_should_update_freq(sg_policy, time)) {
-		next_f = sugov_next_freq_shared(sg_policy, util, max);
+		next_f = sugov_next_freq_shared(sg_cpu, util, max);
 		sugov_update_commit(sg_policy, time, next_f);
 	}
@@ -394,7 +408,7 @@ static int sugov_init(struct cpufreq_policy *policy)
 	return ret;
 }
 
-static int sugov_exit(struct cpufreq_policy *policy)
+static void sugov_exit(struct cpufreq_policy *policy)
 {
 	struct sugov_policy *sg_policy = policy->governor_data;
 	struct sugov_tunables *tunables = sg_policy->tunables;
@@ -412,7 +426,6 @@ static void sugov_exit(struct cpufreq_policy *policy)
 	mutex_unlock(&global_tunables_lock);
 
 	sugov_policy_free(sg_policy);
-	return 0;
 }
 
 static int sugov_start(struct cpufreq_policy *policy)
@@ -434,6 +447,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 			sg_cpu->util = ULONG_MAX;
 			sg_cpu->max = 0;
 			sg_cpu->last_update = 0;
+			sg_cpu->cached_raw_freq = 0;
 			cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util,
 						     sugov_update_shared);
 		} else {
@@ -444,7 +458,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 	return 0;
 }
 
-static int sugov_stop(struct cpufreq_policy *policy)
+static void sugov_stop(struct cpufreq_policy *policy)
 {
 	struct sugov_policy *sg_policy = policy->governor_data;
 	unsigned int cpu;
@@ -456,53 +470,29 @@ static void sugov_stop(struct cpufreq_policy *policy)
 
 	irq_work_sync(&sg_policy->irq_work);
 	cancel_work_sync(&sg_policy->work);
-	return 0;
 }
 
-static int sugov_limits(struct cpufreq_policy *policy)
+static void sugov_limits(struct cpufreq_policy *policy)
 {
 	struct sugov_policy *sg_policy = policy->governor_data;
 
 	if (!policy->fast_switch_enabled) {
 		mutex_lock(&sg_policy->work_lock);
-
-		if (policy->max < policy->cur)
-			__cpufreq_driver_target(policy, policy->max,
-						CPUFREQ_RELATION_H);
-		else if (policy->min > policy->cur)
-			__cpufreq_driver_target(policy, policy->min,
-						CPUFREQ_RELATION_L);
-
+		cpufreq_policy_apply_limits(policy);
 		mutex_unlock(&sg_policy->work_lock);
 	}
 
 	sg_policy->need_freq_update = true;
-	return 0;
 }
 
-int sugov_governor(struct cpufreq_policy *policy, unsigned int event)
-{
-	if (event == CPUFREQ_GOV_POLICY_INIT) {
-		return sugov_init(policy);
-	} else if (policy->governor_data) {
-		switch (event) {
-		case CPUFREQ_GOV_POLICY_EXIT:
-			return sugov_exit(policy);
-		case CPUFREQ_GOV_START:
-			return sugov_start(policy);
-		case CPUFREQ_GOV_STOP:
-			return sugov_stop(policy);
-		case CPUFREQ_GOV_LIMITS:
-			return sugov_limits(policy);
-		}
-	}
-	return -EINVAL;
-}
-
 static struct cpufreq_governor schedutil_gov = {
 	.name = "schedutil",
-	.governor = sugov_governor,
 	.owner = THIS_MODULE,
+	.init = sugov_init,
+	.exit = sugov_exit,
+	.start = sugov_start,
+	.stop = sugov_stop,
+	.limits = sugov_limits,
 };
 
 static int __init sugov_module_init(void)
+2 -2
@@ -4369,8 +4369,8 @@ static void show_pwq(struct pool_workqueue *pwq)
 /**
  * show_workqueue_state - dump workqueue state
  *
- * Called from a sysrq handler and prints out all busy workqueues and
- * pools.
+ * Called from a sysrq handler or try_to_freeze_tasks() and prints out
+ * all busy workqueues and pools.
  */
 void show_workqueue_state(void)
 {