Merge tag 'drm-xe-next-2024-04-23' of https://gitlab.freedesktop.org/drm/xe/kernel into drm-next

UAPI Changes:
- Remove unused flags (Francois Dugast)
- Extend uAPI to query HuC micro-controller firmware version (Francois Dugast)
- drm/xe/uapi: Define topology types as indexes rather than masks (Francois Dugast)
- drm/xe/uapi: Restore flags VM_BIND_FLAG_READONLY and VM_BIND_FLAG_IMMEDIATE (Francois Dugast)
- devcoredump updates, some touching the output format (José Roberto de Souza, Matthew Brost)
- drm/xe/hwmon: Add infra to support card power and energy attributes
- Improve LRC, HWSP and HWCTX error capture (Maarten Lankhorst)
- drm/xe/uapi: Add IP version and stepping to GT list query (Matt Roper)
- Invalidate userptr VMA on page pin fault (Matthew Brost)
- Improve xe_bo_move tracepoint (Priyanka Danamudi)
- Align fence output format in ftrace log

Cross-driver Changes:
- drm/i915/hwmon: Get rid of devm (Ashutosh Dixit) (Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>)
- drm/i915/display: convert inner wakeref get towards get_if_in_use (SOB Rodrigo Vivi)
- drm/i915: Convert intel_runtime_pm_get_noresume towards raw wakeref (Committer, SOB Jani Nikula)

Driver Changes:
- Fix unneeded CCS metadata allocation (Akshata Jahagirdar)
- Fix multicast support for Xe_LP platforms (Andrzej Hajda)
- A couple of build fixes (Arnd Bergmann)
- Fix register definition (Ashutosh Dixit)
- Add BMG mocs table (Balasubramani Vivekanandan)
- Replace sprintf() across driver (Bommu Krishnaiah)
- Add an xe2 workaround (Bommu Krishnaiah)
- Makefile fix (Dafna Hirschfeld)
- force_wake_get error value check (Daniele Ceraolo Spurio)
- Handle GSCCS ER interrupt (Daniele Ceraolo Spurio)
- GSC workaround (Daniele Ceraolo Spurio)
- Build error fix (Dawei Li)
- drm/xe/gt: Add L3 bank mask to GT topology (Francois Dugast)
- Implement xe2 and GuC workarounds (Gustavo Sousa, Haridhar Kalvala, Himal Prasad Ghimiray, John Harrison, Matt Roper, Radhakrishna Sripada, Vinay Belgaumkar, Badal Nilawar)
- xe2hpg compression (Himal Prasad Ghimiray)
- Error code cleanups and fixes (Himal Prasad Ghimiray)
- struct xe_device cleanup (Jani Nikula)
- Avoid validating bos when only requesting an exec dma-fence (José Roberto de Souza)
- Remove debug message from migrate_clear (José Roberto de Souza)
- Nuke EXEC_QUEUE_FLAG_PERSISTENT leftover internal flag (José Roberto de Souza)
- Mark dpt and related vma as uncached (Juha-Pekka Heikkila)
- Hwmon updates (Karthik Poosa)
- Kconfig fix when ACPI_WMI is selected (Lu Yao)
- Update intel_uncore_read*() return types (Luca Coelho)
- Mocs updates (Lucas De Marchi, Matt Roper)
- Drop dynamic load-balancing workaround (Lucas De Marchi)
- Fix a PVC workaround (Lucas De Marchi)
- Group live kunit tests into a single module (Lucas De Marchi)
- Various code cleanups (Lucas De Marchi)
- Fix a ggtt init error path and move ggtt invalidate out of ggtt lock (Maarten Lankhorst)
- Fix a bo leak (Maarten Lankhorst)
- Add LRC parsing for more GPU instructions (Matt Roper)
- Add various definitions for hardware and IP (Matt Roper)
- Define all possible engines in media IP descriptors (Matt Roper)
- Various cleanups, asserts and code fixes (Matthew Auld)
- Various cleanups and code fixes (Matthew Brost)
- Increase VM_BIND number of per-ioctl Ops (Matthew Brost, Paulo Zanoni)
- Don't support execlists in xe_gt_tlb_invalidation layer (Matthew Brost)
- Handle timing out of already signaled jobs gracefully (Matthew Brost)
- Pipeline evict / restore of pinned BOs during suspend / resume (Matthew Brost)
- Do not grab forcewakes when issuing GGTT TLB invalidation via GuC (Matthew Brost)
- Drop ggtt invalidate from display code (Matthew Brost)
- drm/xe: Add XE_BO_GGTT_INVALIDATE flag (Matthew Brost)
- Add debug messages for MMU notifier and VMA invalidate (Matthew Brost)
- Use ordered wq for preempt fence waiting (Matthew Brost)
- Initial development for SR-IOV support, including some refactoring (Michal Wajdeczko)
- Various GuC- and GT-related cleanups and fixes (Michal Wajdeczko)
- Move userptr over to start using hmm_range_fault (Oak Zeng)
- Add new PCI IDs to DG2 platform (Ravi Kumar Vodapalli)
- Pcode and VRAM initialization check updates (Riana Tauro)
- Large PM update, including i915 display patches and a fix for one of those (Rodrigo Vivi)
- Introduce performance tuning changes for Xe2_HPG (Shekhar Chauhan)
- GSC / HDCP updates (Suraj Kandpal)
- Minor code cleanup (Tejas Upadhyay)
- Rework / fix rebind TLB flushing and move rebind into the drm_exec locking loop (Thomas Hellström)
- Backmerge (Thomas Hellström)
- GuC updates and fixes (Vinay Belgaumkar, Zhanjun Dong)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# -----BEGIN PGP SIGNATURE-----
#
# iHUEABYKAB0WIQRskUM7w1oG5rx2IZO4FpNVCsYGvwUCZiestQAKCRC4FpNVCsYG
# v8dLAQCDFUR7R5rwSdfqzNy+Djg+9ZgmtzVEfHZ+rI2lTReaCwEAhWeK7UooIMV0
# vGsSdsqGsJQm4VLRzE6H1yemCCQOBgM=
# =HouD
# -----END PGP SIGNATURE-----
# gpg: Signature made Tue 23 Apr 2024 22:42:29 AEST
# gpg:                using EDDSA key 6C91433BC35A06E6BC762193B81693550AC606BF
# gpg: Can't check signature: No public key
# Conflicts:
#	drivers/gpu/drm/xe/xe_device_types.h
#	drivers/gpu/drm/xe/xe_vm.c
#	drivers/gpu/drm/xe/xe_vm_types.h
From: Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/Zievlb1wvqDg1ovi@fedora
commit 83221064c2
@@ -10,7 +10,7 @@ Description:	RW. Card reactive sustained (PL1) power limit in microwatts.
 		power limit is disabled, writing 0 disables the
 		limit. Writing values > 0 and <= TDP will enable the power limit.

-		Only supported for particular Intel xe graphics platforms.
+		Only supported for particular Intel Xe graphics platforms.

 What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power1_rated_max
 Date:		September 2023
@@ -18,53 +18,93 @@ KernelVersion:	6.5
 Contact:	intel-xe@lists.freedesktop.org
 Description:	RO. Card default power limit (default TDP setting).

-		Only supported for particular Intel xe graphics platforms.
+		Only supported for particular Intel Xe graphics platforms.

 What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power1_crit
 Date:		September 2023
 KernelVersion:	6.5
 Contact:	intel-xe@lists.freedesktop.org
 Description:	RW. Card reactive critical (I1) power limit in microwatts.

 		Card reactive critical (I1) power limit in microwatts is exposed
 		for client products. The power controller will throttle the
 		operating frequency if the power averaged over a window exceeds
 		this limit.

 		Only supported for particular Intel xe graphics platforms.

 What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/curr1_crit
 Date:		September 2023
 KernelVersion:	6.5
 Contact:	intel-xe@lists.freedesktop.org
 Description:	RW. Card reactive critical (I1) power limit in milliamperes.

 		Card reactive critical (I1) power limit in milliamperes is
 		exposed for server products. The power controller will throttle
 		the operating frequency if the power averaged over a window
 		exceeds this limit.

 What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/in0_input
 Date:		September 2023
 KernelVersion:	6.5
 Contact:	intel-xe@lists.freedesktop.org
 Description:	RO. Current Voltage in millivolt.

 		Only supported for particular Intel xe graphics platforms.

 What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/energy1_input
 Date:		September 2023
 KernelVersion:	6.5
 Contact:	intel-xe@lists.freedesktop.org
-Description:	RO. Energy input of device in microjoules.
+Description:	RO. Card energy input of device in microjoules.

-		Only supported for particular Intel xe graphics platforms.
+		Only supported for particular Intel Xe graphics platforms.

 What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power1_max_interval
 Date:		October 2023
 KernelVersion:	6.6
 Contact:	intel-xe@lists.freedesktop.org
-Description:	RW. Sustained power limit interval (Tau in PL1/Tau) in
+Description:	RW. Card sustained power limit interval (Tau in PL1/Tau) in
 		milliseconds over which sustained power is averaged.

-		Only supported for particular Intel xe graphics platforms.
+		Only supported for particular Intel Xe graphics platforms.

+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power2_max
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RW. Package reactive sustained (PL1) power limit in microwatts.
+
+		The power controller will throttle the operating frequency
+		if the power averaged over a window (typically seconds)
+		exceeds this limit. A read value of 0 means that the PL1
+		power limit is disabled, writing 0 disables the
+		limit. Writing values > 0 and <= TDP will enable the power limit.
+
+		Only supported for particular Intel Xe graphics platforms.
+
+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power2_rated_max
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RO. Package default power limit (default TDP setting).
+
+		Only supported for particular Intel Xe graphics platforms.
+
+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power2_crit
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RW. Package reactive critical (I1) power limit in microwatts.
+
+		Package reactive critical (I1) power limit in microwatts is exposed
+		for client products. The power controller will throttle the
+		operating frequency if the power averaged over a window exceeds
+		this limit.
+
+		Only supported for particular Intel Xe graphics platforms.
+
+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/curr2_crit
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RW. Package reactive critical (I1) power limit in milliamperes.
+
+		Package reactive critical (I1) power limit in milliamperes is
+		exposed for server products. The power controller will throttle
+		the operating frequency if the power averaged over a window
+		exceeds this limit.
+
+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/energy2_input
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RO. Package energy input of device in microjoules.
+
+		Only supported for particular Intel Xe graphics platforms.
+
+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/power2_max_interval
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RW. Package sustained power limit interval (Tau in PL1/Tau) in
+		milliseconds over which sustained power is averaged.
+
+		Only supported for particular Intel Xe graphics platforms.
+
+What:		/sys/bus/pci/drivers/xe/.../hwmon/hwmon<i>/in1_input
+Date:		February 2024
+KernelVersion:	6.8
+Contact:	intel-xe@lists.freedesktop.org
+Description:	RO. Package current voltage in millivolt.
+
+		Only supported for particular Intel Xe graphics platforms.
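These attributes follow standard hwmon conventions, so they can be read with plain file I/O. A minimal userspace sketch, assuming the device's hwmon directory has already been located; the hwmon index, helper name, and output strings below are illustrative and not part of this series:

#include <stdio.h>

/* Read one integer-valued hwmon attribute; returns -1 on error. */
static long read_attr(const char *path)
{
	FILE *f = fopen(path, "r");
	long v = -1;

	if (f) {
		if (fscanf(f, "%ld", &v) != 1)
			v = -1;
		fclose(f);
	}
	return v;
}

int main(void)
{
	/* Placeholder index; discover the right hwmon<i> at runtime. */
	const char *base = "/sys/class/hwmon/hwmon2";
	char path[128];

	snprintf(path, sizeof(path), "%s/power1_max", base);
	printf("card PL1 limit: %ld uW (0 = disabled)\n", read_attr(path));

	snprintf(path, sizeof(path), "%s/power2_max", base);
	printf("package PL1 limit: %ld uW\n", read_attr(path));

	snprintf(path, sizeof(path), "%s/energy1_input", base);
	printf("card energy: %ld uJ\n", read_attr(path));
	return 0;
}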
@@ -304,6 +304,29 @@ static ssize_t devcd_read_from_sgtable(char *buffer, loff_t offset,
 			       offset);
 }

+/**
+ * dev_coredump_put - remove device coredump
+ * @dev: the struct device for the crashed device
+ *
+ * dev_coredump_put() removes the coredump for a given device from the file
+ * system and frees its associated data, if one exists; otherwise it does
+ * nothing.
+ *
+ * It is useful for modules that do not want to keep a coredump
+ * available after they are unloaded.
+ */
+void dev_coredump_put(struct device *dev)
+{
+	struct device *existing;
+
+	existing = class_find_device(&devcd_class, NULL, dev,
+				     devcd_match_failing);
+	if (existing) {
+		devcd_free(existing, NULL);
+		put_device(existing);
+	}
+}
+EXPORT_SYMBOL_GPL(dev_coredump_put);
+
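Seen from a driver, the new export pairs with the existing registration helpers. A hedged sketch of that pairing; the my_driver_* functions and the snapshot buffer are purely illustrative:

#include <linux/devcoredump.h>
#include <linux/vmalloc.h>

static void my_driver_report_crash(struct device *dev, size_t len)
{
	void *snapshot = vmalloc(len);

	if (!snapshot)
		return;
	/* ... fill snapshot with captured device state ... */

	/* devcoredump takes ownership and vfree()s the buffer later. */
	dev_coredumpv(dev, snapshot, len, GFP_KERNEL);
}

static void my_driver_remove(struct device *dev)
{
	/* Don't leave a coredump behind once the module goes away. */
	dev_coredump_put(dev);
}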
 /**
  * dev_coredumpm - create device coredump with read/free methods
  * @dev: the struct device for the crashed device
@@ -640,13 +640,7 @@ release_async_put_domains(struct i915_power_domains *power_domains,
 	enum intel_display_power_domain domain;
 	intel_wakeref_t wakeref;

-	/*
-	 * The caller must hold already raw wakeref, upgrade that to a proper
-	 * wakeref to make the state checker happy about the HW access during
-	 * power well disabling.
-	 */
-	assert_rpm_raw_wakeref_held(rpm);
-	wakeref = intel_runtime_pm_get(rpm);
+	wakeref = intel_runtime_pm_get_noresume(rpm);

 	for_each_power_domain(domain, mask) {
 		/* Clear before put, so put's sanity check is happy. */
@@ -13,6 +13,12 @@
 #include "intel_hdcp_gsc.h"
 #include "intel_hdcp_gsc_message.h"

+struct intel_hdcp_gsc_message {
+	struct i915_vma *vma;
+	void *hdcp_cmd_in;
+	void *hdcp_cmd_out;
+};
+
 bool intel_hdcp_gsc_cs_required(struct drm_i915_private *i915)
 {
 	return DISPLAY_VER(i915) >= 14;

@@ -10,12 +10,7 @@
 #include <linux/types.h>

 struct drm_i915_private;

-struct intel_hdcp_gsc_message {
-	struct i915_vma *vma;
-	void *hdcp_cmd_in;
-	void *hdcp_cmd_out;
-};
+struct intel_hdcp_gsc_message;

 bool intel_hdcp_gsc_cs_required(struct drm_i915_private *i915);
 ssize_t intel_hdcp_gsc_msg_send(struct drm_i915_private *i915, u8 *msg_in,
@@ -787,7 +787,7 @@ void i915_hwmon_register(struct drm_i915_private *i915)
 	if (!IS_DGFX(i915))
 		return;

-	hwmon = devm_kzalloc(dev, sizeof(*hwmon), GFP_KERNEL);
+	hwmon = kzalloc(sizeof(*hwmon), GFP_KERNEL);
 	if (!hwmon)
 		return;

@@ -813,14 +813,12 @@ void i915_hwmon_register(struct drm_i915_private *i915)
 	hwm_get_preregistration_info(i915);

 	/*  hwmon_dev points to device hwmon<i> */
-	hwmon_dev = devm_hwmon_device_register_with_info(dev, ddat->name,
-							 ddat,
-							 &hwm_chip_info,
-							 hwm_groups);
-	if (IS_ERR(hwmon_dev)) {
-		i915->hwmon = NULL;
-		return;
-	}
+	hwmon_dev = hwmon_device_register_with_info(dev, ddat->name,
+						    ddat,
+						    &hwm_chip_info,
+						    hwm_groups);
+	if (IS_ERR(hwmon_dev))
+		goto err;

 	ddat->hwmon_dev = hwmon_dev;

@@ -833,16 +831,36 @@ void i915_hwmon_register(struct drm_i915_private *i915)
 		if (!hwm_gt_is_visible(ddat_gt, hwmon_energy, hwmon_energy_input, 0))
 			continue;

-		hwmon_dev = devm_hwmon_device_register_with_info(dev, ddat_gt->name,
-								 ddat_gt,
-								 &hwm_gt_chip_info,
-								 NULL);
+		hwmon_dev = hwmon_device_register_with_info(dev, ddat_gt->name,
+							    ddat_gt,
+							    &hwm_gt_chip_info,
+							    NULL);
 		if (!IS_ERR(hwmon_dev))
 			ddat_gt->hwmon_dev = hwmon_dev;
 	}
+	return;
+err:
+	i915_hwmon_unregister(i915);
 }

 void i915_hwmon_unregister(struct drm_i915_private *i915)
 {
-	fetch_and_zero(&i915->hwmon);
+	struct i915_hwmon *hwmon = i915->hwmon;
+	struct intel_gt *gt;
+	int i;
+
+	if (!hwmon)
+		return;
+
+	for_each_gt(gt, i915, i)
+		if (hwmon->ddat_gt[i].hwmon_dev)
+			hwmon_device_unregister(hwmon->ddat_gt[i].hwmon_dev);
+
+	if (hwmon->ddat.hwmon_dev)
+		hwmon_device_unregister(hwmon->ddat.hwmon_dev);
+
+	mutex_destroy(&hwmon->hwmon_lock);
+
+	kfree(i915->hwmon);
+	i915->hwmon = NULL;
 }
@@ -272,15 +272,11 @@ intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm)
  * intel_runtime_pm_get_noresume - grab a runtime pm reference
  * @rpm: the intel_runtime_pm structure
  *
- * This function grabs a device-level runtime pm reference (mostly used for GEM
- * code to ensure the GTT or GT is on).
+ * This function grabs a device-level runtime pm reference.
  *
- * It will _not_ power up the device but instead only check that it's powered
- * on. Therefore it is only valid to call this functions from contexts where
- * the device is known to be powered up and where trying to power it up would
- * result in hilarity and deadlocks. That pretty much means only the system
- * suspend/resume code where this is used to grab runtime pm references for
- * delayed setup down in work items.
+ * It will _not_ resume the device but instead only get an extra wakeref.
+ * Therefore it is only valid to call this function from contexts where
+ * the device is known to be active and another wakeref is already held.
  *
  * Any runtime pm reference obtained by this function must have a symmetric
  * call to intel_runtime_pm_put() to release the reference again.
@@ -289,7 +285,7 @@ intel_wakeref_t intel_runtime_pm_get_if_active(struct intel_runtime_pm *rpm)
  */
 intel_wakeref_t intel_runtime_pm_get_noresume(struct intel_runtime_pm *rpm)
 {
-	assert_rpm_wakelock_held(rpm);
+	assert_rpm_raw_wakeref_held(rpm);
 	pm_runtime_get_noresume(rpm->kdev);

 	intel_runtime_pm_acquire(rpm, true);
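Under the new semantics the call can only piggy-back on an already-active device. A sketch of the intended pattern, assuming the inner wakeref is later released from the work item; struct my_async_ctx and the work plumbing are illustrative, not from the patch:

struct my_async_ctx {
	struct work_struct work;
	intel_wakeref_t wakeref;
};

static void start_delayed_setup(struct intel_runtime_pm *rpm,
				struct my_async_ctx *ctx)
{
	intel_wakeref_t wakeref;

	wakeref = intel_runtime_pm_get(rpm);	/* resumes the device */

	/* Valid only while the outer wakeref keeps the device active. */
	ctx->wakeref = intel_runtime_pm_get_noresume(rpm);
	schedule_work(&ctx->work);	/* worker: intel_runtime_pm_put(rpm, ctx->wakeref) */

	intel_runtime_pm_put(rpm, wakeref);
}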
@@ -29,6 +29,7 @@ config DRM_XE
 	select INPUT if ACPI
 	select ACPI_VIDEO if X86 && ACPI
 	select ACPI_BUTTON if ACPI
+	select X86_PLATFORM_DEVICES if X86 && ACPI
 	select ACPI_WMI if X86 && ACPI
 	select SYNC_FILE
 	select IOSF_MBI
@@ -44,6 +45,7 @@ config DRM_XE
 	select MMU_NOTIFIER
 	select WANT_DEV_COREDUMP
 	select AUXILIARY_BUS
+	select HMM_MIRROR
 	help
 	  Experimental driver for Intel Xe series GPUs

@@ -49,6 +49,7 @@ $(obj)/generated/%_wa_oob.c $(obj)/generated/%_wa_oob.h: $(obj)/xe_gen_wa_oob \
 uses_generated_oob := \
 	$(obj)/xe_gsc.o \
 	$(obj)/xe_guc.o \
+	$(obj)/xe_guc_ads.o \
 	$(obj)/xe_migrate.o \
 	$(obj)/xe_ring_ops.o \
 	$(obj)/xe_vm.o \
@@ -97,6 +98,8 @@ xe-y += xe_bb.o \
 	xe_guc_db_mgr.o \
 	xe_guc_debugfs.o \
 	xe_guc_hwconfig.o \
+	xe_guc_id_mgr.o \
+	xe_guc_klv_helpers.o \
 	xe_guc_log.o \
 	xe_guc_pc.o \
 	xe_guc_submit.o \
@@ -145,6 +148,8 @@ xe-y += xe_bb.o \
 	xe_wa.o \
 	xe_wopcm.o

+xe-$(CONFIG_HMM_MIRROR) += xe_hmm.o
+
 # graphics hardware monitoring (HWMON) support
 xe-$(CONFIG_HWMON) += xe_hwmon.o

@@ -155,9 +160,14 @@ xe-y += \
 	xe_sriov.o

 xe-$(CONFIG_PCI_IOV) += \
+	xe_gt_sriov_pf.o \
+	xe_gt_sriov_pf_config.o \
+	xe_gt_sriov_pf_control.o \
+	xe_gt_sriov_pf_policy.o \
 	xe_lmtt.o \
 	xe_lmtt_2l.o \
-	xe_lmtt_ml.o
+	xe_lmtt_ml.o \
+	xe_sriov_pf.o

 # include helpers for tests even when XE is built-in
 ifdef CONFIG_DRM_XE_KUNIT_TEST
@@ -254,6 +264,7 @@ xe-$(CONFIG_DRM_XE_DISPLAY) += \
 	i915-display/intel_global_state.o \
 	i915-display/intel_gmbus.o \
 	i915-display/intel_hdcp.o \
+	i915-display/intel_hdcp_gsc_message.o \
 	i915-display/intel_hdmi.o \
 	i915-display/intel_hotplug.o \
 	i915-display/intel_hotplug_irq.o \
@@ -3,8 +3,8 @@
  * Copyright © 2023 Intel Corporation
  */

-#ifndef _GUC_ACTIONS_PF_ABI_H
-#define _GUC_ACTIONS_PF_ABI_H
+#ifndef _ABI_GUC_ACTIONS_SRIOV_ABI_H
+#define _ABI_GUC_ACTIONS_SRIOV_ABI_H

 #include "guc_communication_ctb_abi.h"

@@ -171,4 +171,200 @@
 #define VF2GUC_RELAY_TO_PF_REQUEST_MSG_n_RELAY_DATAx	GUC_HXG_REQUEST_MSG_n_DATAn
 #define VF2GUC_RELAY_TO_PF_REQUEST_MSG_NUM_RELAY_DATA	GUC_RELAY_MSG_MAX_LEN

+/**
+ * DOC: GUC2PF_VF_STATE_NOTIFY
+ *
+ * The GUC2PF_VF_STATE_NOTIFY message is used by the GuC to notify PF about change
+ * of the VF state.
+ *
+ * This G2H message is sent as `CTB HXG Message`_.
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_GUC_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_EVENT_                                   |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 27:16 | DATA0 = MBZ                                                  |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  15:0 | ACTION = _`GUC_ACTION_GUC2PF_VF_STATE_NOTIFY` = 0x5106       |
+ * +---+-------+--------------------------------------------------------------+
+ * | 1 |  31:0 | DATA1 = **VFID** - VF identifier                             |
+ * +---+-------+--------------------------------------------------------------+
+ * | 2 |  31:0 | DATA2 = **EVENT** - notification event:                      |
+ * |   |       |                                                              |
+ * |   |       |   - _`GUC_PF_NOTIFY_VF_ENABLE` = 1 (only if VFID = 0)        |
+ * |   |       |   - _`GUC_PF_NOTIFY_VF_FLR` = 1                              |
+ * |   |       |   - _`GUC_PF_NOTIFY_VF_FLR_DONE` = 2                         |
+ * |   |       |   - _`GUC_PF_NOTIFY_VF_PAUSE_DONE` = 3                       |
+ * |   |       |   - _`GUC_PF_NOTIFY_VF_FIXUP_DONE` = 4                       |
+ * +---+-------+--------------------------------------------------------------+
+ */
+#define GUC_ACTION_GUC2PF_VF_STATE_NOTIFY		0x5106u
+
+#define GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_LEN		(GUC_HXG_EVENT_MSG_MIN_LEN + 2u)
+#define GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_0_MBZ		GUC_HXG_EVENT_MSG_0_DATA0
+#define GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_1_VFID		GUC_HXG_EVENT_MSG_n_DATAn
+#define GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_2_EVENT	GUC_HXG_EVENT_MSG_n_DATAn
+#define   GUC_PF_NOTIFY_VF_ENABLE			1u
+#define   GUC_PF_NOTIFY_VF_FLR				1u
+#define   GUC_PF_NOTIFY_VF_FLR_DONE			2u
+#define   GUC_PF_NOTIFY_VF_PAUSE_DONE			3u
+#define   GUC_PF_NOTIFY_VF_FIXUP_DONE			4u
+
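A hedged sketch of the PF side consuming this event, using only the layout documented above; the handle_* helpers are stand-ins for whatever the PF control logic provides, not functions from this series:

#include <linux/bitfield.h>

static int handle_vfs_enabled(void);
static int handle_vf_flr(u32 vfid);
static int handle_vf_transition_done(u32 vfid, u32 event);

static int pf_handle_vf_state_notify(const u32 *msg, u32 len)
{
	u32 vfid, event;

	if (len != GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_LEN)
		return -EPROTO;

	vfid = FIELD_GET(GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_1_VFID, msg[1]);
	event = FIELD_GET(GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_2_EVENT, msg[2]);

	switch (event) {
	case GUC_PF_NOTIFY_VF_FLR:	/* also GUC_PF_NOTIFY_VF_ENABLE when vfid == 0 */
		return vfid ? handle_vf_flr(vfid) : handle_vfs_enabled();
	case GUC_PF_NOTIFY_VF_FLR_DONE:
	case GUC_PF_NOTIFY_VF_PAUSE_DONE:
	case GUC_PF_NOTIFY_VF_FIXUP_DONE:
		return handle_vf_transition_done(vfid, event);
	default:
		return -EPROTO;
	}
}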
+/**
+ * DOC: PF2GUC_UPDATE_VGT_POLICY
+ *
+ * This message is used by the PF to set `GuC VGT Policy KLVs`_.
+ *
+ * This message must be sent as `CTB HXG Message`_.
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_HOST_                                |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 27:16 | MBZ                                                          |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  15:0 | ACTION = _`GUC_ACTION_PF2GUC_UPDATE_VGT_POLICY` = 0x5502     |
+ * +---+-------+--------------------------------------------------------------+
+ * | 1 |  31:0 | **CFG_ADDR_LO** - dword aligned GGTT offset that             |
+ * |   |       | represents the start of `GuC VGT Policy KLVs`_ list.         |
+ * +---+-------+--------------------------------------------------------------+
+ * | 2 |  31:0 | **CFG_ADDR_HI** - upper 32 bits of above offset.             |
+ * +---+-------+--------------------------------------------------------------+
+ * | 3 |  31:0 | **CFG_SIZE** - size (in dwords) of the config buffer         |
+ * +---+-------+--------------------------------------------------------------+
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_GUC_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_                        |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  27:0 | **COUNT** - number of KLVs successfully applied              |
+ * +---+-------+--------------------------------------------------------------+
+ */
+#define GUC_ACTION_PF2GUC_UPDATE_VGT_POLICY		0x5502u
+
+#define PF2GUC_UPDATE_VGT_POLICY_REQUEST_MSG_LEN	(GUC_HXG_REQUEST_MSG_MIN_LEN + 3u)
+#define PF2GUC_UPDATE_VGT_POLICY_REQUEST_MSG_0_MBZ	GUC_HXG_REQUEST_MSG_0_DATA0
+#define PF2GUC_UPDATE_VGT_POLICY_REQUEST_MSG_1_CFG_ADDR_LO	GUC_HXG_REQUEST_MSG_n_DATAn
+#define PF2GUC_UPDATE_VGT_POLICY_REQUEST_MSG_2_CFG_ADDR_HI	GUC_HXG_REQUEST_MSG_n_DATAn
+#define PF2GUC_UPDATE_VGT_POLICY_REQUEST_MSG_3_CFG_SIZE	GUC_HXG_REQUEST_MSG_n_DATAn
+
+#define PF2GUC_UPDATE_VGT_POLICY_RESPONSE_MSG_LEN	GUC_HXG_RESPONSE_MSG_MIN_LEN
+#define PF2GUC_UPDATE_VGT_POLICY_RESPONSE_MSG_0_COUNT	GUC_HXG_RESPONSE_MSG_0_DATA0
+
+/**
+ * DOC: PF2GUC_UPDATE_VF_CFG
+ *
+ * The `PF2GUC_UPDATE_VF_CFG`_ message is used by PF to provision single VF in GuC.
+ *
+ * This message must be sent as `CTB HXG Message`_.
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_HOST_                                |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 27:16 | MBZ                                                          |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  15:0 | ACTION = _`GUC_ACTION_PF2GUC_UPDATE_VF_CFG` = 0x5503         |
+ * +---+-------+--------------------------------------------------------------+
+ * | 1 |  31:0 | **VFID** - identifier of the VF that the KLV                 |
+ * |   |       | configurations are being applied to                          |
+ * +---+-------+--------------------------------------------------------------+
+ * | 2 |  31:0 | **CFG_ADDR_LO** - dword aligned GGTT offset that represents  |
+ * |   |       | the start of a list of virtualization related KLV configs    |
+ * |   |       | that are to be applied to the VF.                            |
+ * |   |       | If this parameter is zero, the list is not parsed.           |
+ * |   |       | If full configs address parameter is zero and configs_size is|
+ * |   |       | zero associated VF config shall be reset to its default state|
+ * +---+-------+--------------------------------------------------------------+
+ * | 3 |  31:0 | **CFG_ADDR_HI** - upper 32 bits of configs address.          |
+ * +---+-------+--------------------------------------------------------------+
+ * | 4 |  31:0 | **CFG_SIZE** - size (in dwords) of the config buffer         |
+ * +---+-------+--------------------------------------------------------------+
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_GUC_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_                        |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  27:0 | **COUNT** - number of KLVs successfully applied              |
+ * +---+-------+--------------------------------------------------------------+
+ */
+#define GUC_ACTION_PF2GUC_UPDATE_VF_CFG			0x5503u
+
+#define PF2GUC_UPDATE_VF_CFG_REQUEST_MSG_LEN		(GUC_HXG_REQUEST_MSG_MIN_LEN + 4u)
+#define PF2GUC_UPDATE_VF_CFG_REQUEST_MSG_0_MBZ		GUC_HXG_REQUEST_MSG_0_DATA0
+#define PF2GUC_UPDATE_VF_CFG_REQUEST_MSG_1_VFID		GUC_HXG_REQUEST_MSG_n_DATAn
+#define PF2GUC_UPDATE_VF_CFG_REQUEST_MSG_2_CFG_ADDR_LO	GUC_HXG_REQUEST_MSG_n_DATAn
+#define PF2GUC_UPDATE_VF_CFG_REQUEST_MSG_3_CFG_ADDR_HI	GUC_HXG_REQUEST_MSG_n_DATAn
+#define PF2GUC_UPDATE_VF_CFG_REQUEST_MSG_4_CFG_SIZE	GUC_HXG_REQUEST_MSG_n_DATAn
+
+#define PF2GUC_UPDATE_VF_CFG_RESPONSE_MSG_LEN		GUC_HXG_RESPONSE_MSG_MIN_LEN
+#define PF2GUC_UPDATE_VF_CFG_RESPONSE_MSG_0_COUNT	GUC_HXG_RESPONSE_MSG_0_DATA0
+
+/**
+ * DOC: PF2GUC_VF_CONTROL
+ *
+ * The PF2GUC_VF_CONTROL message is used by the PF to trigger VF state change
+ * maintained by the GuC.
+ *
+ * This H2G message must be sent as `CTB HXG Message`_.
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_HOST_                                |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_REQUEST_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 27:16 | DATA0 = MBZ                                                  |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  15:0 | ACTION = _`GUC_ACTION_PF2GUC_VF_CONTROL_CMD` = 0x5506        |
+ * +---+-------+--------------------------------------------------------------+
+ * | 1 |  31:0 | DATA1 = **VFID** - VF identifier                             |
+ * +---+-------+--------------------------------------------------------------+
+ * | 2 |  31:0 | DATA2 = **COMMAND** - control command:                       |
+ * |   |       |                                                              |
+ * |   |       |   - _`GUC_PF_TRIGGER_VF_PAUSE` = 1                           |
+ * |   |       |   - _`GUC_PF_TRIGGER_VF_RESUME` = 2                          |
+ * |   |       |   - _`GUC_PF_TRIGGER_VF_STOP` = 3                            |
+ * |   |       |   - _`GUC_PF_TRIGGER_VF_FLR_START` = 4                       |
+ * |   |       |   - _`GUC_PF_TRIGGER_VF_FLR_FINISH` = 5                      |
+ * +---+-------+--------------------------------------------------------------+
+ *
+ * +---+-------+--------------------------------------------------------------+
+ * |   | Bits  | Description                                                  |
+ * +===+=======+==============================================================+
+ * | 0 |    31 | ORIGIN = GUC_HXG_ORIGIN_GUC_                                 |
+ * |   +-------+--------------------------------------------------------------+
+ * |   | 30:28 | TYPE = GUC_HXG_TYPE_RESPONSE_SUCCESS_                        |
+ * |   +-------+--------------------------------------------------------------+
+ * |   |  27:0 | DATA0 = MBZ                                                  |
+ * +---+-------+--------------------------------------------------------------+
+ */
+#define GUC_ACTION_PF2GUC_VF_CONTROL			0x5506u
+
+#define PF2GUC_VF_CONTROL_REQUEST_MSG_LEN		(GUC_HXG_EVENT_MSG_MIN_LEN + 2u)
+#define PF2GUC_VF_CONTROL_REQUEST_MSG_0_MBZ		GUC_HXG_EVENT_MSG_0_DATA0
+#define PF2GUC_VF_CONTROL_REQUEST_MSG_1_VFID		GUC_HXG_EVENT_MSG_n_DATAn
+#define PF2GUC_VF_CONTROL_REQUEST_MSG_2_COMMAND		GUC_HXG_EVENT_MSG_n_DATAn
+#define   GUC_PF_TRIGGER_VF_PAUSE			1u
+#define   GUC_PF_TRIGGER_VF_RESUME			2u
+#define   GUC_PF_TRIGGER_VF_STOP			3u
+#define   GUC_PF_TRIGGER_VF_FLR_START			4u
+#define   GUC_PF_TRIGGER_VF_FLR_FINISH			5u
+
 #endif
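And the mirror image on the H2G side: a sketch of packing a PF2GUC_VF_CONTROL request per the table above and handing it to the CTB layer. It assumes the standard GuC HXG field macros from guc_messages_abi.h and uses the blocking CT helper for brevity; error handling is trimmed:

#include <linux/bitfield.h>

static int pf_send_vf_control_cmd(struct xe_guc *guc, u32 vfid, u32 command)
{
	u32 request[PF2GUC_VF_CONTROL_REQUEST_MSG_LEN] = {
		FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
		FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
		FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION,
			   GUC_ACTION_PF2GUC_VF_CONTROL),
		vfid,
		command,	/* e.g. GUC_PF_TRIGGER_VF_PAUSE */
	};

	return xe_guc_ct_send_block(&guc->ct, request, ARRAY_SIZE(request));
}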
@@ -319,4 +319,14 @@ enum {
 #define GUC_KLV_VF_CFG_BEGIN_CONTEXT_ID_KEY		0x8a0b
 #define GUC_KLV_VF_CFG_BEGIN_CONTEXT_ID_LEN		1u

+/*
+ * Workaround keys:
+ */
+enum xe_guc_klv_ids {
+	GUC_WORKAROUND_KLV_BLOCK_INTERRUPTS_WHEN_MGSR_BLOCKED		= 0x9002,
+	GUC_WORKAROUND_KLV_ID_GAM_PFQ_SHADOW_TAIL_POLLING		= 0x9005,
+	GUC_WORKAROUND_KLV_ID_DISABLE_MTP_DURING_ASYNC_COMPUTE		= 0x9007,
+	GUC_WA_KLV_NP_RD_WRITE_TO_CLEAR_RCSM_AT_CGP_LATE_RESTORE	= 0x9008,
+};
+
 #endif

@@ -82,6 +82,7 @@ static inline struct drm_i915_private *kdev_to_i915(struct device *kdev)
 #define IS_DG2(dev_priv)	IS_PLATFORM(dev_priv, XE_DG2)
 #define IS_METEORLAKE(dev_priv) IS_PLATFORM(dev_priv, XE_METEORLAKE)
 #define IS_LUNARLAKE(dev_priv)	IS_PLATFORM(dev_priv, XE_LUNARLAKE)
+#define IS_BATTLEMAGE(dev_priv)	IS_PLATFORM(dev_priv, XE_BATTLEMAGE)

 #define IS_HASWELL_ULT(dev_priv) (dev_priv && 0)
 #define IS_BROADWELL_ULT(dev_priv) (dev_priv && 0)
@@ -127,18 +128,22 @@ static inline intel_wakeref_t intel_runtime_pm_get(struct xe_runtime_pm *pm)
 {
 	struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);

-	if (xe_pm_runtime_get(xe) < 0) {
-		xe_pm_runtime_put(xe);
-		return 0;
-	}
-	return 1;
+	return xe_pm_runtime_resume_and_get(xe);
 }

 static inline intel_wakeref_t intel_runtime_pm_get_if_in_use(struct xe_runtime_pm *pm)
 {
 	struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);

-	return xe_pm_runtime_get_if_active(xe);
+	return xe_pm_runtime_get_if_in_use(xe);
 }

+static inline intel_wakeref_t intel_runtime_pm_get_noresume(struct xe_runtime_pm *pm)
+{
+	struct xe_device *xe = container_of(pm, struct xe_device, runtime_pm);
+
+	xe_pm_runtime_get_noresume(xe);
+	return true;
+}
+
 static inline void intel_runtime_pm_put_unchecked(struct xe_runtime_pm *pm)
@@ -17,10 +17,15 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
 {
 	struct xe_bo *bo;
 	int err;
-	u32 flags = XE_BO_CREATE_PINNED_BIT | XE_BO_CREATE_STOLEN_BIT;
+	u32 flags = XE_BO_FLAG_PINNED | XE_BO_FLAG_STOLEN;

-	if (align)
+	if (start < SZ_4K)
+		start = SZ_4K;
+
+	if (align) {
 		size = ALIGN(size, align);
+		start = ALIGN(start, align);
+	}

 	bo = xe_bo_create_locked_range(xe, xe_device_get_root_tile(xe),
 				       NULL, size, start, end,

@@ -25,15 +25,15 @@ static inline u32 intel_uncore_read(struct intel_uncore *uncore,
 	return xe_mmio_read32(__compat_uncore_to_gt(uncore), reg);
 }

-static inline u32 intel_uncore_read8(struct intel_uncore *uncore,
-				     i915_reg_t i915_reg)
+static inline u8 intel_uncore_read8(struct intel_uncore *uncore,
+				    i915_reg_t i915_reg)
 {
 	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));

 	return xe_mmio_read8(__compat_uncore_to_gt(uncore), reg);
 }

-static inline u32 intel_uncore_read16(struct intel_uncore *uncore,
+static inline u16 intel_uncore_read16(struct intel_uncore *uncore,
 				      i915_reg_t i915_reg)
 {
 	struct xe_reg reg = XE_REG(i915_mmio_reg_offset(i915_reg));

@@ -11,7 +11,7 @@

 void intel_fb_bo_framebuffer_fini(struct xe_bo *bo)
 {
-	if (bo->flags & XE_BO_CREATE_PINNED_BIT) {
+	if (bo->flags & XE_BO_FLAG_PINNED) {
 		/* Unpin our kernel fb first */
 		xe_bo_lock(bo, false);
 		xe_bo_unpin(bo);
@@ -33,9 +33,9 @@ int intel_fb_bo_framebuffer_init(struct intel_framebuffer *intel_fb,
 	if (ret)
 		goto err;

-	if (!(bo->flags & XE_BO_SCANOUT_BIT)) {
+	if (!(bo->flags & XE_BO_FLAG_SCANOUT)) {
 		/*
-		 * XE_BO_SCANOUT_BIT should ideally be set at creation, or is
+		 * XE_BO_FLAG_SCANOUT should ideally be set at creation, or is
		 * automatically set when creating FB. We cannot change caching
		 * mode when the object is VM_BINDed, so we can only set
		 * coherency with display when unbound.
@@ -45,7 +45,7 @@ int intel_fb_bo_framebuffer_init(struct intel_framebuffer *intel_fb,
 			ret = -EINVAL;
 			goto err;
 		}
-		bo->flags |= XE_BO_SCANOUT_BIT;
+		bo->flags |= XE_BO_FLAG_SCANOUT;
 	}
 	ttm_bo_unreserve(&bo->ttm);
 	return 0;

@@ -42,9 +42,9 @@ struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
 	if (!IS_DGFX(dev_priv)) {
 		obj = xe_bo_create_pin_map(dev_priv, xe_device_get_root_tile(dev_priv),
 					   NULL, size,
-					   ttm_bo_type_kernel, XE_BO_SCANOUT_BIT |
-					   XE_BO_CREATE_STOLEN_BIT |
-					   XE_BO_CREATE_PINNED_BIT);
+					   ttm_bo_type_kernel, XE_BO_FLAG_SCANOUT |
+					   XE_BO_FLAG_STOLEN |
+					   XE_BO_FLAG_PINNED);
 		if (!IS_ERR(obj))
 			drm_info(&dev_priv->drm, "Allocated fbdev into stolen\n");
 		else
@@ -52,9 +52,9 @@ struct drm_framebuffer *intel_fbdev_fb_alloc(struct drm_fb_helper *helper,
 	}
 	if (IS_ERR(obj)) {
 		obj = xe_bo_create_pin_map(dev_priv, xe_device_get_root_tile(dev_priv), NULL, size,
-					   ttm_bo_type_kernel, XE_BO_SCANOUT_BIT |
-					   XE_BO_CREATE_VRAM_IF_DGFX(xe_device_get_root_tile(dev_priv)) |
-					   XE_BO_CREATE_PINNED_BIT);
+					   ttm_bo_type_kernel, XE_BO_FLAG_SCANOUT |
+					   XE_BO_FLAG_VRAM_IF_DGFX(xe_device_get_root_tile(dev_priv)) |
+					   XE_BO_FLAG_PINNED);
 	}

 	if (IS_ERR(obj)) {
@@ -81,8 +81,8 @@ int intel_fbdev_fb_fill_info(struct drm_i915_private *i915, struct fb_info *info
 {
 	struct pci_dev *pdev = to_pci_dev(i915->drm.dev);

-	if (!(obj->flags & XE_BO_CREATE_SYSTEM_BIT)) {
-		if (obj->flags & XE_BO_CREATE_STOLEN_BIT)
+	if (!(obj->flags & XE_BO_FLAG_SYSTEM)) {
+		if (obj->flags & XE_BO_FLAG_STOLEN)
 			info->fix.smem_start = xe_ttm_stolen_io_offset(obj, 0);
 		else
 			info->fix.smem_start =

@@ -101,8 +101,6 @@ static void display_destroy(struct drm_device *dev, void *dummy)
  */
 int xe_display_create(struct xe_device *xe)
 {
-	int err;
-
 	spin_lock_init(&xe->display.fb_tracking.lock);

 	xe->display.hotplug.dp_wq = alloc_ordered_workqueue("xe-dp", 0);
@@ -110,11 +108,7 @@ int xe_display_create(struct xe_device *xe)
 	drmm_mutex_init(&xe->drm, &xe->sb_lock);
 	xe->enabled_irq_mask = ~0;

-	err = drmm_add_action_or_reset(&xe->drm, display_destroy, NULL);
-	if (err)
-		return err;
-
-	return 0;
+	return drmm_add_action_or_reset(&xe->drm, display_destroy, NULL);
 }

 static void xe_display_fini_nommio(struct drm_device *dev, void *dummy)

@@ -45,8 +45,8 @@ bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *d
 	obj = xe_bo_create_pin_map(i915, xe_device_get_root_tile(i915),
 				   NULL, PAGE_ALIGN(size),
 				   ttm_bo_type_kernel,
-				   XE_BO_CREATE_VRAM_IF_DGFX(xe_device_get_root_tile(i915)) |
-				   XE_BO_CREATE_GGTT_BIT);
+				   XE_BO_FLAG_VRAM_IF_DGFX(xe_device_get_root_tile(i915)) |
+				   XE_BO_FLAG_GGTT);
 	if (IS_ERR(obj)) {
 		kfree(vma);
 		return false;
@@ -10,6 +10,7 @@
 #include "intel_fb_pin.h"
 #include "xe_ggtt.h"
 #include "xe_gt.h"
+#include "xe_pm.h"

 #include <drm/ttm/ttm_bo.h>

@@ -30,7 +31,7 @@ write_dpt_rotated(struct xe_bo *bo, struct iosys_map *map, u32 *dpt_ofs, u32 bo_

 	for (row = 0; row < height; row++) {
 		u64 pte = ggtt->pt_ops->pte_encode_bo(bo, src_idx * XE_PAGE_SIZE,
-						      xe->pat.idx[XE_CACHE_WB]);
+						      xe->pat.idx[XE_CACHE_NONE]);

 		iosys_map_wr(map, *dpt_ofs, u64, pte);
 		*dpt_ofs += 8;
@@ -62,7 +63,7 @@ write_dpt_remapped(struct xe_bo *bo, struct iosys_map *map, u32 *dpt_ofs,
 		for (column = 0; column < width; column++) {
 			iosys_map_wr(map, *dpt_ofs, u64,
 				     pte_encode_bo(bo, src_idx * XE_PAGE_SIZE,
-				     xe->pat.idx[XE_CACHE_WB]));
+				     xe->pat.idx[XE_CACHE_NONE]));

 			*dpt_ofs += 8;
 			src_idx++;
@@ -99,18 +100,21 @@ static int __xe_pin_fb_vma_dpt(struct intel_framebuffer *fb,
 	if (IS_DGFX(xe))
 		dpt = xe_bo_create_pin_map(xe, tile0, NULL, dpt_size,
 					   ttm_bo_type_kernel,
-					   XE_BO_CREATE_VRAM0_BIT |
-					   XE_BO_CREATE_GGTT_BIT);
+					   XE_BO_FLAG_VRAM0 |
+					   XE_BO_FLAG_GGTT |
+					   XE_BO_FLAG_PAGETABLE);
 	else
 		dpt = xe_bo_create_pin_map(xe, tile0, NULL, dpt_size,
 					   ttm_bo_type_kernel,
-					   XE_BO_CREATE_STOLEN_BIT |
-					   XE_BO_CREATE_GGTT_BIT);
+					   XE_BO_FLAG_STOLEN |
+					   XE_BO_FLAG_GGTT |
+					   XE_BO_FLAG_PAGETABLE);
 	if (IS_ERR(dpt))
 		dpt = xe_bo_create_pin_map(xe, tile0, NULL, dpt_size,
 					   ttm_bo_type_kernel,
-					   XE_BO_CREATE_SYSTEM_BIT |
-					   XE_BO_CREATE_GGTT_BIT);
+					   XE_BO_FLAG_SYSTEM |
+					   XE_BO_FLAG_GGTT |
+					   XE_BO_FLAG_PAGETABLE);
 	if (IS_ERR(dpt))
 		return PTR_ERR(dpt);

@@ -119,7 +123,7 @@ static int __xe_pin_fb_vma_dpt(struct intel_framebuffer *fb,

 		for (x = 0; x < size / XE_PAGE_SIZE; x++) {
 			u64 pte = ggtt->pt_ops->pte_encode_bo(bo, x * XE_PAGE_SIZE,
-							      xe->pat.idx[XE_CACHE_WB]);
+							      xe->pat.idx[XE_CACHE_NONE]);

 			iosys_map_wr(&dpt->vmap, x * 8, u64, pte);
 		}
@@ -165,7 +169,7 @@ write_ggtt_rotated(struct xe_bo *bo, struct xe_ggtt *ggtt, u32 *ggtt_ofs, u32 bo

 	for (row = 0; row < height; row++) {
 		u64 pte = ggtt->pt_ops->pte_encode_bo(bo, src_idx * XE_PAGE_SIZE,
-						      xe->pat.idx[XE_CACHE_WB]);
+						      xe->pat.idx[XE_CACHE_NONE]);

 		xe_ggtt_set_pte(ggtt, *ggtt_ofs, pte);
 		*ggtt_ofs += XE_PAGE_SIZE;
@@ -190,7 +194,7 @@ static int __xe_pin_fb_vma_ggtt(struct intel_framebuffer *fb,
 	/* TODO: Consider sharing framebuffer mapping?
 	 * embed i915_vma inside intel_framebuffer
 	 */
-	xe_device_mem_access_get(tile_to_xe(ggtt->tile));
+	xe_pm_runtime_get_noresume(tile_to_xe(ggtt->tile));
 	ret = mutex_lock_interruptible(&ggtt->lock);
 	if (ret)
 		goto out;
@@ -211,7 +215,7 @@ static int __xe_pin_fb_vma_ggtt(struct intel_framebuffer *fb,

 		for (x = 0; x < size; x += XE_PAGE_SIZE) {
 			u64 pte = ggtt->pt_ops->pte_encode_bo(bo, x,
-							      xe->pat.idx[XE_CACHE_WB]);
+							      xe->pat.idx[XE_CACHE_NONE]);

 			xe_ggtt_set_pte(ggtt, vma->node.start + x, pte);
 		}
@@ -238,11 +242,10 @@ static int __xe_pin_fb_vma_ggtt(struct intel_framebuffer *fb,
 						  rot_info->plane[i].dst_stride);
 	}

-	xe_ggtt_invalidate(ggtt);
 out_unlock:
 	mutex_unlock(&ggtt->lock);
 out:
-	xe_device_mem_access_put(tile_to_xe(ggtt->tile));
+	xe_pm_runtime_put(tile_to_xe(ggtt->tile));
 	return ret;
 }

@@ -260,7 +263,7 @@ static struct i915_vma *__xe_pin_fb_vma(struct intel_framebuffer *fb,

 	if (IS_DGFX(to_xe_device(bo->ttm.base.dev)) &&
 	    intel_fb_rc_ccs_cc_plane(&fb->base) >= 0 &&
-	    !(bo->flags & XE_BO_NEEDS_CPU_ACCESS)) {
+	    !(bo->flags & XE_BO_FLAG_NEEDS_CPU_ACCESS)) {
 		struct xe_tile *tile = xe_device_get_root_tile(xe);

 		/*
@@ -321,7 +324,7 @@ static void __xe_unpin_fb_vma(struct i915_vma *vma)
 		xe_bo_unpin_map_no_vm(vma->dpt);
 	else if (!drm_mm_node_allocated(&vma->bo->ggtt_node) ||
 		 vma->bo->ggtt_node.start != vma->node.start)
-		xe_ggtt_remove_node(ggtt, &vma->node);
+		xe_ggtt_remove_node(ggtt, &vma->node, false);

 	ttm_bo_reserve(&vma->bo->ttm, false, false, NULL);
 	ttm_bo_unpin(&vma->bo->ttm);
@@ -353,7 +356,7 @@ int intel_plane_pin_fb(struct intel_plane_state *plane_state)
 	struct i915_vma *vma;

 	/* We reject creating !SCANOUT fb's, so this is weird.. */
-	drm_WARN_ON(bo->ttm.base.dev, !(bo->flags & XE_BO_SCANOUT_BIT));
+	drm_WARN_ON(bo->ttm.base.dev, !(bo->flags & XE_BO_FLAG_SCANOUT));

 	vma = __xe_pin_fb_vma(to_intel_framebuffer(fb), &plane_state->view.gtt);
 	if (IS_ERR(vma))
@@ -381,4 +384,4 @@ struct i915_address_space *intel_dpt_create(struct intel_framebuffer *fb)
 void intel_dpt_destroy(struct i915_address_space *vm)
 {
-	return;
 }
 }
@ -3,32 +3,250 @@
|
||||
* Copyright 2023, Intel Corporation.
|
||||
*/
|
||||
|
||||
#include "i915_drv.h"
|
||||
#include <drm/drm_print.h>
|
||||
#include <drm/i915_hdcp_interface.h>
|
||||
#include <linux/delay.h>
|
||||
|
||||
#include "abi/gsc_command_header_abi.h"
|
||||
#include "intel_hdcp_gsc.h"
|
||||
#include "intel_hdcp_gsc_message.h"
|
||||
#include "xe_bo.h"
|
||||
#include "xe_device.h"
|
||||
#include "xe_device_types.h"
|
||||
#include "xe_gsc_proxy.h"
|
||||
#include "xe_gsc_submit.h"
|
||||
#include "xe_gt.h"
|
||||
#include "xe_map.h"
|
||||
#include "xe_pm.h"
|
||||
#include "xe_uc_fw.h"
|
||||
|
||||
bool intel_hdcp_gsc_cs_required(struct drm_i915_private *i915)
|
||||
#define HECI_MEADDRESS_HDCP 18
|
||||
|
||||
struct intel_hdcp_gsc_message {
|
||||
struct xe_bo *hdcp_bo;
|
||||
u64 hdcp_cmd_in;
|
||||
u64 hdcp_cmd_out;
|
||||
};
|
||||
|
||||
#define HDCP_GSC_HEADER_SIZE sizeof(struct intel_gsc_mtl_header)
|
||||
|
||||
bool intel_hdcp_gsc_cs_required(struct xe_device *xe)
|
||||
{
|
||||
return true;
|
||||
return DISPLAY_VER(xe) >= 14;
|
||||
}
|
||||
|
||||
bool intel_hdcp_gsc_check_status(struct drm_i915_private *i915)
|
||||
bool intel_hdcp_gsc_check_status(struct xe_device *xe)
|
||||
{
|
||||
return false;
|
||||
struct xe_tile *tile = xe_device_get_root_tile(xe);
|
||||
struct xe_gt *gt = tile->media_gt;
|
||||
bool ret = true;
|
||||
|
||||
if (!xe_uc_fw_is_enabled(>->uc.gsc.fw))
|
||||
return false;
|
||||
|
||||
xe_pm_runtime_get(xe);
|
||||
if (xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC)) {
|
||||
drm_dbg_kms(&xe->drm,
|
||||
"failed to get forcewake to check proxy status\n");
|
||||
ret = false;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (!xe_gsc_proxy_init_done(>->uc.gsc))
|
||||
ret = false;
|
||||
|
||||
xe_force_wake_put(gt_to_fw(gt), XE_FW_GSC);
|
||||
out:
|
||||
xe_pm_runtime_put(xe);
|
||||
return ret;
|
||||
}
|
||||
|
||||
int intel_hdcp_gsc_init(struct drm_i915_private *i915)
|
||||
/*This function helps allocate memory for the command that we will send to gsc cs */
|
||||
static int intel_hdcp_gsc_initialize_message(struct xe_device *xe,
|
||||
struct intel_hdcp_gsc_message *hdcp_message)
|
||||
{
|
||||
drm_info(&i915->drm, "HDCP support not yet implemented\n");
|
||||
return -ENODEV;
|
||||
struct xe_bo *bo = NULL;
|
||||
u64 cmd_in, cmd_out;
|
||||
int ret = 0;
|
||||
|
||||
/* allocate object of two page for HDCP command memory and store it */
|
||||
bo = xe_bo_create_pin_map(xe, xe_device_get_root_tile(xe), NULL, PAGE_SIZE * 2,
|
||||
ttm_bo_type_kernel,
|
||||
XE_BO_FLAG_SYSTEM |
|
||||
XE_BO_FLAG_GGTT);
|
||||
|
||||
if (IS_ERR(bo)) {
|
||||
drm_err(&xe->drm, "Failed to allocate bo for HDCP streaming command!\n");
|
||||
ret = PTR_ERR(bo);
|
||||
goto out;
|
||||
}
|
||||
|
||||
cmd_in = xe_bo_ggtt_addr(bo);
|
||||
cmd_out = cmd_in + PAGE_SIZE;
|
||||
xe_map_memset(xe, &bo->vmap, 0, 0, bo->size);
|
||||
|
||||
hdcp_message->hdcp_bo = bo;
|
||||
hdcp_message->hdcp_cmd_in = cmd_in;
|
||||
hdcp_message->hdcp_cmd_out = cmd_out;
|
||||
out:
|
||||
return ret;
|
||||
}
|
||||
|
||||
void intel_hdcp_gsc_fini(struct drm_i915_private *i915)
|
||||
static int intel_hdcp_gsc_hdcp2_init(struct xe_device *xe)
|
||||
{
|
||||
struct intel_hdcp_gsc_message *hdcp_message;
|
||||
int ret;
|
||||
|
||||
hdcp_message = kzalloc(sizeof(*hdcp_message), GFP_KERNEL);
|
||||
|
||||
if (!hdcp_message)
|
||||
return -ENOMEM;
|
||||
|
||||
/*
|
||||
* NOTE: No need to lock the comp mutex here as it is already
|
||||
* going to be taken before this function called
|
||||
*/
|
||||
ret = intel_hdcp_gsc_initialize_message(xe, hdcp_message);
|
||||
if (ret) {
|
||||
drm_err(&xe->drm, "Could not initialize hdcp_message\n");
|
||||
kfree(hdcp_message);
|
||||
return ret;
|
||||
}
|
||||
|
||||
xe->display.hdcp.hdcp_message = hdcp_message;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ssize_t intel_hdcp_gsc_msg_send(struct drm_i915_private *i915, u8 *msg_in,
|
||||
static const struct i915_hdcp_ops gsc_hdcp_ops = {
|
||||
.initiate_hdcp2_session = intel_hdcp_gsc_initiate_session,
|
||||
.verify_receiver_cert_prepare_km =
|
||||
intel_hdcp_gsc_verify_receiver_cert_prepare_km,
|
||||
.verify_hprime = intel_hdcp_gsc_verify_hprime,
|
||||
.store_pairing_info = intel_hdcp_gsc_store_pairing_info,
|
||||
.initiate_locality_check = intel_hdcp_gsc_initiate_locality_check,
|
||||
.verify_lprime = intel_hdcp_gsc_verify_lprime,
|
||||
.get_session_key = intel_hdcp_gsc_get_session_key,
|
||||
.repeater_check_flow_prepare_ack =
|
||||
intel_hdcp_gsc_repeater_check_flow_prepare_ack,
|
||||
.verify_mprime = intel_hdcp_gsc_verify_mprime,
|
||||
.enable_hdcp_authentication = intel_hdcp_gsc_enable_authentication,
|
||||
.close_hdcp_session = intel_hdcp_gsc_close_session,
|
||||
};
|
||||
|
||||
int intel_hdcp_gsc_init(struct xe_device *xe)
|
||||
{
|
||||
struct i915_hdcp_arbiter *data;
|
||||
int ret;
|
||||
|
||||
data = kzalloc(sizeof(*data), GFP_KERNEL);
|
||||
if (!data)
|
||||
return -ENOMEM;
|
||||
|
||||
mutex_lock(&xe->display.hdcp.hdcp_mutex);
|
||||
xe->display.hdcp.arbiter = data;
|
||||
xe->display.hdcp.arbiter->hdcp_dev = xe->drm.dev;
|
||||
xe->display.hdcp.arbiter->ops = &gsc_hdcp_ops;
|
||||
ret = intel_hdcp_gsc_hdcp2_init(xe);
|
||||
if (ret)
|
||||
kfree(data);
|
||||
|
||||
mutex_unlock(&xe->display.hdcp.hdcp_mutex);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
void intel_hdcp_gsc_fini(struct xe_device *xe)
|
||||
{
|
||||
struct intel_hdcp_gsc_message *hdcp_message =
|
||||
xe->display.hdcp.hdcp_message;
|
||||
|
||||
if (!hdcp_message)
|
||||
return;
|
||||
|
||||
xe_bo_unpin_map_no_vm(hdcp_message->hdcp_bo);
|
||||
kfree(hdcp_message);
|
||||
}
|
||||
|
||||
+static int xe_gsc_send_sync(struct xe_device *xe,
+			    struct intel_hdcp_gsc_message *hdcp_message,
+			    u32 msg_size_in, u32 msg_size_out,
+			    u32 addr_out_off)
+{
+	struct xe_gt *gt = hdcp_message->hdcp_bo->tile->media_gt;
+	struct iosys_map *map = &hdcp_message->hdcp_bo->vmap;
+	struct xe_gsc *gsc = &gt->uc.gsc;
+	int ret;
+
+	ret = xe_gsc_pkt_submit_kernel(gsc, hdcp_message->hdcp_cmd_in, msg_size_in,
+				       hdcp_message->hdcp_cmd_out, msg_size_out);
+	if (ret) {
+		drm_err(&xe->drm, "failed to send gsc HDCP msg (%d)\n", ret);
+		return ret;
+	}
+
+	if (xe_gsc_check_and_update_pending(xe, map, 0, map, addr_out_off))
+		return -EAGAIN;
+
+	ret = xe_gsc_read_out_header(xe, map, addr_out_off,
+				     sizeof(struct hdcp_cmd_header), NULL);
+
+	return ret;
+}
+
 ssize_t intel_hdcp_gsc_msg_send(struct xe_device *xe, u8 *msg_in,
 				size_t msg_in_len, u8 *msg_out,
 				size_t msg_out_len)
 {
-	return -ENODEV;
+	const size_t max_msg_size = PAGE_SIZE - HDCP_GSC_HEADER_SIZE;
+	struct intel_hdcp_gsc_message *hdcp_message;
+	u64 host_session_id;
+	u32 msg_size_in, msg_size_out;
+	u32 addr_out_off, addr_in_wr_off = 0;
+	int ret, tries = 0;
+
+	if (msg_in_len > max_msg_size || msg_out_len > max_msg_size) {
+		ret = -ENOSPC;
+		goto out;
+	}
+
+	msg_size_in = msg_in_len + HDCP_GSC_HEADER_SIZE;
+	msg_size_out = msg_out_len + HDCP_GSC_HEADER_SIZE;
+	hdcp_message = xe->display.hdcp.hdcp_message;
+	addr_out_off = PAGE_SIZE;
+
+	host_session_id = xe_gsc_create_host_session_id();
+	xe_pm_runtime_get_noresume(xe);
+	addr_in_wr_off = xe_gsc_emit_header(xe, &hdcp_message->hdcp_bo->vmap,
+					    addr_in_wr_off, HECI_MEADDRESS_HDCP,
+					    host_session_id, msg_in_len);
+	xe_map_memcpy_to(xe, &hdcp_message->hdcp_bo->vmap, addr_in_wr_off,
+			 msg_in, msg_in_len);
+	/*
+	 * Keep sending request in case the pending bit is set no need to add
+	 * message handle as we are using same address hence loc. of header is
+	 * same and it will contain the message handle. we will send the message
+	 * 20 times each message 50 ms apart
+	 */
+	do {
+		ret = xe_gsc_send_sync(xe, hdcp_message, msg_size_in, msg_size_out,
+				       addr_out_off);
+
+		/* Only try again if gsc says so */
+		if (ret != -EAGAIN)
+			break;
+
+		msleep(50);
+
+	} while (++tries < 20);
+
+	if (ret)
+		goto out;
+
+	xe_map_memcpy_from(xe, msg_out, &hdcp_message->hdcp_bo->vmap,
+			   addr_out_off + HDCP_GSC_HEADER_SIZE,
+			   msg_out_len);
+
+out:
+	xe_pm_runtime_put(xe);
+	return ret;
 }
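
(Aside for reviewers: a minimal sketch of how a caller could sit on top of the interface above. The wrapper name is hypothetical; only the intel_hdcp_gsc_msg_send() call itself comes from the patch.)

	/* Hypothetical wrapper: one request/reply round trip via the GSC. */
	static int hdcp_roundtrip(struct xe_device *xe, u8 *req, size_t req_len,
				  u8 *rep, size_t rep_len)
	{
		ssize_t ret = intel_hdcp_gsc_msg_send(xe, req, req_len, rep, rep_len);

		return ret < 0 ? ret : 0;
	}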

@@ -6,6 +6,7 @@
 /* for ioread64 */
 #include <linux/io-64-nonatomic-lo-hi.h>
 
+#include "regs/xe_gtt_defs.h"
 #include "xe_ggtt.h"
 
 #include "i915_drv.h"
@@ -62,7 +63,7 @@ initial_plane_bo(struct xe_device *xe,
 	if (plane_config->size == 0)
 		return NULL;
 
-	flags = XE_BO_CREATE_PINNED_BIT | XE_BO_SCANOUT_BIT | XE_BO_CREATE_GGTT_BIT;
+	flags = XE_BO_FLAG_PINNED | XE_BO_FLAG_SCANOUT | XE_BO_FLAG_GGTT;
 
 	base = round_down(plane_config->base, page_size);
 	if (IS_DGFX(xe)) {
@@ -79,7 +80,7 @@ initial_plane_bo(struct xe_device *xe,
 		}
 
 		phys_base = pte & ~(page_size - 1);
-		flags |= XE_BO_CREATE_VRAM0_BIT;
+		flags |= XE_BO_FLAG_VRAM0;
 
 		/*
 		 * We don't currently expect this to ever be placed in the
@@ -101,7 +102,7 @@ initial_plane_bo(struct xe_device *xe,
 		if (!stolen)
 			return NULL;
 		phys_base = base;
-		flags |= XE_BO_CREATE_STOLEN_BIT;
+		flags |= XE_BO_FLAG_STOLEN;
 
 		/*
 		 * If the FB is too big, just don't use it since fbdev is not very
drivers/gpu/drm/xe/instructions/xe_gfx_state_commands.h (new file, 18 lines)
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _XE_GFX_STATE_COMMANDS_H_
+#define _XE_GFX_STATE_COMMANDS_H_
+
+#include "instructions/xe_instr_defs.h"
+
+#define GFX_STATE_OPCODE REG_GENMASK(28, 26)
+
+#define GFX_STATE_CMD(opcode) \
+	(XE_INSTR_GFX_STATE | REG_FIELD_PREP(GFX_STATE_OPCODE, opcode))
+
+#define STATE_WRITE_INLINE GFX_STATE_CMD(0x0)
+
+#endif
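
(Aside: the new GFX_STATE opcode space is a plain bitfield; a sanity helper along these lines, hypothetical and not in the patch, shows how the macros compose and decompose.)

	/* Hypothetical: an opcode round-trips through the field helpers. */
	static inline bool gfx_state_opcode_ok(u32 opcode)
	{
		return REG_FIELD_GET(GFX_STATE_OPCODE, GFX_STATE_CMD(opcode)) == opcode;
	}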

@@ -47,6 +47,8 @@
 #define GPGPU_CSR_BASE_ADDRESS GFXPIPE_COMMON_CMD(0x1, 0x4)
 #define STATE_COMPUTE_MODE GFXPIPE_COMMON_CMD(0x1, 0x5)
 #define CMD_3DSTATE_BTD GFXPIPE_COMMON_CMD(0x1, 0x6)
+#define STATE_SYSTEM_MEM_FENCE_ADDRESS GFXPIPE_COMMON_CMD(0x1, 0x9)
+#define STATE_CONTEXT_DATA_BASE_ADDRESS GFXPIPE_COMMON_CMD(0x1, 0xB)
 
 #define CMD_3DSTATE_VF_STATISTICS GFXPIPE_SINGLE_DW_CMD(0x0, 0xB)
 
@@ -71,6 +73,7 @@
 #define CMD_3DSTATE_WM GFXPIPE_3D_CMD(0x0, 0x14)
 #define CMD_3DSTATE_CONSTANT_VS GFXPIPE_3D_CMD(0x0, 0x15)
 #define CMD_3DSTATE_CONSTANT_GS GFXPIPE_3D_CMD(0x0, 0x16)
+#define CMD_3DSTATE_CONSTANT_PS GFXPIPE_3D_CMD(0x0, 0x17)
 #define CMD_3DSTATE_SAMPLE_MASK GFXPIPE_3D_CMD(0x0, 0x18)
 #define CMD_3DSTATE_CONSTANT_HS GFXPIPE_3D_CMD(0x0, 0x19)
 #define CMD_3DSTATE_CONSTANT_DS GFXPIPE_3D_CMD(0x0, 0x1A)
 
@@ -17,6 +17,7 @@
 #define XE_INSTR_MI REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x0)
 #define XE_INSTR_GSC REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x2)
 #define XE_INSTR_GFXPIPE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x3)
+#define XE_INSTR_GFX_STATE REG_FIELD_PREP(XE_INSTR_CMD_TYPE, 0x4)
 
 /*
  * Most (but not all) instructions have a "length" field in the instruction
 
@@ -104,9 +104,6 @@
 #define FF_SLICE_CS_CHICKEN1(base) XE_REG((base) + 0xe0, XE_REG_OPTION_MASKED)
 #define FFSC_PERCTX_PREEMPT_CTRL REG_BIT(14)
 
-#define FF_SLICE_CS_CHICKEN2(base) XE_REG((base) + 0xe4, XE_REG_OPTION_MASKED)
-#define PERF_FIX_BALANCING_CFE_DISABLE REG_BIT(15)
-
 #define CS_DEBUG_MODE1(base) XE_REG((base) + 0xec, XE_REG_OPTION_MASKED)
 #define FF_DOP_CLOCK_GATE_DISABLE REG_BIT(1)
 #define REPLAY_MODE_GRANULARITY REG_BIT(0)
 
@@ -38,4 +38,11 @@
 #define HECI_H_GS1(base) XE_REG((base) + 0xc4c)
 #define HECI_H_GS1_ER_PREP REG_BIT(0)
 
+#define GSCI_TIMER_STATUS XE_REG(0x11ca28)
+#define GSCI_TIMER_STATUS_VALUE REG_GENMASK(1, 0)
+#define GSCI_TIMER_STATUS_RESET_IN_PROGRESS 0
+#define GSCI_TIMER_STATUS_TIMER_EXPIRED 1
+#define GSCI_TIMER_STATUS_RESET_COMPLETE 2
+#define GSCI_TIMER_STATUS_OUT_OF_RESET 3
+
 #endif
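
(Aside: a sketch of how the new timer-status field could be polled. The helper is hypothetical; the register and field macros are the ones added above.)

	/* Hypothetical poll of the GSC reset timer state. */
	static bool gsc_out_of_reset(struct xe_gt *gt)
	{
		u32 val = xe_mmio_read32(gt, GSCI_TIMER_STATUS);

		return REG_FIELD_GET(GSCI_TIMER_STATUS_VALUE, val) ==
		       GSCI_TIMER_STATUS_OUT_OF_RESET;
	}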

@@ -69,10 +69,14 @@
 
 #define XEHP_TILE_ADDR_RANGE(_idx) XE_REG_MCR(0x4900 + (_idx) * 4)
 #define XEHP_FLAT_CCS_BASE_ADDR XE_REG_MCR(0x4910)
+#define XEHP_FLAT_CCS_PTR REG_GENMASK(31, 8)
 
+#define WM_CHICKEN3 XE_REG_MCR(0x5588, XE_REG_OPTION_MASKED)
+#define HIZ_PLANE_COMPRESSION_DIS REG_BIT(10)
+
 #define CHICKEN_RASTER_1 XE_REG_MCR(0x6204, XE_REG_OPTION_MASKED)
 #define DIS_SF_ROUND_NEAREST_EVEN REG_BIT(8)
 
 #define CHICKEN_RASTER_2 XE_REG_MCR(0x6208, XE_REG_OPTION_MASKED)
 #define TBIMR_FAST_CLIP REG_BIT(5)
 
@@ -97,7 +101,8 @@
 #define CACHE_MODE_1 XE_REG(0x7004, XE_REG_OPTION_MASKED)
 #define MSAA_OPTIMIZATION_REDUC_DISABLE REG_BIT(11)
 
-#define COMMON_SLICE_CHICKEN1 XE_REG(0x7010)
+#define COMMON_SLICE_CHICKEN1 XE_REG(0x7010, XE_REG_OPTION_MASKED)
+#define DISABLE_BOTTOM_CLIP_RECTANGLE_TEST REG_BIT(14)
 
 #define HIZ_CHICKEN XE_REG(0x7018, XE_REG_OPTION_MASKED)
 #define DG1_HZ_READ_SUPPRESSION_OPTIMIZATION_DISABLE REG_BIT(14)
@@ -141,6 +146,10 @@
 
 #define XE2_FLAT_CCS_BASE_RANGE_LOWER XE_REG_MCR(0x8800)
 #define XE2_FLAT_CCS_ENABLE REG_BIT(0)
+#define XE2_FLAT_CCS_BASE_LOWER_ADDR_MASK REG_GENMASK(31, 6)
+
+#define XE2_FLAT_CCS_BASE_RANGE_UPPER XE_REG_MCR(0x8804)
+#define XE2_FLAT_CCS_BASE_UPPER_ADDR_MASK REG_GENMASK(7, 0)
 
 #define GSCPSMI_BASE XE_REG(0x880c)
 
@@ -156,7 +165,10 @@
 #define MIRROR_FUSE3 XE_REG(0x9118)
 #define XE2_NODE_ENABLE_MASK REG_GENMASK(31, 16)
 #define L3BANK_PAIR_COUNT 4
+#define XEHPC_GT_L3_MODE_MASK REG_GENMASK(7, 4)
+#define XE2_GT_L3_MODE_MASK REG_GENMASK(7, 4)
 #define L3BANK_MASK REG_GENMASK(3, 0)
+#define XELP_GT_L3_MODE_MASK REG_GENMASK(7, 0)
 /* on Xe_HP the same fuses indicates mslices instead of L3 banks */
 #define MAX_MSLICES 4
 #define MEML3_EN_MASK REG_GENMASK(3, 0)
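
(Aside: the new *_GT_L3_MODE_MASK fields carve the same fuse dword differently per platform; a hypothetical Xe_LP decode, using only the macros above, would look like this.)

	/* Hypothetical: extract the raw Xe_LP L3 mode field from the fuse. */
	static u32 xelp_l3_mode(struct xe_gt *gt)
	{
		return REG_FIELD_GET(XELP_GT_L3_MODE_MASK,
				     xe_mmio_read32(gt, MIRROR_FUSE3));
	}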

@@ -271,6 +283,10 @@
 #define FORCEWAKE_GT XE_REG(0xa188)
 
 #define PG_ENABLE XE_REG(0xa210)
+#define VD2_MFXVDENC_POWERGATE_ENABLE REG_BIT(8)
+#define VD2_HCP_POWERGATE_ENABLE REG_BIT(7)
+#define VD0_MFXVDENC_POWERGATE_ENABLE REG_BIT(4)
+#define VD0_HCP_POWERGATE_ENABLE REG_BIT(3)
 
 #define CTC_MODE XE_REG(0xa26c)
 #define CTC_SHIFT_PARAMETER_MASK REG_GENMASK(2, 1)
@@ -349,6 +365,7 @@
 #define THREAD_EX_ARB_MODE_RR_AFTER_DEP REG_FIELD_PREP(THREAD_EX_ARB_MODE, 0x2)
 
 #define ROW_CHICKEN3 XE_REG_MCR(0xe49c, XE_REG_OPTION_MASKED)
+#define XE2_EUPEND_CHK_FLUSH_DIS REG_BIT(14)
 #define DIS_FIX_EOT1_FLUSH REG_BIT(9)
 
 #define TDL_TSL_CHICKEN XE_REG_MCR(0xe4c4, XE_REG_OPTION_MASKED)
@@ -364,17 +381,22 @@
 #define DISABLE_EARLY_READ REG_BIT(14)
 #define ENABLE_LARGE_GRF_MODE REG_BIT(12)
 #define PUSH_CONST_DEREF_HOLD_DIS REG_BIT(8)
+#define DISABLE_TDL_SVHS_GATING REG_BIT(1)
 #define DISABLE_DOP_GATING REG_BIT(0)
 
 #define RT_CTRL XE_REG_MCR(0xe530)
 #define DIS_NULL_QUERY REG_BIT(10)
+
+#define EU_SYSTOLIC_LIC_THROTTLE_CTL_WITH_LOCK XE_REG_MCR(0xe534)
+#define EU_SYSTOLIC_LIC_THROTTLE_CTL_LOCK_BIT REG_BIT(31)
 
 #define XEHP_HDC_CHICKEN0 XE_REG_MCR(0xe5f0, XE_REG_OPTION_MASKED)
 #define LSC_L1_FLUSH_CTL_3D_DATAPORT_FLUSH_EVENTS_MASK REG_GENMASK(13, 11)
 #define DIS_ATOMIC_CHAINING_TYPED_WRITES REG_BIT(3)
 
 #define LSC_CHICKEN_BIT_0 XE_REG_MCR(0xe7c8)
 #define DISABLE_D8_D16_COASLESCE REG_BIT(30)
 #define WR_REQ_CHAINING_DIS REG_BIT(26)
+#define TGM_WRITE_EOM_FORCE REG_BIT(17)
 #define FORCE_1_SUB_MESSAGE_PER_FRAGMENT REG_BIT(15)
+#define SEQUENTIAL_ACCESS_UPGRADE_DISABLE REG_BIT(13)
@@ -439,7 +461,13 @@
 #define GT_PERF_STATUS XE_REG(0x1381b4)
 #define VOLTAGE_MASK REG_GENMASK(10, 0)
 
-#define GT_INTR_DW(x) XE_REG(0x190018 + ((x) * 4))
+/*
+ * Note: Interrupt registers 1900xx are VF accessible only until version 12.50.
+ * On newer platforms, VFs are using memory-based interrupts instead.
+ * However, for simplicity we keep this XE_REG_OPTION_VF tag intact.
+ */
+
+#define GT_INTR_DW(x) XE_REG(0x190018 + ((x) * 4), XE_REG_OPTION_VF)
 #define INTR_GSC REG_BIT(31)
 #define INTR_GUC REG_BIT(25)
 #define INTR_MGUC REG_BIT(24)
@@ -450,16 +478,16 @@
 #define INTR_VECS(x) REG_BIT(31 - (x))
 #define INTR_VCS(x) REG_BIT(x)
 
-#define RENDER_COPY_INTR_ENABLE XE_REG(0x190030)
-#define VCS_VECS_INTR_ENABLE XE_REG(0x190034)
-#define GUC_SG_INTR_ENABLE XE_REG(0x190038)
+#define RENDER_COPY_INTR_ENABLE XE_REG(0x190030, XE_REG_OPTION_VF)
+#define VCS_VECS_INTR_ENABLE XE_REG(0x190034, XE_REG_OPTION_VF)
+#define GUC_SG_INTR_ENABLE XE_REG(0x190038, XE_REG_OPTION_VF)
 #define ENGINE1_MASK REG_GENMASK(31, 16)
 #define ENGINE0_MASK REG_GENMASK(15, 0)
-#define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c)
-#define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044)
-#define CCS_RSVD_INTR_ENABLE XE_REG(0x190048)
+#define GPM_WGBOXPERF_INTR_ENABLE XE_REG(0x19003c, XE_REG_OPTION_VF)
+#define GUNIT_GSC_INTR_ENABLE XE_REG(0x190044, XE_REG_OPTION_VF)
+#define CCS_RSVD_INTR_ENABLE XE_REG(0x190048, XE_REG_OPTION_VF)
 
-#define INTR_IDENTITY_REG(x) XE_REG(0x190060 + ((x) * 4))
+#define INTR_IDENTITY_REG(x) XE_REG(0x190060 + ((x) * 4), XE_REG_OPTION_VF)
 #define INTR_DATA_VALID REG_BIT(31)
 #define INTR_ENGINE_INSTANCE(x) REG_FIELD_GET(GENMASK(25, 20), x)
 #define INTR_ENGINE_CLASS(x) REG_FIELD_GET(GENMASK(18, 16), x)
@@ -468,16 +496,16 @@
 #define OTHER_GSC_HECI2_INSTANCE 3
 #define OTHER_GSC_INSTANCE 6
 
-#define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4))
-#define RCS0_RSVD_INTR_MASK XE_REG(0x190090)
-#define BCS_RSVD_INTR_MASK XE_REG(0x1900a0)
-#define VCS0_VCS1_INTR_MASK XE_REG(0x1900a8)
-#define VCS2_VCS3_INTR_MASK XE_REG(0x1900ac)
-#define VECS0_VECS1_INTR_MASK XE_REG(0x1900d0)
+#define IIR_REG_SELECTOR(x) XE_REG(0x190070 + ((x) * 4), XE_REG_OPTION_VF)
+#define RCS0_RSVD_INTR_MASK XE_REG(0x190090, XE_REG_OPTION_VF)
+#define BCS_RSVD_INTR_MASK XE_REG(0x1900a0, XE_REG_OPTION_VF)
+#define VCS0_VCS1_INTR_MASK XE_REG(0x1900a8, XE_REG_OPTION_VF)
+#define VCS2_VCS3_INTR_MASK XE_REG(0x1900ac, XE_REG_OPTION_VF)
+#define VECS0_VECS1_INTR_MASK XE_REG(0x1900d0, XE_REG_OPTION_VF)
 #define HECI2_RSVD_INTR_MASK XE_REG(0x1900e4)
-#define GUC_SG_INTR_MASK XE_REG(0x1900e8)
-#define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec)
-#define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4)
+#define GUC_SG_INTR_MASK XE_REG(0x1900e8, XE_REG_OPTION_VF)
+#define GPM_WGBOXPERF_INTR_MASK XE_REG(0x1900ec, XE_REG_OPTION_VF)
+#define GUNIT_GSC_INTR_MASK XE_REG(0x1900f4, XE_REG_OPTION_VF)
 #define CCS0_CCS1_INTR_MASK XE_REG(0x190100)
 #define CCS2_CCS3_INTR_MASK XE_REG(0x190104)
 #define XEHPC_BCS1_BCS2_INTR_MASK XE_REG(0x190110)
@@ -486,6 +514,7 @@
 #define XEHPC_BCS7_BCS8_INTR_MASK XE_REG(0x19011c)
 #define GT_WAIT_SEMAPHORE_INTERRUPT REG_BIT(11)
 #define GT_CONTEXT_SWITCH_INTERRUPT REG_BIT(8)
+#define GSC_ER_COMPLETE REG_BIT(5)
 #define GT_RENDER_PIPECTL_NOTIFY_INTERRUPT REG_BIT(4)
 #define GT_CS_MASTER_ERROR_INTERRUPT REG_BIT(3)
 #define GT_RENDER_USER_INTERRUPT REG_BIT(0)
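
(Aside: the identity-register helpers above are plain field extractions; a hypothetical decode of one identity dword, built only on the macros shown, reads as follows.)

	/* Hypothetical: split an interrupt identity dword into its fields. */
	static void decode_intr_identity(u32 ident, bool *valid,
					 u32 *engine_class, u32 *engine_instance)
	{
		*valid = ident & INTR_DATA_VALID;
		*engine_class = INTR_ENGINE_CLASS(ident);
		*engine_instance = INTR_ENGINE_INSTANCE(ident);
	}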

drivers/gpu/drm/xe/regs/xe_gtt_defs.h (new file, 37 lines)
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#ifndef _XE_GTT_DEFS_H_
+#define _XE_GTT_DEFS_H_
+
+#define XELPG_GGTT_PTE_PAT0 BIT_ULL(52)
+#define XELPG_GGTT_PTE_PAT1 BIT_ULL(53)
+
+#define GGTT_PTE_VFID GENMASK_ULL(11, 2)
+
+#define GUC_GGTT_TOP 0xFEE00000
+
+#define XELPG_PPGTT_PTE_PAT3 BIT_ULL(62)
+#define XE2_PPGTT_PTE_PAT4 BIT_ULL(61)
+#define XE_PPGTT_PDE_PDPE_PAT2 BIT_ULL(12)
+#define XE_PPGTT_PTE_PAT2 BIT_ULL(7)
+#define XE_PPGTT_PTE_PAT1 BIT_ULL(4)
+#define XE_PPGTT_PTE_PAT0 BIT_ULL(3)
+
+#define XE_PDE_PS_2M BIT_ULL(7)
+#define XE_PDPE_PS_1G BIT_ULL(7)
+#define XE_PDE_IPS_64K BIT_ULL(11)
+
+#define XE_GGTT_PTE_DM BIT_ULL(1)
+#define XE_USM_PPGTT_PTE_AE BIT_ULL(10)
+#define XE_PPGTT_PTE_DM BIT_ULL(11)
+#define XE_PDE_64K BIT_ULL(6)
+#define XE_PTE_PS64 BIT_ULL(8)
+#define XE_PTE_NULL BIT_ULL(9)
+
+#define XE_PAGE_PRESENT BIT_ULL(0)
+#define XE_PAGE_RW BIT_ULL(1)
+
+#endif
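
(Aside: to make the PTE bits concrete, a toy composition under the assumption that XE_PPGTT_PTE_DM marks device memory; this is an illustration of the flag layout, not the driver's real PTE encode path.)

	/* Toy example: a present, writable PPGTT entry, optionally in VRAM. */
	static u64 toy_ppgtt_pte(u64 paddr, bool is_vram)
	{
		u64 pte = paddr | XE_PAGE_PRESENT | XE_PAGE_RW;

		if (is_vram)
			pte |= XE_PPGTT_PTE_DM;

		return pte;
	}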

@@ -100,16 +100,23 @@
 #define GT_PM_CONFIG XE_REG(0x13816c)
 #define GT_DOORBELL_ENABLE REG_BIT(0)
 
-#define GUC_HOST_INTERRUPT XE_REG(0x1901f0)
+#define GUC_HOST_INTERRUPT XE_REG(0x1901f0, XE_REG_OPTION_VF)
 
-#define VF_SW_FLAG(n) XE_REG(0x190240 + (n) * 4)
+#define VF_SW_FLAG(n) XE_REG(0x190240 + (n) * 4, XE_REG_OPTION_VF)
 #define VF_SW_FLAG_COUNT 4
 
-#define MED_GUC_HOST_INTERRUPT XE_REG(0x190304)
+#define MED_GUC_HOST_INTERRUPT XE_REG(0x190304, XE_REG_OPTION_VF)
 
-#define MED_VF_SW_FLAG(n) XE_REG(0x190310 + (n) * 4)
+#define MED_VF_SW_FLAG(n) XE_REG(0x190310 + (n) * 4, XE_REG_OPTION_VF)
 #define MED_VF_SW_FLAG_COUNT 4
 
+#define GUC_TLB_INV_CR XE_REG(0xcee8)
+#define GUC_TLB_INV_CR_INVALIDATE REG_BIT(0)
+#define PVC_GUC_TLB_INV_DESC0 XE_REG(0xcf7c)
+#define PVC_GUC_TLB_INV_DESC0_VALID REG_BIT(0)
+#define PVC_GUC_TLB_INV_DESC1 XE_REG(0xcf80)
+#define PVC_GUC_TLB_INV_DESC1_INVALIDATE REG_BIT(6)
+
 /* GuC Interrupt Vector */
 #define GUC_INTR_GUC2HOST REG_BIT(15)
 #define GUC_INTR_EXEC_ERROR REG_BIT(14)
 
@@ -6,6 +6,8 @@
 #ifndef _XE_REG_DEFS_H_
 #define _XE_REG_DEFS_H_
 
+#include <linux/build_bug.h>
+
 #include "compat-i915-headers/i915_reg_defs.h"
 
 /**
@@ -35,6 +37,10 @@ struct xe_reg {
 	 * value can inspect it.
 	 */
 	u32 mcr:1;
+	/**
+	 * @vf: register is accessible from the Virtual Function.
+	 */
+	u32 vf:1;
 	/**
 	 * @ext: access MMIO extension space for current register.
 	 */
@@ -44,6 +50,7 @@ struct xe_reg {
 		u32 raw;
 	};
 };
+static_assert(sizeof(struct xe_reg) == sizeof(u32));
 
 /**
  * struct xe_reg_mcr - MCR register definition
@@ -75,6 +82,13 @@ struct xe_reg_mcr {
  */
 #define XE_REG_OPTION_MASKED .masked = 1
 
+/**
+ * XE_REG_OPTION_VF - Register is "VF" accessible.
+ *
+ * To be used with XE_REG() and XE_REG_INITIALIZER().
+ */
+#define XE_REG_OPTION_VF .vf = 1
+
 /**
  * XE_REG_INITIALIZER - Initializer for xe_reg_t.
  * @r_: Register offset
@@ -117,4 +131,9 @@ struct xe_reg_mcr {
 		.__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \
 	})
 
+static inline bool xe_reg_is_valid(struct xe_reg r)
+{
+	return r.addr;
+}
+
 #endif
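
(Aside: usage of the new option is just another initializer argument, and the bit can be queried back from any xe_reg value; hypothetical helper, real macros.)

	/* Hypothetical query; e.g. is_vf_accessible(GUC_HOST_INTERRUPT) is now true. */
	static bool is_vf_accessible(struct xe_reg reg)
	{
		return reg.vf;
	}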

@@ -57,7 +57,7 @@
 #define DG1_MSTR_IRQ REG_BIT(31)
 #define DG1_MSTR_TILE(t) REG_BIT(t)
 
-#define GFX_MSTR_IRQ XE_REG(0x190010)
+#define GFX_MSTR_IRQ XE_REG(0x190010, XE_REG_OPTION_VF)
 #define MASTER_IRQ REG_BIT(31)
 #define GU_MISC_IRQ REG_BIT(29)
 #define DISPLAY_IRQ REG_BIT(16)
 
@@ -14,4 +14,7 @@
 #define LMEM_EN REG_BIT(31)
 #define LMTT_DIR_PTR REG_GENMASK(30, 0) /* in multiples of 64KB */
 
+#define VF_CAP_REG XE_REG(0x1901f8, XE_REG_OPTION_VF)
+#define VF_CAP REG_BIT(0)
+
 #endif
 
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 
 # "live" kunit tests
-obj-$(CONFIG_DRM_XE_KUNIT_TEST) += \
+obj-$(CONFIG_DRM_XE_KUNIT_TEST) += xe_live_test.o
+xe_live_test-y = xe_live_test_mod.o \
 	xe_bo_test.o \
 	xe_dma_buf_test.o \
 	xe_migrate_test.o \
 
@@ -116,7 +116,7 @@ static void ccs_test_run_tile(struct xe_device *xe, struct xe_tile *tile,
 	int ret;
 
 	/* TODO: Sanity check */
-	unsigned int bo_flags = XE_BO_CREATE_VRAM_IF_DGFX(tile);
+	unsigned int bo_flags = XE_BO_FLAG_VRAM_IF_DGFX(tile);
 
 	if (IS_DGFX(xe))
 		kunit_info(test, "Testing vram id %u\n", tile->id);
@@ -163,7 +163,7 @@ static int ccs_test_run_device(struct xe_device *xe)
 		return 0;
 	}
 
-	xe_device_mem_access_get(xe);
+	xe_pm_runtime_get(xe);
 
 	for_each_tile(tile, xe, id) {
 		/* For igfx run only for primary tile */
@@ -172,7 +172,7 @@ static int ccs_test_run_device(struct xe_device *xe)
 		ccs_test_run_tile(xe, tile, test);
 	}
 
-	xe_device_mem_access_put(xe);
+	xe_pm_runtime_put(xe);
 
 	return 0;
 }
@@ -186,7 +186,7 @@ EXPORT_SYMBOL_IF_KUNIT(xe_ccs_migrate_kunit);
 static int evict_test_run_tile(struct xe_device *xe, struct xe_tile *tile, struct kunit *test)
 {
 	struct xe_bo *bo, *external;
-	unsigned int bo_flags = XE_BO_CREATE_VRAM_IF_DGFX(tile);
+	unsigned int bo_flags = XE_BO_FLAG_VRAM_IF_DGFX(tile);
 	struct xe_vm *vm = xe_migrate_get_vm(xe_device_get_root_tile(xe)->migrate);
 	struct xe_gt *__gt;
 	int err, i, id;
@@ -335,12 +335,12 @@ static int evict_test_run_device(struct xe_device *xe)
 		return 0;
 	}
 
-	xe_device_mem_access_get(xe);
+	xe_pm_runtime_get(xe);
 
 	for_each_tile(tile, xe, id)
 		evict_test_run_tile(xe, tile, test);
 
-	xe_device_mem_access_put(xe);
+	xe_pm_runtime_put(xe);
 
 	return 0;
 }
 
@@ -19,8 +19,3 @@ static struct kunit_suite xe_bo_test_suite = {
 };
 
 kunit_test_suite(xe_bo_test_suite);
-
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_bo kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
 
@@ -12,6 +12,7 @@
 #include "tests/xe_pci_test.h"
 
 #include "xe_pci.h"
+#include "xe_pm.h"
 
 static bool p2p_enabled(struct dma_buf_test_params *params)
 {
@@ -36,14 +37,14 @@ static void check_residency(struct kunit *test, struct xe_bo *exported,
 	xe_bo_assert_held(imported);
 
 	mem_type = XE_PL_VRAM0;
-	if (!(params->mem_mask & XE_BO_CREATE_VRAM0_BIT))
+	if (!(params->mem_mask & XE_BO_FLAG_VRAM0))
 		/* No VRAM allowed */
 		mem_type = XE_PL_TT;
 	else if (params->force_different_devices && !p2p_enabled(params))
 		/* No P2P */
 		mem_type = XE_PL_TT;
 	else if (params->force_different_devices && !is_dynamic(params) &&
-		 (params->mem_mask & XE_BO_CREATE_SYSTEM_BIT))
+		 (params->mem_mask & XE_BO_FLAG_SYSTEM))
 		/* Pin migrated to TT */
 		mem_type = XE_PL_TT;
 
@@ -93,7 +94,7 @@ static void check_residency(struct kunit *test, struct xe_bo *exported,
 	 * possible, saving a migration step as the transfer is just
 	 * likely as fast from system memory.
	 */
-	if (params->mem_mask & XE_BO_CREATE_SYSTEM_BIT)
+	if (params->mem_mask & XE_BO_FLAG_SYSTEM)
 		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, XE_PL_TT));
 	else
 		KUNIT_EXPECT_TRUE(test, xe_bo_is_mem_type(exported, mem_type));
@@ -115,17 +116,17 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
 
 	/* No VRAM on this device? */
 	if (!ttm_manager_type(&xe->ttm, XE_PL_VRAM0) &&
-	    (params->mem_mask & XE_BO_CREATE_VRAM0_BIT))
+	    (params->mem_mask & XE_BO_FLAG_VRAM0))
 		return;
 
 	size = PAGE_SIZE;
-	if ((params->mem_mask & XE_BO_CREATE_VRAM0_BIT) &&
+	if ((params->mem_mask & XE_BO_FLAG_VRAM0) &&
 	    xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K)
 		size = SZ_64K;
 
 	kunit_info(test, "running %s\n", __func__);
 	bo = xe_bo_create_user(xe, NULL, NULL, size, DRM_XE_GEM_CPU_CACHING_WC,
-			       ttm_bo_type_device, XE_BO_CREATE_USER_BIT | params->mem_mask);
+			       ttm_bo_type_device, params->mem_mask);
 	if (IS_ERR(bo)) {
 		KUNIT_FAIL(test, "xe_bo_create() failed with err=%ld\n",
 			   PTR_ERR(bo));
@@ -148,7 +149,7 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
	 */
 	if (params->force_different_devices &&
 	    !p2p_enabled(params) &&
-	    !(params->mem_mask & XE_BO_CREATE_SYSTEM_BIT)) {
+	    !(params->mem_mask & XE_BO_FLAG_SYSTEM)) {
 		KUNIT_FAIL(test,
 			   "xe_gem_prime_import() succeeded when it shouldn't have\n");
 	} else {
@@ -161,7 +162,7 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
 		/* Pinning in VRAM is not allowed. */
 		if (!is_dynamic(params) &&
 		    params->force_different_devices &&
-		    !(params->mem_mask & XE_BO_CREATE_SYSTEM_BIT))
+		    !(params->mem_mask & XE_BO_FLAG_SYSTEM))
 			KUNIT_EXPECT_EQ(test, err, -EINVAL);
 		/* Otherwise only expect interrupts or success. */
 		else if (err && err != -EINTR && err != -ERESTARTSYS)
@@ -180,7 +181,7 @@ static void xe_test_dmabuf_import_same_driver(struct xe_device *xe)
 			   PTR_ERR(import));
 	} else if (!params->force_different_devices ||
 		   p2p_enabled(params) ||
-		   (params->mem_mask & XE_BO_CREATE_SYSTEM_BIT)) {
+		   (params->mem_mask & XE_BO_FLAG_SYSTEM)) {
 		/* Shouldn't fail if we can reuse same bo, use p2p or use system */
 		KUNIT_FAIL(test, "dynamic p2p attachment failed with err=%ld\n",
 			   PTR_ERR(import));
@@ -203,52 +204,52 @@ static const struct dma_buf_attach_ops nop2p_attach_ops = {
  * gem object.
  */
 static const struct dma_buf_test_params test_params[] = {
-	{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_VRAM0,
 	 .attach_ops = &xe_dma_buf_attach_ops},
-	{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_VRAM0,
 	 .attach_ops = &xe_dma_buf_attach_ops,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_VRAM0,
 	 .attach_ops = &nop2p_attach_ops},
-	{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_VRAM0,
 	 .attach_ops = &nop2p_attach_ops,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_VRAM0_BIT},
-	{.mem_mask = XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_VRAM0},
+	{.mem_mask = XE_BO_FLAG_VRAM0,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM,
 	 .attach_ops = &xe_dma_buf_attach_ops},
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM,
 	 .attach_ops = &xe_dma_buf_attach_ops,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM,
 	 .attach_ops = &nop2p_attach_ops},
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM,
 	 .attach_ops = &nop2p_attach_ops,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT},
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM},
+	{.mem_mask = XE_BO_FLAG_SYSTEM,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM | XE_BO_FLAG_VRAM0,
 	 .attach_ops = &xe_dma_buf_attach_ops},
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM | XE_BO_FLAG_VRAM0,
 	 .attach_ops = &xe_dma_buf_attach_ops,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM | XE_BO_FLAG_VRAM0,
 	 .attach_ops = &nop2p_attach_ops},
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM | XE_BO_FLAG_VRAM0,
 	 .attach_ops = &nop2p_attach_ops,
 	 .force_different_devices = true},
 
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT},
-	{.mem_mask = XE_BO_CREATE_SYSTEM_BIT | XE_BO_CREATE_VRAM0_BIT,
+	{.mem_mask = XE_BO_FLAG_SYSTEM | XE_BO_FLAG_VRAM0},
+	{.mem_mask = XE_BO_FLAG_SYSTEM | XE_BO_FLAG_VRAM0,
 	 .force_different_devices = true},
 
 	{}
@@ -259,6 +260,7 @@ static int dma_buf_run_device(struct xe_device *xe)
 	const struct dma_buf_test_params *params;
 	struct kunit *test = xe_cur_kunit();
 
+	xe_pm_runtime_get(xe);
 	for (params = test_params; params->mem_mask; ++params) {
 		struct dma_buf_test_params p = *params;
 
@@ -266,6 +268,7 @@ static int dma_buf_run_device(struct xe_device *xe)
 		test->priv = &p;
 		xe_test_dmabuf_import_same_driver(xe);
 	}
+	xe_pm_runtime_put(xe);
 
 	/* A non-zero return would halt iteration over driver devices */
 	return 0;
 
@@ -18,8 +18,3 @@ static struct kunit_suite xe_dma_buf_test_suite = {
 };
 
 kunit_test_suite(xe_dma_buf_test_suite);
-
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_dma_buf kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
 
drivers/gpu/drm/xe/tests/xe_guc_id_mgr_test.c (new file, 136 lines)
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0 AND MIT
+/*
+ * Copyright © 2024 Intel Corporation
+ */
+
+#include <kunit/test.h>
+
+#include "xe_device.h"
+#include "xe_kunit_helpers.h"
+
+static int guc_id_mgr_test_init(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm;
+
+	xe_kunit_helper_xe_device_test_init(test);
+	idm = &xe_device_get_gt(test->priv, 0)->uc.guc.submission_state.idm;
+
+	mutex_init(idm_mutex(idm));
+	test->priv = idm;
+	return 0;
+}
+
+static void bad_init(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm = test->priv;
+
+	KUNIT_EXPECT_EQ(test, -EINVAL, xe_guc_id_mgr_init(idm, 0));
+	KUNIT_EXPECT_EQ(test, -ERANGE, xe_guc_id_mgr_init(idm, GUC_ID_MAX + 1));
+}
+
+static void no_init(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm = test->priv;
+
+	mutex_lock(idm_mutex(idm));
+	KUNIT_EXPECT_EQ(test, -ENODATA, xe_guc_id_mgr_reserve_locked(idm, 0));
+	mutex_unlock(idm_mutex(idm));
+
+	KUNIT_EXPECT_EQ(test, -ENODATA, xe_guc_id_mgr_reserve(idm, 1, 1));
+}
+
+static void init_fini(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm = test->priv;
+
+	KUNIT_ASSERT_EQ(test, 0, xe_guc_id_mgr_init(idm, -1));
+	KUNIT_EXPECT_NOT_NULL(test, idm->bitmap);
+	KUNIT_EXPECT_EQ(test, idm->total, GUC_ID_MAX);
+	__fini_idm(NULL, idm);
+	KUNIT_EXPECT_NULL(test, idm->bitmap);
+	KUNIT_EXPECT_EQ(test, idm->total, 0);
+}
+
+static void check_used(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm = test->priv;
+	unsigned int n;
+
+	KUNIT_ASSERT_EQ(test, 0, xe_guc_id_mgr_init(idm, 2));
+
+	mutex_lock(idm_mutex(idm));
+
+	for (n = 0; n < idm->total; n++) {
+		kunit_info(test, "n=%u", n);
+		KUNIT_EXPECT_EQ(test, idm->used, n);
+		KUNIT_EXPECT_GE(test, idm_reserve_chunk_locked(idm, 1, 0), 0);
+		KUNIT_EXPECT_EQ(test, idm->used, n + 1);
+	}
+	KUNIT_EXPECT_EQ(test, idm->used, idm->total);
+	idm_release_chunk_locked(idm, 0, idm->used);
+	KUNIT_EXPECT_EQ(test, idm->used, 0);
+
+	mutex_unlock(idm_mutex(idm));
+}
+
+static void check_quota(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm = test->priv;
+	unsigned int n;
+
+	KUNIT_ASSERT_EQ(test, 0, xe_guc_id_mgr_init(idm, 2));
+
+	mutex_lock(idm_mutex(idm));
+
+	for (n = 0; n < idm->total - 1; n++) {
+		kunit_info(test, "n=%u", n);
+		KUNIT_EXPECT_EQ(test, idm_reserve_chunk_locked(idm, 1, idm->total), -EDQUOT);
+		KUNIT_EXPECT_EQ(test, idm_reserve_chunk_locked(idm, 1, idm->total - n), -EDQUOT);
+		KUNIT_EXPECT_EQ(test, idm_reserve_chunk_locked(idm, idm->total - n, 1), -EDQUOT);
+		KUNIT_EXPECT_GE(test, idm_reserve_chunk_locked(idm, 1, 1), 0);
+	}
+	KUNIT_EXPECT_LE(test, 0, idm_reserve_chunk_locked(idm, 1, 0));
+	KUNIT_EXPECT_EQ(test, idm->used, idm->total);
+	idm_release_chunk_locked(idm, 0, idm->total);
+	KUNIT_EXPECT_EQ(test, idm->used, 0);
+
+	mutex_unlock(idm_mutex(idm));
+}
+
+static void check_all(struct kunit *test)
+{
+	struct xe_guc_id_mgr *idm = test->priv;
+	unsigned int n;
+
+	KUNIT_ASSERT_EQ(test, 0, xe_guc_id_mgr_init(idm, -1));
+
+	mutex_lock(idm_mutex(idm));
+
+	for (n = 0; n < idm->total; n++)
+		KUNIT_EXPECT_LE(test, 0, idm_reserve_chunk_locked(idm, 1, 0));
+	KUNIT_EXPECT_EQ(test, idm->used, idm->total);
+	for (n = 0; n < idm->total; n++)
+		idm_release_chunk_locked(idm, n, 1);
+
+	mutex_unlock(idm_mutex(idm));
+}
+
+static struct kunit_case guc_id_mgr_test_cases[] = {
+	KUNIT_CASE(bad_init),
+	KUNIT_CASE(no_init),
+	KUNIT_CASE(init_fini),
+	KUNIT_CASE(check_used),
+	KUNIT_CASE(check_quota),
+	KUNIT_CASE_SLOW(check_all),
+	{}
+};
+
+static struct kunit_suite guc_id_mgr_suite = {
+	.name = "guc_idm",
+	.test_cases = guc_id_mgr_test_cases,
+
+	.init = guc_id_mgr_test_init,
+	.exit = NULL,
+};
+
+kunit_test_suites(&guc_id_mgr_suite);
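
(Aside: a sketch of the locked reserve/release pattern these cases exercise, using only the helpers visible in this test file; treat the exact signatures as belonging to the test, not to a public API.)

	/* Hypothetical smoke test of the ID manager's locked primitives. */
	static void idm_smoke(struct xe_guc_id_mgr *idm)
	{
		int id;

		mutex_lock(idm_mutex(idm));
		id = idm_reserve_chunk_locked(idm, 1, 0);
		if (id >= 0)
			idm_release_chunk_locked(idm, id, 1);
		mutex_unlock(idm_mutex(idm));
	}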

drivers/gpu/drm/xe/tests/xe_live_test_mod.c (new file, 10 lines)
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright © 2023 Intel Corporation
+ */
+#include <linux/module.h>
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("xe live kunit tests");
+MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);

@@ -10,6 +10,7 @@
 #include "tests/xe_pci_test.h"
 
 #include "xe_pci.h"
+#include "xe_pm.h"
 
 static bool sanity_fence_failed(struct xe_device *xe, struct dma_fence *fence,
 				const char *str, struct kunit *test)
@@ -112,7 +113,7 @@ static void test_copy(struct xe_migrate *m, struct xe_bo *bo,
 					   bo->size,
 					   ttm_bo_type_kernel,
 					   region |
-					   XE_BO_NEEDS_CPU_ACCESS);
+					   XE_BO_FLAG_NEEDS_CPU_ACCESS);
 	if (IS_ERR(remote)) {
 		KUNIT_FAIL(test, "Failed to allocate remote bo for %s: %pe\n",
 			   str, remote);
@@ -190,7 +191,7 @@ static void test_copy(struct xe_migrate *m, struct xe_bo *bo,
 static void test_copy_sysmem(struct xe_migrate *m, struct xe_bo *bo,
 			     struct kunit *test)
 {
-	test_copy(m, bo, test, XE_BO_CREATE_SYSTEM_BIT);
+	test_copy(m, bo, test, XE_BO_FLAG_SYSTEM);
 }
 
 static void test_copy_vram(struct xe_migrate *m, struct xe_bo *bo,
@@ -202,9 +203,9 @@ static void test_copy_vram(struct xe_migrate *m, struct xe_bo *bo,
 		return;
 
 	if (bo->ttm.resource->mem_type == XE_PL_VRAM0)
-		region = XE_BO_CREATE_VRAM1_BIT;
+		region = XE_BO_FLAG_VRAM1;
 	else
-		region = XE_BO_CREATE_VRAM0_BIT;
+		region = XE_BO_FLAG_VRAM0;
 	test_copy(m, bo, test, region);
 }
 
@@ -280,8 +281,8 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
 
 	big = xe_bo_create_pin_map(xe, tile, m->q->vm, SZ_4M,
 				   ttm_bo_type_kernel,
-				   XE_BO_CREATE_VRAM_IF_DGFX(tile) |
-				   XE_BO_CREATE_PINNED_BIT);
+				   XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+				   XE_BO_FLAG_PINNED);
 	if (IS_ERR(big)) {
 		KUNIT_FAIL(test, "Failed to allocate bo: %li\n", PTR_ERR(big));
 		goto vunmap;
@@ -289,8 +290,8 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
 
 	pt = xe_bo_create_pin_map(xe, tile, m->q->vm, XE_PAGE_SIZE,
 				  ttm_bo_type_kernel,
-				  XE_BO_CREATE_VRAM_IF_DGFX(tile) |
-				  XE_BO_CREATE_PINNED_BIT);
+				  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+				  XE_BO_FLAG_PINNED);
 	if (IS_ERR(pt)) {
 		KUNIT_FAIL(test, "Failed to allocate fake pt: %li\n",
 			   PTR_ERR(pt));
@@ -300,8 +301,8 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
 	tiny = xe_bo_create_pin_map(xe, tile, m->q->vm,
 				    2 * SZ_4K,
 				    ttm_bo_type_kernel,
-				    XE_BO_CREATE_VRAM_IF_DGFX(tile) |
-				    XE_BO_CREATE_PINNED_BIT);
+				    XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+				    XE_BO_FLAG_PINNED);
 	if (IS_ERR(tiny)) {
 		KUNIT_FAIL(test, "Failed to allocate fake pt: %li\n",
 			   PTR_ERR(pt));
@@ -423,17 +424,19 @@ static int migrate_test_run_device(struct xe_device *xe)
 	struct xe_tile *tile;
 	int id;
 
+	xe_pm_runtime_get(xe);
+
 	for_each_tile(tile, xe, id) {
 		struct xe_migrate *m = tile->migrate;
 
 		kunit_info(test, "Testing tile id %d.\n", id);
 		xe_vm_lock(m->q->vm, true);
-		xe_device_mem_access_get(xe);
 		xe_migrate_sanity_test(m, test);
-		xe_device_mem_access_put(xe);
 		xe_vm_unlock(m->q->vm);
 	}
 
+	xe_pm_runtime_put(xe);
+
 	return 0;
 }
 
@@ -18,8 +18,3 @@ static struct kunit_suite xe_migrate_test_suite = {
 };
 
 kunit_test_suite(xe_migrate_test_suite);
-
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_migrate kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
 
@@ -10,10 +10,11 @@
 #include "tests/xe_pci_test.h"
 #include "tests/xe_test.h"
 
-#include "xe_pci.h"
+#include "xe_device.h"
 #include "xe_gt.h"
 #include "xe_mocs.h"
-#include "xe_device.h"
+#include "xe_pci.h"
+#include "xe_pm.h"
 
 struct live_mocs {
 	struct xe_mocs_info table;
@@ -28,6 +29,8 @@ static int live_mocs_init(struct live_mocs *arg, struct xe_gt *gt)
 
 	flags = get_mocs_settings(gt_to_xe(gt), &arg->table);
 
+	kunit_info(test, "gt %d", gt->info.id);
+	kunit_info(test, "gt type %d", gt->info.type);
 	kunit_info(test, "table size %d", arg->table.size);
 	kunit_info(test, "table uc_index %d", arg->table.uc_index);
 	kunit_info(test, "table n_entries %d", arg->table.n_entries);
@@ -38,69 +41,72 @@
 static void read_l3cc_table(struct xe_gt *gt,
 			    const struct xe_mocs_info *info)
 {
+	struct kunit *test = xe_cur_kunit();
+	u32 l3cc, l3cc_expected;
 	unsigned int i;
-	u32 l3cc;
 	u32 reg_val;
 	u32 ret;
 
-	struct kunit *test = xe_cur_kunit();
-
-	xe_device_mem_access_get(gt_to_xe(gt));
 	ret = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
 	KUNIT_ASSERT_EQ_MSG(test, ret, 0, "Forcewake Failed.\n");
-	mocs_dbg(&gt_to_xe(gt)->drm, "L3CC entries:%d\n", info->n_entries);
-	for (i = 0;
-	     i < (info->n_entries + 1) / 2 ?
-	     (l3cc = l3cc_combine(get_entry_l3cc(info, 2 * i),
-				  get_entry_l3cc(info, 2 * i + 1))), 1 : 0;
-	     i++) {
-		if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1250)
-			reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_LNCFCMOCS(i));
-		else
-			reg_val = xe_mmio_read32(gt, XELP_LNCFCMOCS(i));
-		mocs_dbg(&gt_to_xe(gt)->drm, "%d 0x%x 0x%x 0x%x\n", i,
-			 XELP_LNCFCMOCS(i).addr, reg_val, l3cc);
-		if (reg_val != l3cc)
-			KUNIT_FAIL(test, "l3cc reg 0x%x has incorrect val.\n",
-				   XELP_LNCFCMOCS(i).addr);
+
+	for (i = 0; i < info->n_entries; i++) {
+		if (!(i & 1)) {
+			if (regs_are_mcr(gt))
+				reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_LNCFCMOCS(i >> 1));
+			else
+				reg_val = xe_mmio_read32(gt, XELP_LNCFCMOCS(i >> 1));
+
+			mocs_dbg(gt, "reg_val=0x%x\n", reg_val);
+		} else {
+			/* Just re-use value read on previous iteration */
+			reg_val >>= 16;
+		}
+
+		l3cc_expected = get_entry_l3cc(info, i);
+		l3cc = reg_val & 0xffff;
+
+		mocs_dbg(gt, "[%u] expected=0x%x actual=0x%x\n",
+			 i, l3cc_expected, l3cc);
+
+		KUNIT_EXPECT_EQ_MSG(test, l3cc_expected, l3cc,
+				    "l3cc idx=%u has incorrect val.\n", i);
 	}
 	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
-	xe_device_mem_access_put(gt_to_xe(gt));
 }
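
(Aside: the packing checked above is two 16-bit L3CC entries per 32-bit LNCFCMOCS register, so entry i lives in register i >> 1, low half for even i and high half for odd i. A standalone restatement:)

	/* Illustration: pick table entry i out of the register pair value. */
	static u32 l3cc_entry_of(u32 reg_val, unsigned int i)
	{
		return (i & 1) ? (reg_val >> 16) : (reg_val & 0xffff);
	}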

 static void read_mocs_table(struct xe_gt *gt,
 			    const struct xe_mocs_info *info)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 
+	struct kunit *test = xe_cur_kunit();
+	u32 mocs, mocs_expected;
 	unsigned int i;
-	u32 mocs;
 	u32 reg_val;
 	u32 ret;
 
-	struct kunit *test = xe_cur_kunit();
+	KUNIT_EXPECT_TRUE_MSG(test, info->unused_entries_index,
+			      "Unused entries index should have been defined\n");
 
-	xe_device_mem_access_get(gt_to_xe(gt));
 	ret = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
 	KUNIT_ASSERT_EQ_MSG(test, ret, 0, "Forcewake Failed.\n");
-	mocs_dbg(&gt_to_xe(gt)->drm, "Global MOCS entries:%d\n", info->n_entries);
-	drm_WARN_ONCE(&xe->drm, !info->unused_entries_index,
-		      "Unused entries index should have been defined\n");
-	for (i = 0;
-	     i < info->n_entries ? (mocs = get_entry_control(info, i)), 1 : 0;
-	     i++) {
-		if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1250)
+
+	for (i = 0; i < info->n_entries; i++) {
+		if (regs_are_mcr(gt))
 			reg_val = xe_gt_mcr_unicast_read_any(gt, XEHP_GLOBAL_MOCS(i));
 		else
 			reg_val = xe_mmio_read32(gt, XELP_GLOBAL_MOCS(i));
-		mocs_dbg(&gt_to_xe(gt)->drm, "%d 0x%x 0x%x 0x%x\n", i,
-			 XELP_GLOBAL_MOCS(i).addr, reg_val, mocs);
-		if (reg_val != mocs)
-			KUNIT_FAIL(test, "mocs reg 0x%x has incorrect val.\n",
-				   XELP_GLOBAL_MOCS(i).addr);
+
+		mocs_expected = get_entry_control(info, i);
+		mocs = reg_val;
+
+		mocs_dbg(gt, "[%u] expected=0x%x actual=0x%x\n",
+			 i, mocs_expected, mocs);
+
+		KUNIT_EXPECT_EQ_MSG(test, mocs_expected, mocs,
+				    "mocs reg 0x%x has incorrect val.\n", i);
 	}
+
 	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
-	xe_device_mem_access_put(gt_to_xe(gt));
 }
 
 static int mocs_kernel_test_run_device(struct xe_device *xe)
@@ -113,6 +119,8 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
 	unsigned int flags;
 	int id;
 
+	xe_pm_runtime_get(xe);
+
 	for_each_gt(gt, xe, id) {
 		flags = live_mocs_init(&mocs, gt);
 		if (flags & HAS_GLOBAL_MOCS)
@@ -120,6 +128,9 @@ static int mocs_kernel_test_run_device(struct xe_device *xe)
 		if (flags & HAS_LNCF_MOCS)
 			read_l3cc_table(gt, &mocs.table);
 	}
+
+	xe_pm_runtime_put(xe);
+
 	return 0;
 }
 
@@ -139,6 +150,8 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
 	int id;
 	struct kunit *test = xe_cur_kunit();
 
+	xe_pm_runtime_get(xe);
+
 	for_each_gt(gt, xe, id) {
 		flags = live_mocs_init(&mocs, gt);
 		kunit_info(test, "mocs_reset_test before reset\n");
@@ -156,6 +169,9 @@ static int mocs_reset_test_run_device(struct xe_device *xe)
 		if (flags & HAS_LNCF_MOCS)
 			read_l3cc_table(gt, &mocs.table);
 	}
+
+	xe_pm_runtime_put(xe);
+
 	return 0;
 }

@@ -19,8 +19,3 @@ static struct kunit_suite xe_mocs_test_suite = {
 };
 
 kunit_test_suite(xe_mocs_test_suite);
-
-MODULE_AUTHOR("Intel Corporation");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("xe_mocs kunit test");
-MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
 
@@ -71,6 +71,7 @@ static const struct platform_test_case cases[] = {
 	SUBPLATFORM_CASE(DG2, G12, A1),
 	GMDID_CASE(METEORLAKE, 1270, A0, 1300, A0),
 	GMDID_CASE(METEORLAKE, 1271, A0, 1300, A0),
+	GMDID_CASE(METEORLAKE, 1274, A0, 1300, A0),
 	GMDID_CASE(LUNARLAKE, 2004, A0, 2000, A0),
 	GMDID_CASE(LUNARLAKE, 2004, B0, 2000, A0),
 };
 
@@ -86,7 +86,8 @@ struct xe_sched_job *xe_bb_create_migration_job(struct xe_exec_queue *q,
 	};
 
 	xe_gt_assert(q->gt, second_idx <= bb->len);
-	xe_gt_assert(q->gt, q->vm->flags & XE_VM_FLAG_MIGRATION);
+	xe_gt_assert(q->gt, xe_sched_job_is_migration(q));
+	xe_gt_assert(q->gt, q->width == 1);
 
 	return __xe_bb_create_job(q, bb, addr);
 }
@@ -96,7 +97,8 @@ struct xe_sched_job *xe_bb_create_job(struct xe_exec_queue *q,
 {
 	u64 addr = xe_sa_bo_gpu_addr(bb->bo);
 
-	xe_gt_assert(q->gt, !(q->vm && q->vm->flags & XE_VM_FLAG_MIGRATION));
+	xe_gt_assert(q->gt, !xe_sched_job_is_migration(q));
+	xe_gt_assert(q->gt, q->width == 1);
 	return __xe_bb_create_job(q, bb, &addr);
 }
 
@@ -22,6 +22,7 @@
 #include "xe_gt.h"
 #include "xe_map.h"
 #include "xe_migrate.h"
+#include "xe_pm.h"
 #include "xe_preempt_fence.h"
 #include "xe_res_cursor.h"
 #include "xe_trace.h"
@@ -111,7 +112,7 @@ bool xe_bo_is_stolen_devmem(struct xe_bo *bo)
 
 static bool xe_bo_is_user(struct xe_bo *bo)
 {
-	return bo->flags & XE_BO_CREATE_USER_BIT;
+	return bo->flags & XE_BO_FLAG_USER;
 }
 
 static struct xe_migrate *
@@ -137,7 +138,7 @@ static struct xe_mem_region *res_to_mem_region(struct ttm_resource *res)
 static void try_add_system(struct xe_device *xe, struct xe_bo *bo,
 			   u32 bo_flags, u32 *c)
 {
-	if (bo_flags & XE_BO_CREATE_SYSTEM_BIT) {
+	if (bo_flags & XE_BO_FLAG_SYSTEM) {
 		xe_assert(xe, *c < ARRAY_SIZE(bo->placements));
 
 		bo->placements[*c] = (struct ttm_place) {
@@ -164,12 +165,12 @@ static void add_vram(struct xe_device *xe, struct xe_bo *bo,
 	 * For eviction / restore on suspend / resume objects
 	 * pinned in VRAM must be contiguous
	 */
-	if (bo_flags & (XE_BO_CREATE_PINNED_BIT |
-			XE_BO_CREATE_GGTT_BIT))
+	if (bo_flags & (XE_BO_FLAG_PINNED |
+			XE_BO_FLAG_GGTT))
 		place.flags |= TTM_PL_FLAG_CONTIGUOUS;
 
 	if (io_size < vram->usable_size) {
-		if (bo_flags & XE_BO_NEEDS_CPU_ACCESS) {
+		if (bo_flags & XE_BO_FLAG_NEEDS_CPU_ACCESS) {
 			place.fpfn = 0;
 			place.lpfn = io_size >> PAGE_SHIFT;
 		} else {
@@ -183,22 +184,22 @@ static void add_vram(struct xe_device *xe, struct xe_bo *bo,
 static void try_add_vram(struct xe_device *xe, struct xe_bo *bo,
 			 u32 bo_flags, u32 *c)
 {
-	if (bo_flags & XE_BO_CREATE_VRAM0_BIT)
+	if (bo_flags & XE_BO_FLAG_VRAM0)
 		add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM0, c);
-	if (bo_flags & XE_BO_CREATE_VRAM1_BIT)
+	if (bo_flags & XE_BO_FLAG_VRAM1)
 		add_vram(xe, bo, bo->placements, bo_flags, XE_PL_VRAM1, c);
 }
 
 static void try_add_stolen(struct xe_device *xe, struct xe_bo *bo,
 			   u32 bo_flags, u32 *c)
 {
-	if (bo_flags & XE_BO_CREATE_STOLEN_BIT) {
+	if (bo_flags & XE_BO_FLAG_STOLEN) {
 		xe_assert(xe, *c < ARRAY_SIZE(bo->placements));
 
 		bo->placements[*c] = (struct ttm_place) {
 			.mem_type = XE_PL_STOLEN,
-			.flags = bo_flags & (XE_BO_CREATE_PINNED_BIT |
-					     XE_BO_CREATE_GGTT_BIT) ?
+			.flags = bo_flags & (XE_BO_FLAG_PINNED |
+					     XE_BO_FLAG_GGTT) ?
 				TTM_PL_FLAG_CONTIGUOUS : 0,
 		};
 		*c += 1;
@@ -339,7 +340,7 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
 		break;
 	}
 
-	WARN_ON((bo->flags & XE_BO_CREATE_USER_BIT) && !bo->cpu_caching);
+	WARN_ON((bo->flags & XE_BO_FLAG_USER) && !bo->cpu_caching);
 
 	/*
 	 * Display scanout is always non-coherent with the CPU cache.
@@ -347,8 +348,8 @@ static struct ttm_tt *xe_ttm_tt_create(struct ttm_buffer_object *ttm_bo,
 	 * For Xe_LPG and beyond, PPGTT PTE lookups are also non-coherent and
 	 * require a CPU:WC mapping.
	 */
-	if ((!bo->cpu_caching && bo->flags & XE_BO_SCANOUT_BIT) ||
-	    (xe->info.graphics_verx100 >= 1270 && bo->flags & XE_BO_PAGETABLE))
+	if ((!bo->cpu_caching && bo->flags & XE_BO_FLAG_SCANOUT) ||
+	    (xe->info.graphics_verx100 >= 1270 && bo->flags & XE_BO_FLAG_PAGETABLE))
 		caching = ttm_write_combined;
 
 	err = ttm_tt_init(&tt->ttm, &bo->ttm, page_flags, caching, extra_pages);
@@ -715,7 +716,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 
 	xe_assert(xe, migrate);
 	trace_xe_bo_move(bo, new_mem->mem_type, old_mem_type, move_lacks_source);
-	xe_device_mem_access_get(xe);
+	xe_pm_runtime_get_noresume(xe);
 
 	if (xe_bo_is_pinned(bo) && !xe_bo_is_user(bo)) {
 		/*
@@ -739,7 +740,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 
 		if (XE_WARN_ON(new_mem->start == XE_BO_INVALID_OFFSET)) {
 			ret = -EINVAL;
-			xe_device_mem_access_put(xe);
+			xe_pm_runtime_put(xe);
 			goto out;
 		}
 
@@ -757,7 +758,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 					new_mem, handle_system_ccs);
 		if (IS_ERR(fence)) {
 			ret = PTR_ERR(fence);
-			xe_device_mem_access_put(xe);
+			xe_pm_runtime_put(xe);
 			goto out;
 		}
 		if (!move_lacks_source) {
@@ -782,7 +783,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
 		dma_fence_put(fence);
 	}
 
-	xe_device_mem_access_put(xe);
+	xe_pm_runtime_put(xe);
 
 out:
 	return ret;
@@ -794,7 +795,6 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
  * @bo: The buffer object to move.
  *
  * On successful completion, the object memory will be moved to sytem memory.
- * This function blocks until the object has been fully moved.
 *
 * This is needed to for special handling of pinned VRAM object during
 * suspend-resume.
@@ -851,9 +851,6 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
 	if (ret)
 		goto err_res_free;
 
-	dma_resv_wait_timeout(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL,
-			      false, MAX_SCHEDULE_TIMEOUT);
-
 	return 0;
 
 err_res_free:
@@ -866,7 +863,6 @@ int xe_bo_evict_pinned(struct xe_bo *bo)
 * @bo: The buffer object to move.
 *
 * On successful completion, the object memory will be moved back to VRAM.
- * This function blocks until the object has been fully moved.
 *
 * This is needed to for special handling of pinned VRAM object during
 * suspend-resume.
@@ -908,9 +904,6 @@ int xe_bo_restore_pinned(struct xe_bo *bo)
 	if (ret)
 		goto err_res_free;
 
-	dma_resv_wait_timeout(bo->ttm.base.resv, DMA_RESV_USAGE_KERNEL,
-			      false, MAX_SCHEDULE_TIMEOUT);
-
 	return 0;
 
 err_res_free:
@@ -1110,12 +1103,12 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 	struct drm_device *ddev = tbo->base.dev;
 	struct xe_device *xe = to_xe_device(ddev);
 	struct xe_bo *bo = ttm_to_xe_bo(tbo);
-	bool needs_rpm = bo->flags & XE_BO_CREATE_VRAM_MASK;
+	bool needs_rpm = bo->flags & XE_BO_FLAG_VRAM_MASK;
 	vm_fault_t ret;
 	int idx;
 
 	if (needs_rpm)
-		xe_device_mem_access_get(xe);
+		xe_pm_runtime_get(xe);
 
 	ret = ttm_bo_vm_reserve(tbo, vmf);
 	if (ret)
@@ -1146,7 +1139,7 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
 	dma_resv_unlock(tbo->base.resv);
 out:
 	if (needs_rpm)
-		xe_device_mem_access_put(xe);
+		xe_pm_runtime_put(xe);
 
 	return ret;
 }
@@ -1223,18 +1216,19 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
 		return ERR_PTR(-EINVAL);
 	}
 
-	if (flags & (XE_BO_CREATE_VRAM_MASK | XE_BO_CREATE_STOLEN_BIT) &&
-	    !(flags & XE_BO_CREATE_IGNORE_MIN_PAGE_SIZE_BIT) &&
-	    xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K) {
+	if (flags & (XE_BO_FLAG_VRAM_MASK | XE_BO_FLAG_STOLEN) &&
+	    !(flags & XE_BO_FLAG_IGNORE_MIN_PAGE_SIZE) &&
+	    ((xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K) ||
+	     (flags & XE_BO_NEEDS_64K))) {
 		aligned_size = ALIGN(size, SZ_64K);
 		if (type != ttm_bo_type_device)
 			size = ALIGN(size, SZ_64K);
-		flags |= XE_BO_INTERNAL_64K;
+		flags |= XE_BO_FLAG_INTERNAL_64K;
 		alignment = SZ_64K >> PAGE_SHIFT;
 
 	} else {
 		aligned_size = ALIGN(size, SZ_4K);
-		flags &= ~XE_BO_INTERNAL_64K;
+		flags &= ~XE_BO_FLAG_INTERNAL_64K;
 		alignment = SZ_4K >> PAGE_SHIFT;
 	}
 
@@ -1263,11 +1257,11 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
 	drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
 
 	if (resv) {
-		ctx.allow_res_evict = !(flags & XE_BO_CREATE_NO_RESV_EVICT);
+		ctx.allow_res_evict = !(flags & XE_BO_FLAG_NO_RESV_EVICT);
 		ctx.resv = resv;
 	}
 
-	if (!(flags & XE_BO_FIXED_PLACEMENT_BIT)) {
+	if (!(flags & XE_BO_FLAG_FIXED_PLACEMENT)) {
 		err = __xe_bo_placement_for_flags(xe, bo, bo->flags);
 		if (WARN_ON(err)) {
 			xe_ttm_bo_destroy(&bo->ttm);
@@ -1277,7 +1271,7 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
 
 	/* Defer populating type_sg bos */
 	placement = (type == ttm_bo_type_sg ||
-		     bo->flags & XE_BO_DEFER_BACKING) ? &sys_placement :
+		     bo->flags & XE_BO_FLAG_DEFER_BACKING) ? &sys_placement :
 		&bo->placement;
 	err = ttm_bo_init_reserved(&xe->ttm, &bo->ttm, type,
 				   placement, alignment,
@@ -1332,21 +1326,21 @@ static int __xe_bo_fixed_placement(struct xe_device *xe,
 {
 	struct ttm_place *place = bo->placements;
 
-	if (flags & (XE_BO_CREATE_USER_BIT|XE_BO_CREATE_SYSTEM_BIT))
+	if (flags & (XE_BO_FLAG_USER | XE_BO_FLAG_SYSTEM))
 		return -EINVAL;
 
 	place->flags = TTM_PL_FLAG_CONTIGUOUS;
 	place->fpfn = start >> PAGE_SHIFT;
 	place->lpfn = end >> PAGE_SHIFT;
 
-	switch (flags & (XE_BO_CREATE_STOLEN_BIT | XE_BO_CREATE_VRAM_MASK)) {
-	case XE_BO_CREATE_VRAM0_BIT:
+	switch (flags & (XE_BO_FLAG_STOLEN | XE_BO_FLAG_VRAM_MASK)) {
+	case XE_BO_FLAG_VRAM0:
 		place->mem_type = XE_PL_VRAM0;
 		break;
-	case XE_BO_CREATE_VRAM1_BIT:
+	case XE_BO_FLAG_VRAM1:
 		place->mem_type = XE_PL_VRAM1;
 		break;
-	case XE_BO_CREATE_STOLEN_BIT:
+	case XE_BO_FLAG_STOLEN:
 		place->mem_type = XE_PL_STOLEN;
 		break;
 
@@ -1380,7 +1374,7 @@ __xe_bo_create_locked(struct xe_device *xe,
 	if (IS_ERR(bo))
 		return bo;
 
-	flags |= XE_BO_FIXED_PLACEMENT_BIT;
+	flags |= XE_BO_FLAG_FIXED_PLACEMENT;
 	err = __xe_bo_fixed_placement(xe, bo, flags, start, end, size);
 	if (err) {
 		xe_bo_free(bo);
@@ -1390,7 +1384,7 @@ __xe_bo_create_locked(struct xe_device *xe,
 
 	bo = ___xe_bo_create_locked(xe, bo, tile, vm ? xe_vm_resv(vm) : NULL,
 				    vm && !xe_vm_in_fault_mode(vm) &&
-				    flags & XE_BO_CREATE_USER_BIT ?
+				    flags & XE_BO_FLAG_USER ?
 				    &vm->lru_bulk_move : NULL, size,
 				    cpu_caching, type, flags);
 	if (IS_ERR(bo))
@@ -1407,13 +1401,13 @@ __xe_bo_create_locked(struct xe_device *xe,
 		xe_vm_get(vm);
 	bo->vm = vm;
 
-	if (bo->flags & XE_BO_CREATE_GGTT_BIT) {
-		if (!tile && flags & XE_BO_CREATE_STOLEN_BIT)
+	if (bo->flags & XE_BO_FLAG_GGTT) {
+		if (!tile && flags & XE_BO_FLAG_STOLEN)
 			tile = xe_device_get_root_tile(xe);
 
 		xe_assert(xe, tile);
 
-		if (flags & XE_BO_FIXED_PLACEMENT_BIT) {
+		if (flags & XE_BO_FLAG_FIXED_PLACEMENT) {
 			err = xe_ggtt_insert_bo_at(tile->mem.ggtt, bo,
 						   start + bo->size, U64_MAX);
 		} else {
@@ -1456,7 +1450,7 @@ struct xe_bo *xe_bo_create_user(struct xe_device *xe, struct xe_tile *tile,
 {
 	struct xe_bo *bo = __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL,
 						 cpu_caching, type,
-						 flags | XE_BO_CREATE_USER_BIT);
+						 flags | XE_BO_FLAG_USER);
 	if (!IS_ERR(bo))
 		xe_bo_unlock_vm_held(bo);
 
@@ -1485,12 +1479,12 @@ struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile
 	u64 start = offset == ~0ull ? 0 : offset;
 	u64 end = offset == ~0ull ? offset : start + size;
 
-	if (flags & XE_BO_CREATE_STOLEN_BIT &&
+	if (flags & XE_BO_FLAG_STOLEN &&
 	    xe_ttm_stolen_cpu_access_needs_ggtt(xe))
-		flags |= XE_BO_CREATE_GGTT_BIT;
+		flags |= XE_BO_FLAG_GGTT;
 
 	bo = xe_bo_create_locked_range(xe, tile, vm, size, start, end, type,
-				       flags | XE_BO_NEEDS_CPU_ACCESS);
+				       flags | XE_BO_FLAG_NEEDS_CPU_ACCESS);
 	if (IS_ERR(bo))
 		return bo;
 
@@ -1587,13 +1581,15 @@ struct xe_bo *xe_managed_bo_create_from_data(struct xe_device *xe, struct xe_til
 int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, struct xe_bo **src)
 {
 	struct xe_bo *bo;
+	u32 dst_flags = XE_BO_FLAG_VRAM_IF_DGFX(tile) | XE_BO_FLAG_GGTT;
+
+	dst_flags |= (*src)->flags & XE_BO_FLAG_GGTT_INVALIDATE;
 
 	xe_assert(xe, IS_DGFX(xe));
 	xe_assert(xe, !(*src)->vmap.is_iomem);
 
-	bo = xe_managed_bo_create_from_data(xe, tile, (*src)->vmap.vaddr, (*src)->size,
-					    XE_BO_CREATE_VRAM_IF_DGFX(tile) |
-					    XE_BO_CREATE_GGTT_BIT);
+	bo = xe_managed_bo_create_from_data(xe, tile, (*src)->vmap.vaddr,
+					    (*src)->size, dst_flags);
 	if (IS_ERR(bo))
 		return PTR_ERR(bo);
 
@@ -1668,8 +1664,8 @@ int xe_bo_pin(struct xe_bo *bo)
 	xe_assert(xe, !xe_bo_is_user(bo));
 
 	/* Pinned object must be in GGTT or have pinned flag */
-	xe_assert(xe, bo->flags & (XE_BO_CREATE_PINNED_BIT |
-				   XE_BO_CREATE_GGTT_BIT));
+	xe_assert(xe, bo->flags & (XE_BO_FLAG_PINNED |
+				   XE_BO_FLAG_GGTT));
 
 	/*
 	 * No reason we can't support pinning imported dma-bufs we just don't
@@ -1690,7 +1686,7 @@ int xe_bo_pin(struct xe_bo *bo)
 	 * during suspend / resume (force restore to same physical address).
	 */
 	if (IS_DGFX(xe) && !(IS_ENABLED(CONFIG_DRM_XE_DEBUG) &&
-			     bo->flags & XE_BO_INTERNAL_TEST)) {
+			     bo->flags & XE_BO_FLAG_INTERNAL_TEST)) {
 		struct ttm_place *place = &(bo->placements[0]);
 
 		if (mem_type_is_vram(place->mem_type)) {
@@ -1758,7 +1754,7 @@ void xe_bo_unpin(struct xe_bo *bo)
 	xe_assert(xe, xe_bo_is_pinned(bo));
 
 	if (IS_DGFX(xe) && !(IS_ENABLED(CONFIG_DRM_XE_DEBUG) &&
-			     bo->flags & XE_BO_INTERNAL_TEST)) {
+			     bo->flags & XE_BO_FLAG_INTERNAL_TEST)) {
 		struct ttm_place *place = &(bo->placements[0]);
 
 		if (mem_type_is_vram(place->mem_type)) {
@@ -1861,7 +1857,7 @@ int xe_bo_vmap(struct xe_bo *bo)
 
 	xe_bo_assert_held(bo);
 
-	if (!(bo->flags & XE_BO_NEEDS_CPU_ACCESS))
+	if (!(bo->flags & XE_BO_FLAG_NEEDS_CPU_ACCESS))
 		return -EINVAL;
 
 	if (!iosys_map_is_null(&bo->vmap))
@@ -1943,29 +1939,29 @@ int xe_gem_create_ioctl(struct drm_device *dev, void *data,
 
 	bo_flags = 0;
 	if (args->flags & DRM_XE_GEM_CREATE_FLAG_DEFER_BACKING)
-		bo_flags |= XE_BO_DEFER_BACKING;
+		bo_flags |= XE_BO_FLAG_DEFER_BACKING;
 
 	if (args->flags & DRM_XE_GEM_CREATE_FLAG_SCANOUT)
-		bo_flags |= XE_BO_SCANOUT_BIT;
+		bo_flags |= XE_BO_FLAG_SCANOUT;
 
-	bo_flags |= args->placement << (ffs(XE_BO_CREATE_SYSTEM_BIT) - 1);
+	bo_flags |= args->placement << (ffs(XE_BO_FLAG_SYSTEM) - 1);
 
 	if (args->flags & DRM_XE_GEM_CREATE_FLAG_NEEDS_VISIBLE_VRAM) {
-		if (XE_IOCTL_DBG(xe, !(bo_flags & XE_BO_CREATE_VRAM_MASK)))
+		if (XE_IOCTL_DBG(xe, !(bo_flags & XE_BO_FLAG_VRAM_MASK)))
 			return -EINVAL;
 
-		bo_flags |= XE_BO_NEEDS_CPU_ACCESS;
+		bo_flags |= XE_BO_FLAG_NEEDS_CPU_ACCESS;
 	}
 
 	if (XE_IOCTL_DBG(xe, !args->cpu_caching ||
 			 args->cpu_caching > DRM_XE_GEM_CPU_CACHING_WC))
 		return -EINVAL;
 
-	if (XE_IOCTL_DBG(xe, bo_flags & XE_BO_CREATE_VRAM_MASK &&
+	if (XE_IOCTL_DBG(xe, bo_flags & XE_BO_FLAG_VRAM_MASK &&
 			 args->cpu_caching != DRM_XE_GEM_CPU_CACHING_WC))
 		return -EINVAL;
 
-	if (XE_IOCTL_DBG(xe, bo_flags & XE_BO_SCANOUT_BIT &&
+	if (XE_IOCTL_DBG(xe, bo_flags & XE_BO_FLAG_SCANOUT &&
			 args->cpu_caching == DRM_XE_GEM_CPU_CACHING_WB))
 		return -EINVAL;
 
@ -2206,6 +2202,9 @@ bool xe_bo_needs_ccs_pages(struct xe_bo *bo)
|
||||
{
|
||||
struct xe_device *xe = xe_bo_device(bo);
|
||||
|
||||
if (GRAPHICS_VER(xe) >= 20 && IS_DGFX(xe))
|
||||
return false;
|
||||
|
||||
if (!xe_device_has_flat_ccs(xe) || bo->ttm.type != ttm_bo_type_device)
|
||||
return false;
|
||||
|
||||
@ -2214,7 +2213,7 @@ bool xe_bo_needs_ccs_pages(struct xe_bo *bo)
|
||||
* can't be used since there's no CCS storage associated with
|
||||
* non-VRAM addresses.
|
||||
*/
|
||||
if (IS_DGFX(xe) && (bo->flags & XE_BO_CREATE_SYSTEM_BIT))
|
||||
if (IS_DGFX(xe) && (bo->flags & XE_BO_FLAG_SYSTEM))
|
||||
return false;
|
||||
|
||||
return true;
|
||||
@ -2283,9 +2282,9 @@ int xe_bo_dumb_create(struct drm_file *file_priv,
|
||||
bo = xe_bo_create_user(xe, NULL, NULL, args->size,
|
||||
DRM_XE_GEM_CPU_CACHING_WC,
|
||||
ttm_bo_type_device,
|
||||
XE_BO_CREATE_VRAM_IF_DGFX(xe_device_get_root_tile(xe)) |
|
||||
XE_BO_CREATE_USER_BIT | XE_BO_SCANOUT_BIT |
|
||||
XE_BO_NEEDS_CPU_ACCESS);
|
||||
XE_BO_FLAG_VRAM_IF_DGFX(xe_device_get_root_tile(xe)) |
|
||||
XE_BO_FLAG_SCANOUT |
|
||||
XE_BO_FLAG_NEEDS_CPU_ACCESS);
|
||||
if (IS_ERR(bo))
|
||||
return PTR_ERR(bo);
|
||||
|
||||
|
||||
@@ -13,48 +13,34 @@
 #include "xe_vm_types.h"
 #include "xe_vm.h"

-/**
- * xe_vm_assert_held(vm) - Assert that the vm's reservation object is held.
- * @vm: The vm
- */
-#define xe_vm_assert_held(vm) dma_resv_assert_held(xe_vm_resv(vm))
-
 #define XE_DEFAULT_GTT_SIZE_MB		3072ULL /* 3GB by default */

-#define XE_BO_CREATE_USER_BIT		BIT(0)
+#define XE_BO_FLAG_USER			BIT(0)
 /* The bits below need to be contiguous, or things break */
-#define XE_BO_CREATE_SYSTEM_BIT		BIT(1)
-#define XE_BO_CREATE_VRAM0_BIT		BIT(2)
-#define XE_BO_CREATE_VRAM1_BIT		BIT(3)
-#define XE_BO_CREATE_VRAM_MASK		(XE_BO_CREATE_VRAM0_BIT | \
-					 XE_BO_CREATE_VRAM1_BIT)
+#define XE_BO_FLAG_SYSTEM		BIT(1)
+#define XE_BO_FLAG_VRAM0		BIT(2)
+#define XE_BO_FLAG_VRAM1		BIT(3)
+#define XE_BO_FLAG_VRAM_MASK		(XE_BO_FLAG_VRAM0 | XE_BO_FLAG_VRAM1)
 /* -- */
-#define XE_BO_CREATE_STOLEN_BIT		BIT(4)
-#define XE_BO_CREATE_VRAM_IF_DGFX(tile) \
-	(IS_DGFX(tile_to_xe(tile)) ? XE_BO_CREATE_VRAM0_BIT << (tile)->id : \
-	 XE_BO_CREATE_SYSTEM_BIT)
-#define XE_BO_CREATE_GGTT_BIT		BIT(5)
-#define XE_BO_CREATE_IGNORE_MIN_PAGE_SIZE_BIT BIT(6)
-#define XE_BO_CREATE_PINNED_BIT		BIT(7)
-#define XE_BO_CREATE_NO_RESV_EVICT	BIT(8)
-#define XE_BO_DEFER_BACKING		BIT(9)
-#define XE_BO_SCANOUT_BIT		BIT(10)
-#define XE_BO_FIXED_PLACEMENT_BIT	BIT(11)
-#define XE_BO_PAGETABLE			BIT(12)
-#define XE_BO_NEEDS_CPU_ACCESS		BIT(13)
-#define XE_BO_NEEDS_UC			BIT(14)
+#define XE_BO_FLAG_STOLEN		BIT(4)
+#define XE_BO_FLAG_VRAM_IF_DGFX(tile)	(IS_DGFX(tile_to_xe(tile)) ? \
+					 XE_BO_FLAG_VRAM0 << (tile)->id : \
+					 XE_BO_FLAG_SYSTEM)
+#define XE_BO_FLAG_GGTT			BIT(5)
+#define XE_BO_FLAG_IGNORE_MIN_PAGE_SIZE	BIT(6)
+#define XE_BO_FLAG_PINNED		BIT(7)
+#define XE_BO_FLAG_NO_RESV_EVICT	BIT(8)
+#define XE_BO_FLAG_DEFER_BACKING	BIT(9)
+#define XE_BO_FLAG_SCANOUT		BIT(10)
+#define XE_BO_FLAG_FIXED_PLACEMENT	BIT(11)
+#define XE_BO_FLAG_PAGETABLE		BIT(12)
+#define XE_BO_FLAG_NEEDS_CPU_ACCESS	BIT(13)
+#define XE_BO_FLAG_NEEDS_UC		BIT(14)
 #define XE_BO_NEEDS_64K			BIT(15)
+#define XE_BO_FLAG_GGTT_INVALIDATE	BIT(16)
 /* this one is trigger internally only */
-#define XE_BO_INTERNAL_TEST		BIT(30)
-#define XE_BO_INTERNAL_64K		BIT(31)
-
-#define XELPG_PPGTT_PTE_PAT3		BIT_ULL(62)
-#define XE2_PPGTT_PTE_PAT4		BIT_ULL(61)
-#define XE_PPGTT_PDE_PDPE_PAT2		BIT_ULL(12)
-#define XE_PPGTT_PTE_PAT2		BIT_ULL(7)
-#define XE_PPGTT_PTE_PAT1		BIT_ULL(4)
-#define XE_PPGTT_PTE_PAT0		BIT_ULL(3)
+#define XE_BO_FLAG_INTERNAL_TEST	BIT(30)
+#define XE_BO_FLAG_INTERNAL_64K		BIT(31)

 #define XE_PTE_SHIFT			12
 #define XE_PAGE_SIZE			(1 << XE_PTE_SHIFT)
@@ -68,20 +54,6 @@
 #define XE_64K_PTE_MASK			(XE_64K_PAGE_SIZE - 1)
 #define XE_64K_PDE_MASK			(XE_PDE_MASK >> 4)

-#define XE_PDE_PS_2M			BIT_ULL(7)
-#define XE_PDPE_PS_1G			BIT_ULL(7)
-#define XE_PDE_IPS_64K			BIT_ULL(11)
-
-#define XE_GGTT_PTE_DM			BIT_ULL(1)
-#define XE_USM_PPGTT_PTE_AE		BIT_ULL(10)
-#define XE_PPGTT_PTE_DM			BIT_ULL(11)
-#define XE_PDE_64K			BIT_ULL(6)
-#define XE_PTE_PS64			BIT_ULL(8)
-#define XE_PTE_NULL			BIT_ULL(9)
-
-#define XE_PAGE_PRESENT			BIT_ULL(0)
-#define XE_PAGE_RW			BIT_ULL(1)
-
 #define XE_PL_SYSTEM			TTM_PL_SYSTEM
 #define XE_PL_TT			TTM_PL_TT
 #define XE_PL_VRAM0			TTM_PL_VRAM

@@ -146,7 +146,7 @@ int xe_bo_restore_kernel(struct xe_device *xe)
 			return ret;
 		}

-		if (bo->flags & XE_BO_CREATE_GGTT_BIT) {
+		if (bo->flags & XE_BO_FLAG_GGTT) {
 			struct xe_tile *tile = bo->tile;

 			mutex_lock(&tile->mem.ggtt->lock);
@@ -220,7 +220,7 @@ int xe_bo_restore_user(struct xe_device *xe)
 	list_splice_tail(&still_in_list, &xe->pinned.external_vram);
 	spin_unlock(&xe->pinned.lock);

-	/* Wait for validate to complete */
+	/* Wait for restore to complete */
 	for_each_tile(tile, xe, id)
 		xe_tile_migrate_wait(tile);

@@ -12,6 +12,8 @@
 #include "xe_bo.h"
 #include "xe_device.h"
 #include "xe_gt_debugfs.h"
+#include "xe_pm.h"
+#include "xe_sriov.h"
 #include "xe_step.h"

 #ifdef CONFIG_DRM_XE_DEBUG
@@ -37,6 +39,8 @@ static int info(struct seq_file *m, void *data)
 	struct xe_gt *gt;
 	u8 id;

+	xe_pm_runtime_get(xe);
+
 	drm_printf(&p, "graphics_verx100 %d\n", xe->info.graphics_verx100);
 	drm_printf(&p, "media_verx100 %d\n", xe->info.media_verx100);
 	drm_printf(&p, "stepping G:%s M:%s D:%s B:%s\n",
@@ -63,11 +67,22 @@ static int info(struct seq_file *m, void *data)
 			   gt->info.engine_mask);
 	}

+	xe_pm_runtime_put(xe);
 	return 0;
 }

+static int sriov_info(struct seq_file *m, void *data)
+{
+	struct xe_device *xe = node_to_xe(m->private);
+	struct drm_printer p = drm_seq_file_printer(m);
+
+	xe_sriov_print_info(xe, &p);
+	return 0;
+}
+
 static const struct drm_info_list debugfs_list[] = {
 	{"info", info, 0},
+	{ .name = "sriov_info", .show = sriov_info, },
 };

 static int forcewake_open(struct inode *inode, struct file *file)
@@ -76,8 +91,7 @@ static int forcewake_open(struct inode *inode, struct file *file)
 	struct xe_gt *gt;
 	u8 id;

-	xe_device_mem_access_get(xe);
-
+	xe_pm_runtime_get(xe);
 	for_each_gt(gt, xe, id)
 		XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL));

@@ -92,8 +106,7 @@ static int forcewake_release(struct inode *inode, struct file *file)

 	for_each_gt(gt, xe, id)
 		XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));

-	xe_device_mem_access_put(xe);
+	xe_pm_runtime_put(xe);

 	return 0;
 }
@@ -127,7 +140,7 @@ void xe_debugfs_register(struct xe_device *xe)
 		if (man) {
 			char name[16];

-			sprintf(name, "vram%d_mm", mem_type - XE_PL_VRAM0);
+			snprintf(name, sizeof(name), "vram%d_mm", mem_type - XE_PL_VRAM0);
 			ttm_resource_manager_create_debugfs(man, root, name);
 		}
 	}

@@ -9,10 +9,13 @@
 #include <linux/devcoredump.h>
 #include <generated/utsrelease.h>

+#include <drm/drm_managed.h>
+
 #include "xe_device.h"
 #include "xe_exec_queue.h"
 #include "xe_force_wake.h"
 #include "xe_gt.h"
+#include "xe_gt_printk.h"
 #include "xe_guc_ct.h"
 #include "xe_guc_submit.h"
 #include "xe_hw_engine.h"
@@ -64,9 +67,11 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 {
 	struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);

-	xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
-	if (ss->vm)
-		xe_vm_snapshot_capture_delayed(ss->vm);
+	/* keep going if fw fails as we still want to save the memory and SW data */
+	if (xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL))
+		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+	xe_vm_snapshot_capture_delayed(ss->vm);
+	xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
 	xe_force_wake_put(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
 }

@@ -74,17 +79,19 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
 				   size_t count, void *data, size_t datalen)
 {
 	struct xe_devcoredump *coredump = data;
-	struct xe_device *xe = coredump_to_xe(coredump);
-	struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
+	struct xe_device *xe;
+	struct xe_devcoredump_snapshot *ss;
 	struct drm_printer p;
 	struct drm_print_iterator iter;
 	struct timespec64 ts;
 	int i;

-	/* Our device is gone already... */
-	if (!data || !coredump_to_xe(coredump))
+	if (!coredump)
 		return -ENODEV;

+	xe = coredump_to_xe(coredump);
+	ss = &coredump->snapshot;
+
 	/* Ensure delayed work is captured before continuing */
 	flush_work(&ss->work);

@@ -117,10 +124,8 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
 		if (coredump->snapshot.hwe[i])
 			xe_hw_engine_snapshot_print(coredump->snapshot.hwe[i],
 						    &p);
-	if (coredump->snapshot.vm) {
-		drm_printf(&p, "\n**** VM state ****\n");
-		xe_vm_snapshot_print(coredump->snapshot.vm, &p);
-	}
+	drm_printf(&p, "\n**** VM state ****\n");
+	xe_vm_snapshot_print(coredump->snapshot.vm, &p);

 	return count - iter.remain;
 }
@@ -180,10 +185,12 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 		}
 	}

-	xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+	/* keep going if fw fails as we still want to save the memory and SW data */
+	if (xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL))
+		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");

 	coredump->snapshot.ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
-	coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(job);
+	coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(q);
 	coredump->snapshot.job = xe_sched_job_snapshot_capture(job);
 	coredump->snapshot.vm = xe_vm_snapshot_capture(q->vm);

@@ -196,8 +203,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 		coredump->snapshot.hwe[id] = xe_hw_engine_snapshot_capture(hwe);
 	}

-	if (ss->vm)
-		queue_work(system_unbound_wq, &ss->work);
+	queue_work(system_unbound_wq, &ss->work);

 	xe_force_wake_put(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
 	dma_fence_end_signalling(cookie);
@@ -231,5 +237,14 @@ void xe_devcoredump(struct xe_sched_job *job)
 	dev_coredumpm(xe->drm.dev, THIS_MODULE, coredump, 0, GFP_KERNEL,
 		      xe_devcoredump_read, xe_devcoredump_free);
 }
-#endif
+
+static void xe_driver_devcoredump_fini(struct drm_device *drm, void *arg)
+{
+	dev_coredump_put(drm->dev);
+}
+
+int xe_devcoredump_init(struct xe_device *xe)
+{
+	return drmm_add_action_or_reset(&xe->drm, xe_driver_devcoredump_fini, xe);
+}
+#endif

@@ -11,10 +11,16 @@ struct xe_sched_job;

 #ifdef CONFIG_DEV_COREDUMP
 void xe_devcoredump(struct xe_sched_job *job);
+int xe_devcoredump_init(struct xe_device *xe);
 #else
 static inline void xe_devcoredump(struct xe_sched_job *job)
 {
 }
+
+static inline int xe_devcoredump_init(struct xe_device *xe)
+{
+	return 0;
+}
 #endif

 #endif

@@ -20,6 +20,7 @@
 #include "regs/xe_regs.h"
 #include "xe_bo.h"
 #include "xe_debugfs.h"
+#include "xe_devcoredump.h"
 #include "xe_dma_buf.h"
 #include "xe_drm_client.h"
 #include "xe_drv.h"
@@ -45,12 +46,6 @@
 #include "xe_vm.h"
 #include "xe_wait_user_fence.h"

-#ifdef CONFIG_LOCKDEP
-struct lockdep_map xe_device_mem_access_lockdep_map = {
-	.name = "xe_device_mem_access_lockdep_map"
-};
-#endif
-
 static int xe_file_open(struct drm_device *dev, struct drm_file *file)
 {
 	struct xe_device *xe = to_xe_device(dev);
@@ -136,15 +131,48 @@ static const struct drm_ioctl_desc xe_ioctls[] = {
 			  DRM_RENDER_ALLOW),
 };

+static long xe_drm_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	struct drm_file *file_priv = file->private_data;
+	struct xe_device *xe = to_xe_device(file_priv->minor->dev);
+	long ret;
+
+	ret = xe_pm_runtime_get_ioctl(xe);
+	if (ret >= 0)
+		ret = drm_ioctl(file, cmd, arg);
+	xe_pm_runtime_put(xe);
+
+	return ret;
+}
+
+#ifdef CONFIG_COMPAT
+static long xe_drm_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	struct drm_file *file_priv = file->private_data;
+	struct xe_device *xe = to_xe_device(file_priv->minor->dev);
+	long ret;
+
+	ret = xe_pm_runtime_get_ioctl(xe);
+	if (ret >= 0)
+		ret = drm_compat_ioctl(file, cmd, arg);
+	xe_pm_runtime_put(xe);
+
+	return ret;
+}
+#else
+/* similarly to drm_compat_ioctl, let's it be assigned to .compat_ioctl unconditionally */
+#define xe_drm_compat_ioctl NULL
+#endif
+
 static const struct file_operations xe_driver_fops = {
 	.owner = THIS_MODULE,
 	.open = drm_open,
 	.release = drm_release_noglobal,
-	.unlocked_ioctl = drm_ioctl,
+	.unlocked_ioctl = xe_drm_ioctl,
 	.mmap = drm_gem_mmap,
 	.poll = drm_poll,
 	.read = drm_read,
-	.compat_ioctl = drm_compat_ioctl,
+	.compat_ioctl = xe_drm_compat_ioctl,
 	.llseek = noop_llseek,
 #ifdef CONFIG_PROC_FS
 	.show_fdinfo = drm_show_fdinfo,
@@ -389,8 +417,70 @@ static int xe_set_dma_info(struct xe_device *xe)
 	return err;
 }

-/*
- * Initialize MMIO resources that don't require any knowledge about tile count.
+static bool verify_lmem_ready(struct xe_gt *gt)
+{
+	u32 val = xe_mmio_read32(gt, GU_CNTL) & LMEM_INIT;
+
+	return !!val;
+}
+
+static int wait_for_lmem_ready(struct xe_device *xe)
+{
+	struct xe_gt *gt = xe_root_mmio_gt(xe);
+	unsigned long timeout, start;
+
+	if (!IS_DGFX(xe))
+		return 0;
+
+	if (IS_SRIOV_VF(xe))
+		return 0;
+
+	if (verify_lmem_ready(gt))
+		return 0;
+
+	drm_dbg(&xe->drm, "Waiting for lmem initialization\n");
+
+	start = jiffies;
+	timeout = start + msecs_to_jiffies(60 * 1000); /* 60 sec! */
+
+	do {
+		if (signal_pending(current))
+			return -EINTR;
+
+		/*
+		 * The boot firmware initializes local memory and
+		 * assesses its health. If memory training fails,
+		 * the punit will have been instructed to keep the GT powered
+		 * down.we won't be able to communicate with it
+		 *
+		 * If the status check is done before punit updates the register,
+		 * it can lead to the system being unusable.
+		 * use a timeout and defer the probe to prevent this.
+		 */
+		if (time_after(jiffies, timeout)) {
+			drm_dbg(&xe->drm, "lmem not initialized by firmware\n");
+			return -EPROBE_DEFER;
+		}
+
+		msleep(20);
+
+	} while (!verify_lmem_ready(gt));
+
+	drm_dbg(&xe->drm, "lmem ready after %ums",
+		jiffies_to_msecs(jiffies - start));
+
+	return 0;
+}
+
+/**
+ * xe_device_probe_early: Device early probe
+ * @xe: xe device instance
+ *
+ * Initialize MMIO resources that don't require any
+ * knowledge about tile count. Also initialize pcode and
+ * check vram initialization on root tile.
+ *
+ * Return: 0 on success, error code on failure
  */
 int xe_device_probe_early(struct xe_device *xe)
 {
@@ -400,7 +490,13 @@ int xe_device_probe_early(struct xe_device *xe)
 	if (err)
 		return err;

-	err = xe_mmio_root_tile_init(xe);
+	xe_sriov_probe_early(xe);
+
+	err = xe_pcode_probe_early(xe);
+	if (err)
+		return err;
+
+	err = wait_for_lmem_ready(xe);
 	if (err)
 		return err;

@@ -478,15 +574,15 @@ int xe_device_probe(struct xe_device *xe)
 		return err;
 	}

+	err = xe_devcoredump_init(xe);
+	if (err)
+		return err;
 	err = drmm_add_action_or_reset(&xe->drm, xe_driver_flr_fini, xe);
 	if (err)
 		return err;

-	for_each_gt(gt, xe, id) {
-		err = xe_pcode_probe(gt);
-		if (err)
-			return err;
-	}
+	for_each_gt(gt, xe, id)
+		xe_pcode_init(gt);

 	err = xe_display_init_noirq(xe);
 	if (err)
@@ -553,11 +649,7 @@ int xe_device_probe(struct xe_device *xe)

 	xe_hwmon_register(xe);

-	err = drmm_add_action_or_reset(&xe->drm, xe_device_sanitize, xe);
-	if (err)
-		return err;
-
-	return 0;
+	return drmm_add_action_or_reset(&xe->drm, xe_device_sanitize, xe);

 err_fini_display:
 	xe_display_driver_remove(xe);
@@ -621,87 +713,20 @@ u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size)
 		DIV_ROUND_UP_ULL(size, NUM_BYTES_PER_CCS_BYTE(xe)) : 0;
 }

-bool xe_device_mem_access_ongoing(struct xe_device *xe)
-{
-	if (xe_pm_read_callback_task(xe) != NULL)
-		return true;
-
-	return atomic_read(&xe->mem_access.ref);
-}
-
 /**
  * xe_device_assert_mem_access - Inspect the current runtime_pm state.
  * @xe: xe device instance
  *
  * To be used before any kind of memory access. It will splat a debug warning
  * if the device is currently sleeping. But it doesn't guarantee in any way
  * that the device is going to remain awake. Xe PM runtime get and put
  * functions might be added to the outer bound of the memory access, while
  * this check is intended for inner usage to splat some warning if the worst
  * case has just happened.
  */
 void xe_device_assert_mem_access(struct xe_device *xe)
 {
-	XE_WARN_ON(!xe_device_mem_access_ongoing(xe));
-}
-
-bool xe_device_mem_access_get_if_ongoing(struct xe_device *xe)
-{
-	bool active;
-
-	if (xe_pm_read_callback_task(xe) == current)
-		return true;
-
-	active = xe_pm_runtime_get_if_active(xe);
-	if (active) {
-		int ref = atomic_inc_return(&xe->mem_access.ref);
-
-		xe_assert(xe, ref != S32_MAX);
-	}
-
-	return active;
-}
-
-void xe_device_mem_access_get(struct xe_device *xe)
-{
-	int ref;
-
-	/*
-	 * This looks racy, but should be fine since the pm_callback_task only
-	 * transitions from NULL -> current (and back to NULL again), during the
-	 * runtime_resume() or runtime_suspend() callbacks, for which there can
-	 * only be a single one running for our device. We only need to prevent
-	 * recursively calling the runtime_get or runtime_put from those
-	 * callbacks, as well as preventing triggering any access_ongoing
-	 * asserts.
-	 */
-	if (xe_pm_read_callback_task(xe) == current)
-		return;
-
-	/*
-	 * Since the resume here is synchronous it can be quite easy to deadlock
-	 * if we are not careful. Also in practice it might be quite timing
-	 * sensitive to ever see the 0 -> 1 transition with the callers locks
-	 * held, so deadlocks might exist but are hard for lockdep to ever see.
-	 * With this in mind, help lockdep learn about the potentially scary
-	 * stuff that can happen inside the runtime_resume callback by acquiring
-	 * a dummy lock (it doesn't protect anything and gets compiled out on
-	 * non-debug builds). Lockdep then only needs to see the
-	 * mem_access_lockdep_map -> runtime_resume callback once, and then can
-	 * hopefully validate all the (callers_locks) -> mem_access_lockdep_map.
-	 * For example if the (callers_locks) are ever grabbed in the
-	 * runtime_resume callback, lockdep should give us a nice splat.
-	 */
-	lock_map_acquire(&xe_device_mem_access_lockdep_map);
-	lock_map_release(&xe_device_mem_access_lockdep_map);
-
-	xe_pm_runtime_get(xe);
-	ref = atomic_inc_return(&xe->mem_access.ref);
-
-	xe_assert(xe, ref != S32_MAX);
-
-}
-
-void xe_device_mem_access_put(struct xe_device *xe)
-{
-	int ref;
-
-	if (xe_pm_read_callback_task(xe) == current)
-		return;
-
-	ref = atomic_dec_return(&xe->mem_access.ref);
-	xe_pm_runtime_put(xe);
-
-	xe_assert(xe, ref >= 0);
+	xe_assert(xe, !xe_pm_runtime_suspended(xe));
 }

 void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p)

@@ -16,10 +16,6 @@ struct xe_file;
 #include "xe_force_wake.h"
 #include "xe_macros.h"

-#ifdef CONFIG_LOCKDEP
-extern struct lockdep_map xe_device_mem_access_lockdep_map;
-#endif
-
 static inline struct xe_device *to_xe_device(const struct drm_device *dev)
 {
 	return container_of(dev, struct xe_device, drm);
@@ -137,12 +133,7 @@ static inline struct xe_force_wake *gt_to_fw(struct xe_gt *gt)
 	return &gt->mmio.fw;
 }

-void xe_device_mem_access_get(struct xe_device *xe);
-bool xe_device_mem_access_get_if_ongoing(struct xe_device *xe);
-void xe_device_mem_access_put(struct xe_device *xe);
-
 void xe_device_assert_mem_access(struct xe_device *xe);
-bool xe_device_mem_access_ongoing(struct xe_device *xe);

 static inline bool xe_device_in_fault_mode(struct xe_device *xe)
 {

@@ -35,7 +35,9 @@ vram_d3cold_threshold_show(struct device *dev,
 	if (!xe)
 		return -EINVAL;

+	xe_pm_runtime_get(xe);
 	ret = sysfs_emit(buf, "%d\n", xe->d3cold.vram_threshold);
+	xe_pm_runtime_put(xe);

 	return ret;
 }
@@ -58,7 +60,9 @@ vram_d3cold_threshold_store(struct device *dev, struct device_attribute *attr,

 	drm_dbg(&xe->drm, "vram_d3cold_threshold: %u\n", vram_d3cold_threshold);

+	xe_pm_runtime_get(xe);
 	ret = xe_pm_set_vram_threshold(xe, vram_d3cold_threshold);
+	xe_pm_runtime_put(xe);

 	return ret ?: count;
 }
@@ -72,18 +76,14 @@ static void xe_device_sysfs_fini(struct drm_device *drm, void *arg)
 	sysfs_remove_file(&xe->drm.dev->kobj, &dev_attr_vram_d3cold_threshold.attr);
 }

-void xe_device_sysfs_init(struct xe_device *xe)
+int xe_device_sysfs_init(struct xe_device *xe)
 {
 	struct device *dev = xe->drm.dev;
 	int ret;

 	ret = sysfs_create_file(&dev->kobj, &dev_attr_vram_d3cold_threshold.attr);
-	if (ret) {
-		drm_warn(&xe->drm, "Failed to create sysfs file\n");
-		return;
-	}
-
-	ret = drmm_add_action_or_reset(&xe->drm, xe_device_sysfs_fini, xe);
 	if (ret)
-		drm_warn(&xe->drm, "Failed to add sysfs fini drm action\n");
+		return ret;
+
+	return drmm_add_action_or_reset(&xe->drm, xe_device_sysfs_fini, xe);
 }

@@ -8,6 +8,6 @@

 struct xe_device;

-void xe_device_sysfs_init(struct xe_device *xe);
+int xe_device_sysfs_init(struct xe_device *xe);

 #endif

@@ -321,6 +321,10 @@ struct xe_device {
 	struct {
 		/** @sriov.__mode: SR-IOV mode (Don't access directly!) */
 		enum xe_sriov_mode __mode;

+		/** @sriov.pf: PF specific data */
+		struct xe_device_pf pf;
+
 		/** @sriov.wq: workqueue used by the virtualization workers */
 		struct workqueue_struct *wq;
 	} sriov;
@@ -380,9 +384,6 @@ struct xe_device {
 	 * triggering additional actions when they occur.
 	 */
 	struct {
-		/** @mem_access.ref: ref count of memory accesses */
-		atomic_t ref;
-
 		/**
 		 * @mem_access.vram_userfault: Encapsulate vram_userfault
 		 * related stuff

@@ -16,6 +16,7 @@
 #include "tests/xe_test.h"
 #include "xe_bo.h"
 #include "xe_device.h"
+#include "xe_pm.h"
 #include "xe_ttm_vram_mgr.h"
 #include "xe_vm.h"

@@ -33,7 +34,7 @@ static int xe_dma_buf_attach(struct dma_buf *dmabuf,
 	if (!attach->peer2peer && !xe_bo_can_migrate(gem_to_xe_bo(obj), XE_PL_TT))
 		return -EOPNOTSUPP;

-	xe_device_mem_access_get(to_xe_device(obj->dev));
+	xe_pm_runtime_get(to_xe_device(obj->dev));
 	return 0;
 }

@@ -42,7 +43,7 @@ static void xe_dma_buf_detach(struct dma_buf *dmabuf,
 {
 	struct drm_gem_object *obj = attach->dmabuf->priv;

-	xe_device_mem_access_put(to_xe_device(obj->dev));
+	xe_pm_runtime_put(to_xe_device(obj->dev));
 }

 static int xe_dma_buf_pin(struct dma_buf_attachment *attach)
@@ -216,7 +217,7 @@ xe_dma_buf_init_obj(struct drm_device *dev, struct xe_bo *storage,
 	dma_resv_lock(resv, NULL);
 	bo = ___xe_bo_create_locked(xe, storage, NULL, resv, NULL, dma_buf->size,
 				    0, /* Will require 1way or 2way for vm_bind */
-				    ttm_bo_type_sg, XE_BO_CREATE_SYSTEM_BIT);
+				    ttm_bo_type_sg, XE_BO_FLAG_SYSTEM);
 	if (IS_ERR(bo)) {
 		ret = PTR_ERR(bo);
 		goto error;

@@ -78,7 +78,7 @@ void xe_drm_client_add_bo(struct xe_drm_client *client,

 	spin_lock(&client->bos_lock);
 	bo->client = xe_drm_client_get(client);
-	list_add_tail_rcu(&bo->client_link, &client->bos_list);
+	list_add_tail(&bo->client_link, &client->bos_list);
 	spin_unlock(&client->bos_lock);
 }

@@ -96,7 +96,7 @@ void xe_drm_client_remove_bo(struct xe_bo *bo)
 	struct xe_drm_client *client = bo->client;

 	spin_lock(&client->bos_lock);
-	list_del_rcu(&bo->client_link);
+	list_del(&bo->client_link);
 	spin_unlock(&client->bos_lock);

 	xe_drm_client_put(client);
@@ -154,8 +154,8 @@ static void show_meminfo(struct drm_printer *p, struct drm_file *file)

 	/* Internal objects. */
 	spin_lock(&client->bos_lock);
-	list_for_each_entry_rcu(bo, &client->bos_list, client_link) {
-		if (!bo || !kref_get_unless_zero(&bo->ttm.base.refcount))
+	list_for_each_entry(bo, &client->bos_list, client_link) {
+		if (!kref_get_unless_zero(&bo->ttm.base.refcount))
 			continue;
 		bo_meminfo(bo, stats);
 		xe_bo_put(bo);

@@ -216,7 +216,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 			goto err_unlock_list;
 		}
 		for (i = 0; i < num_syncs; i++)
-			xe_sync_entry_signal(&syncs[i], NULL, fence);
+			xe_sync_entry_signal(&syncs[i], fence);
 		xe_exec_queue_last_fence_set(q, vm, fence);
 		dma_fence_put(fence);
 	}
@@ -294,9 +294,10 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 	drm_gpuvm_resv_add_fence(&vm->gpuvm, exec, &job->drm.s_fence->finished,
 				 DMA_RESV_USAGE_BOOKKEEP, DMA_RESV_USAGE_WRITE);

-	for (i = 0; i < num_syncs; i++)
-		xe_sync_entry_signal(&syncs[i], job,
-				     &job->drm.s_fence->finished);
+	for (i = 0; i < num_syncs; i++) {
+		xe_sync_entry_signal(&syncs[i], &job->drm.s_fence->finished);
+		xe_sched_job_init_user_fence(job, &syncs[i]);
+	}

 	if (xe_exec_queue_is_lr(q))
 		q->ring_ops->emit_job(job);
@@ -320,10 +321,7 @@ int xe_exec_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
 err_exec:
 	drm_exec_fini(exec);
 err_unlock_list:
-	if (write_locked)
-		up_write(&vm->lock);
-	else
-		up_read(&vm->lock);
+	up_read(&vm->lock);
 	if (err == -EAGAIN && !skip_retry)
 		goto retry;
 err_syncs:

@@ -31,7 +31,14 @@ enum xe_exec_queue_sched_prop {
 };

 static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
-				      u64 extensions, int ext_number, bool create);
+				      u64 extensions, int ext_number);
+
+static void __xe_exec_queue_free(struct xe_exec_queue *q)
+{
+	if (q->vm)
+		xe_vm_put(q->vm);
+	kfree(q);
+}

 static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
 						   struct xe_vm *vm,
@@ -74,21 +81,21 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
 	else
 		q->sched_props.priority = XE_EXEC_QUEUE_PRIORITY_NORMAL;

-	if (vm)
-		q->vm = xe_vm_get(vm);
-
 	if (extensions) {
 		/*
		 * may set q->usm, must come before xe_lrc_init(),
		 * may overwrite q->sched_props, must come before q->ops->init()
 		 */
-		err = exec_queue_user_extensions(xe, q, extensions, 0, true);
+		err = exec_queue_user_extensions(xe, q, extensions, 0);
 		if (err) {
-			kfree(q);
+			__xe_exec_queue_free(q);
 			return ERR_PTR(err);
 		}
 	}

+	if (vm)
+		q->vm = xe_vm_get(vm);
+
 	if (xe_exec_queue_is_parallel(q)) {
 		q->parallel.composite_fence_ctx = dma_fence_context_alloc(1);
 		q->parallel.composite_fence_seqno = XE_FENCE_INITIAL_SEQNO;
@@ -97,13 +104,6 @@ static struct xe_exec_queue *__xe_exec_queue_alloc(struct xe_device *xe,
 	return q;
 }

-static void __xe_exec_queue_free(struct xe_exec_queue *q)
-{
-	if (q->vm)
-		xe_vm_put(q->vm);
-	kfree(q);
-}
-
 static int __xe_exec_queue_init(struct xe_exec_queue *q)
 {
 	struct xe_device *xe = gt_to_xe(q->gt);
@@ -128,7 +128,7 @@ static int __xe_exec_queue_init(struct xe_exec_queue *q)
 	 * already grabbed the rpm ref outside any sensitive locks.
 	 */
-	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
-		drm_WARN_ON(&xe->drm, !xe_device_mem_access_get_if_ongoing(xe));
+	xe_pm_runtime_get_noresume(xe);

 	return 0;
@@ -217,7 +217,7 @@ void xe_exec_queue_fini(struct xe_exec_queue *q)

 	for (i = 0; i < q->width; ++i)
 		xe_lrc_finish(q->lrc + i);
-	if (!(q->flags & EXEC_QUEUE_FLAG_PERMANENT) && (q->flags & EXEC_QUEUE_FLAG_VM || !q->vm))
-		xe_device_mem_access_put(gt_to_xe(q->gt));
+	xe_pm_runtime_put(gt_to_xe(q->gt));
 	__xe_exec_queue_free(q);
 }

@@ -225,22 +225,22 @@ void xe_exec_queue_assign_name(struct xe_exec_queue *q, u32 instance)
 {
 	switch (q->class) {
 	case XE_ENGINE_CLASS_RENDER:
-		sprintf(q->name, "rcs%d", instance);
+		snprintf(q->name, sizeof(q->name), "rcs%d", instance);
 		break;
 	case XE_ENGINE_CLASS_VIDEO_DECODE:
-		sprintf(q->name, "vcs%d", instance);
+		snprintf(q->name, sizeof(q->name), "vcs%d", instance);
 		break;
 	case XE_ENGINE_CLASS_VIDEO_ENHANCE:
-		sprintf(q->name, "vecs%d", instance);
+		snprintf(q->name, sizeof(q->name), "vecs%d", instance);
 		break;
 	case XE_ENGINE_CLASS_COPY:
-		sprintf(q->name, "bcs%d", instance);
+		snprintf(q->name, sizeof(q->name), "bcs%d", instance);
 		break;
 	case XE_ENGINE_CLASS_COMPUTE:
-		sprintf(q->name, "ccs%d", instance);
+		snprintf(q->name, sizeof(q->name), "ccs%d", instance);
 		break;
 	case XE_ENGINE_CLASS_OTHER:
-		sprintf(q->name, "gsccs%d", instance);
+		snprintf(q->name, sizeof(q->name), "gsccs%d", instance);
 		break;
 	default:
 		XE_WARN_ON(q->class);
@@ -268,7 +268,7 @@ xe_exec_queue_device_get_max_priority(struct xe_device *xe)
 }

 static int exec_queue_set_priority(struct xe_device *xe, struct xe_exec_queue *q,
-				   u64 value, bool create)
+				   u64 value)
 {
 	if (XE_IOCTL_DBG(xe, value > XE_EXEC_QUEUE_PRIORITY_HIGH))
 		return -EINVAL;
@@ -276,9 +276,6 @@ static int exec_queue_set_priority(struct xe_device *xe, struct xe_exec_queue *q
 	if (XE_IOCTL_DBG(xe, value > xe_exec_queue_device_get_max_priority(xe)))
 		return -EPERM;

-	if (!create)
-		return q->ops->set_priority(q, value);
-
 	q->sched_props.priority = value;
 	return 0;
 }
@@ -336,7 +333,7 @@ xe_exec_queue_get_prop_minmax(struct xe_hw_engine_class_intf *eclass,
 }

 static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *q,
-				    u64 value, bool create)
+				    u64 value)
 {
 	u32 min = 0, max = 0;

@@ -347,16 +344,13 @@ static int exec_queue_set_timeslice(struct xe_device *xe, struct xe_exec_queue *
 	    !xe_hw_engine_timeout_in_range(value, min, max))
 		return -EINVAL;

-	if (!create)
-		return q->ops->set_timeslice(q, value);
-
 	q->sched_props.timeslice_us = value;
 	return 0;
 }

 typedef int (*xe_exec_queue_set_property_fn)(struct xe_device *xe,
 					     struct xe_exec_queue *q,
-					     u64 value, bool create);
+					     u64 value);

 static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {
 	[DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY] = exec_queue_set_priority,
@@ -365,8 +359,7 @@ static const xe_exec_queue_set_property_fn exec_queue_set_property_funcs[] = {

 static int exec_queue_user_ext_set_property(struct xe_device *xe,
 					    struct xe_exec_queue *q,
-					    u64 extension,
-					    bool create)
+					    u64 extension)
 {
 	u64 __user *address = u64_to_user_ptr(extension);
 	struct drm_xe_ext_set_property ext;
@@ -388,21 +381,20 @@ static int exec_queue_user_ext_set_property(struct xe_device *xe,
 	if (!exec_queue_set_property_funcs[idx])
 		return -EINVAL;

-	return exec_queue_set_property_funcs[idx](xe, q, ext.value, create);
+	return exec_queue_set_property_funcs[idx](xe, q, ext.value);
 }

 typedef int (*xe_exec_queue_user_extension_fn)(struct xe_device *xe,
 					       struct xe_exec_queue *q,
-					       u64 extension,
-					       bool create);
+					       u64 extension);

-static const xe_exec_queue_set_property_fn exec_queue_user_extension_funcs[] = {
+static const xe_exec_queue_user_extension_fn exec_queue_user_extension_funcs[] = {
 	[DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY] = exec_queue_user_ext_set_property,
 };

 #define MAX_USER_EXTENSIONS	16
 static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue *q,
-				      u64 extensions, int ext_number, bool create)
+				      u64 extensions, int ext_number)
 {
 	u64 __user *address = u64_to_user_ptr(extensions);
 	struct drm_xe_user_extension ext;
@@ -423,13 +415,13 @@ static int exec_queue_user_extensions(struct xe_device *xe, struct xe_exec_queue

 	idx = array_index_nospec(ext.name,
 				 ARRAY_SIZE(exec_queue_user_extension_funcs));
-	err = exec_queue_user_extension_funcs[idx](xe, q, extensions, create);
+	err = exec_queue_user_extension_funcs[idx](xe, q, extensions);
 	if (XE_IOCTL_DBG(xe, err))
 		return err;

 	if (ext.next_extension)
 		return exec_queue_user_extensions(xe, q, ext.next_extension,
-						  ++ext_number, create);
+						  ++ext_number);

 	return 0;
 }
@@ -597,7 +589,7 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
 			return -EINVAL;

 		/* The migration vm doesn't hold rpm ref */
-		xe_device_mem_access_get(xe);
+		xe_pm_runtime_get_noresume(xe);

 		flags = EXEC_QUEUE_FLAG_VM | (id ? EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD : 0);

@@ -606,7 +598,7 @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
 					   args->width, hwe, flags,
 					   args->extensions);

-		xe_device_mem_access_put(xe); /* now held by engine */
+		xe_pm_runtime_put(xe); /* now held by engine */

 		xe_vm_put(migrate_vm);
 		if (IS_ERR(new)) {

@@ -76,14 +76,12 @@ struct xe_exec_queue {
 #define EXEC_QUEUE_FLAG_KERNEL			BIT(1)
 /* kernel engine only destroyed at driver unload */
 #define EXEC_QUEUE_FLAG_PERMANENT		BIT(2)
-/* queue keeps running pending jobs after destroy ioctl */
-#define EXEC_QUEUE_FLAG_PERSISTENT		BIT(3)
 /* for VM jobs. Caller needs to hold rpm ref when creating queue with this flag */
-#define EXEC_QUEUE_FLAG_VM			BIT(4)
+#define EXEC_QUEUE_FLAG_VM			BIT(3)
 /* child of VM queue for multi-tile VM jobs */
-#define EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD	BIT(5)
+#define EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD	BIT(4)
 /* kernel exec_queue only, set priority to highest level */
-#define EXEC_QUEUE_FLAG_HIGH_PRIORITY		BIT(6)
+#define EXEC_QUEUE_FLAG_HIGH_PRIORITY		BIT(5)

 /**
  * @flags: flags for this exec queue, should statically setup aside from ban

@@ -5,12 +5,14 @@

 #include "xe_ggtt.h"

+#include <linux/io-64-nonatomic-lo-hi.h>
 #include <linux/sizes.h>

 #include <drm/drm_managed.h>
 #include <drm/i915_drm.h>

 #include "regs/xe_gt_regs.h"
+#include "regs/xe_gtt_defs.h"
 #include "regs/xe_regs.h"
 #include "xe_assert.h"
 #include "xe_bo.h"
@@ -19,16 +21,10 @@
 #include "xe_gt_printk.h"
 #include "xe_gt_tlb_invalidation.h"
 #include "xe_map.h"
-#include "xe_mmio.h"
+#include "xe_pm.h"
 #include "xe_sriov.h"
 #include "xe_wopcm.h"

-#define XELPG_GGTT_PTE_PAT0	BIT_ULL(52)
-#define XELPG_GGTT_PTE_PAT1	BIT_ULL(53)
-
-/* GuC addresses above GUC_GGTT_TOP also don't map through the GTT */
-#define GUC_GGTT_TOP	0xFEE00000
-
 static u64 xelp_ggtt_pte_encode_bo(struct xe_bo *bo, u64 bo_offset,
 				   u16 pat_index)
 {
@@ -200,20 +196,20 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt)
 	return drmm_add_action_or_reset(&xe->drm, ggtt_fini_early, ggtt);
 }

+static void xe_ggtt_invalidate(struct xe_ggtt *ggtt);
+
 static void xe_ggtt_initial_clear(struct xe_ggtt *ggtt)
 {
 	struct drm_mm_node *hole;
 	u64 start, end;

 	/* Display may have allocated inside ggtt, so be careful with clearing here */
-	xe_device_mem_access_get(tile_to_xe(ggtt->tile));
 	mutex_lock(&ggtt->lock);
 	drm_mm_for_each_hole(hole, &ggtt->mm, start, end)
 		xe_ggtt_clear(ggtt, start, end - start);

 	xe_ggtt_invalidate(ggtt);
 	mutex_unlock(&ggtt->lock);
-	xe_device_mem_access_put(tile_to_xe(ggtt->tile));
 }

 int xe_ggtt_init(struct xe_ggtt *ggtt)
@@ -227,11 +223,11 @@ int xe_ggtt_init(struct xe_ggtt *ggtt)
 	 * scratch entires, rather keep the scratch page in system memory on
 	 * platforms where 64K pages are needed for VRAM.
 	 */
-	flags = XE_BO_CREATE_PINNED_BIT;
+	flags = XE_BO_FLAG_PINNED;
 	if (ggtt->flags & XE_GGTT_FLAGS_64K)
-		flags |= XE_BO_CREATE_SYSTEM_BIT;
+		flags |= XE_BO_FLAG_SYSTEM;
 	else
-		flags |= XE_BO_CREATE_VRAM_IF_DGFX(ggtt->tile);
+		flags |= XE_BO_FLAG_VRAM_IF_DGFX(ggtt->tile);

 	ggtt->scratch = xe_managed_bo_create_pin_map(xe, ggtt->tile, XE_PAGE_SIZE, flags);
 	if (IS_ERR(ggtt->scratch)) {
@@ -249,51 +245,19 @@ int xe_ggtt_init(struct xe_ggtt *ggtt)
 	return err;
 }

-#define GUC_TLB_INV_CR				XE_REG(0xcee8)
-#define GUC_TLB_INV_CR_INVALIDATE		REG_BIT(0)
-#define PVC_GUC_TLB_INV_DESC0			XE_REG(0xcf7c)
-#define PVC_GUC_TLB_INV_DESC0_VALID		REG_BIT(0)
-#define PVC_GUC_TLB_INV_DESC1			XE_REG(0xcf80)
-#define PVC_GUC_TLB_INV_DESC1_INVALIDATE	REG_BIT(6)
-
 static void ggtt_invalidate_gt_tlb(struct xe_gt *gt)
 {
+	int err;
+
 	if (!gt)
 		return;

-	/*
-	 * Invalidation can happen when there's no in-flight work keeping the
-	 * GT awake. We need to explicitly grab forcewake to ensure the GT
-	 * and GuC are accessible.
-	 */
-	xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
-
-	/* TODO: vfunc for GuC vs. non-GuC */
-
-	if (gt->uc.guc.submission_state.enabled) {
-		int seqno;
-
-		seqno = xe_gt_tlb_invalidation_guc(gt);
-		xe_gt_assert(gt, seqno > 0);
-		if (seqno > 0)
-			xe_gt_tlb_invalidation_wait(gt, seqno);
-	} else if (xe_device_uc_enabled(gt_to_xe(gt))) {
-		struct xe_device *xe = gt_to_xe(gt);
-
-		if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
-			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC1,
-					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
-			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC0,
-					PVC_GUC_TLB_INV_DESC0_VALID);
-		} else
-			xe_mmio_write32(gt, GUC_TLB_INV_CR,
-					GUC_TLB_INV_CR_INVALIDATE);
-	}
-
-	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+	err = xe_gt_tlb_invalidation_ggtt(gt);
+	if (err)
+		drm_warn(&gt_to_xe(gt)->drm, "xe_gt_tlb_invalidation_ggtt error=%d", err);
 }

-void xe_ggtt_invalidate(struct xe_ggtt *ggtt)
+static void xe_ggtt_invalidate(struct xe_ggtt *ggtt)
 {
 	/* Each GT in a tile has its own TLB to cache GGTT lookups */
 	ggtt_invalidate_gt_tlb(ggtt->tile->primary_gt);
@@ -410,7 +374,7 @@ int xe_ggtt_insert_special_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,

 void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
 {
-	u16 cache_mode = bo->flags & XE_BO_NEEDS_UC ? XE_CACHE_NONE : XE_CACHE_WB;
+	u16 cache_mode = bo->flags & XE_BO_FLAG_NEEDS_UC ? XE_CACHE_NONE : XE_CACHE_WB;
 	u16 pat_index = tile_to_xe(ggtt->tile)->pat.idx[cache_mode];
 	u64 start = bo->ggtt_node.start;
 	u64 offset, pte;
@@ -419,8 +383,6 @@ void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
 		pte = ggtt->pt_ops->pte_encode_bo(bo, offset, pat_index);
 		xe_ggtt_set_pte(ggtt, start + offset, pte);
 	}
-
-	xe_ggtt_invalidate(ggtt);
 }

 static int __xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo,
@@ -442,14 +404,17 @@ static int __xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo,
 	if (err)
 		return err;

-	xe_device_mem_access_get(tile_to_xe(ggtt->tile));
+	xe_pm_runtime_get_noresume(tile_to_xe(ggtt->tile));
 	mutex_lock(&ggtt->lock);
 	err = drm_mm_insert_node_in_range(&ggtt->mm, &bo->ggtt_node, bo->size,
 					  alignment, 0, start, end, 0);
 	if (!err)
 		xe_ggtt_map_bo(ggtt, bo);
 	mutex_unlock(&ggtt->lock);
-	xe_device_mem_access_put(tile_to_xe(ggtt->tile));
+
+	if (!err && bo->flags & XE_BO_FLAG_GGTT_INVALIDATE)
+		xe_ggtt_invalidate(ggtt);
+	xe_pm_runtime_put(tile_to_xe(ggtt->tile));

 	return err;
 }
@@ -465,19 +430,21 @@ int xe_ggtt_insert_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
 	return __xe_ggtt_insert_bo_at(ggtt, bo, 0, U64_MAX);
 }

-void xe_ggtt_remove_node(struct xe_ggtt *ggtt, struct drm_mm_node *node)
+void xe_ggtt_remove_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,
+			 bool invalidate)
 {
-	xe_device_mem_access_get(tile_to_xe(ggtt->tile));
-	mutex_lock(&ggtt->lock);
+	xe_pm_runtime_get_noresume(tile_to_xe(ggtt->tile));

+	mutex_lock(&ggtt->lock);
 	xe_ggtt_clear(ggtt, node->start, node->size);
 	drm_mm_remove_node(node);
 	node->size = 0;
-
-	xe_ggtt_invalidate(ggtt);
-
 	mutex_unlock(&ggtt->lock);
-	xe_device_mem_access_put(tile_to_xe(ggtt->tile));
+
+	if (invalidate)
+		xe_ggtt_invalidate(ggtt);
+
+	xe_pm_runtime_put(tile_to_xe(ggtt->tile));
 }

 void xe_ggtt_remove_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
@@ -488,9 +455,54 @@ void xe_ggtt_remove_bo(struct xe_ggtt *ggtt, struct xe_bo *bo)
 	/* This BO is not currently in the GGTT */
 	xe_tile_assert(ggtt->tile, bo->ggtt_node.size == bo->size);

-	xe_ggtt_remove_node(ggtt, &bo->ggtt_node);
+	xe_ggtt_remove_node(ggtt, &bo->ggtt_node,
+			    bo->flags & XE_BO_FLAG_GGTT_INVALIDATE);
 }

+#ifdef CONFIG_PCI_IOV
+static u64 xe_encode_vfid_pte(u16 vfid)
+{
+	return FIELD_PREP(GGTT_PTE_VFID, vfid) | XE_PAGE_PRESENT;
+}
+
+static void xe_ggtt_assign_locked(struct xe_ggtt *ggtt, const struct drm_mm_node *node, u16 vfid)
+{
+	u64 start = node->start;
+	u64 size = node->size;
+	u64 end = start + size - 1;
+	u64 pte = xe_encode_vfid_pte(vfid);
+
+	lockdep_assert_held(&ggtt->lock);
+
+	if (!drm_mm_node_allocated(node))
+		return;
+
+	while (start < end) {
+		xe_ggtt_set_pte(ggtt, start, pte);
+		start += XE_PAGE_SIZE;
+	}
+
+	xe_ggtt_invalidate(ggtt);
+}
+
+/**
+ * xe_ggtt_assign - assign a GGTT region to the VF
+ * @ggtt: the &xe_ggtt where the node belongs
+ * @node: the &drm_mm_node to update
+ * @vfid: the VF identifier
+ *
+ * This function is used by the PF driver to assign a GGTT region to the VF.
+ * In addition to PTE's VFID bits 11:2 also PRESENT bit 0 is set as on some
+ * platforms VFs can't modify that either.
+ */
+void xe_ggtt_assign(struct xe_ggtt *ggtt, const struct drm_mm_node *node, u16 vfid)
+{
+	mutex_lock(&ggtt->lock);
+	xe_ggtt_assign_locked(ggtt, node, vfid);
+	mutex_unlock(&ggtt->lock);
+}
+#endif
+
 int xe_ggtt_dump(struct xe_ggtt *ggtt, struct drm_printer *p)
 {
 	int err;

@@ -11,7 +11,6 @@
 struct drm_printer;

 void xe_ggtt_set_pte(struct xe_ggtt *ggtt, u64 addr, u64 pte);
-void xe_ggtt_invalidate(struct xe_ggtt *ggtt);
 int xe_ggtt_init_early(struct xe_ggtt *ggtt);
 int xe_ggtt_init(struct xe_ggtt *ggtt);
 void xe_ggtt_printk(struct xe_ggtt *ggtt, const char *prefix);
@@ -24,7 +23,8 @@ int xe_ggtt_insert_special_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,
 int xe_ggtt_insert_special_node_locked(struct xe_ggtt *ggtt,
 				       struct drm_mm_node *node,
 				       u32 size, u32 align, u32 mm_flags);
-void xe_ggtt_remove_node(struct xe_ggtt *ggtt, struct drm_mm_node *node);
+void xe_ggtt_remove_node(struct xe_ggtt *ggtt, struct drm_mm_node *node,
+			 bool invalidate);
 void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo);
 int xe_ggtt_insert_bo(struct xe_ggtt *ggtt, struct xe_bo *bo);
 int xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo,
@@ -33,4 +33,8 @@ void xe_ggtt_remove_bo(struct xe_ggtt *ggtt, struct xe_bo *bo);

 int xe_ggtt_dump(struct xe_ggtt *ggtt, struct drm_printer *p);

+#ifdef CONFIG_PCI_IOV
+void xe_ggtt_assign(struct xe_ggtt *ggtt, const struct drm_mm_node *node, u16 vfid);
+#endif
+
 #endif

@@ -17,15 +17,18 @@
 #include "xe_gsc_proxy.h"
 #include "xe_gsc_submit.h"
 #include "xe_gt.h"
+#include "xe_gt_mcr.h"
 #include "xe_gt_printk.h"
 #include "xe_huc.h"
 #include "xe_map.h"
 #include "xe_mmio.h"
+#include "xe_pm.h"
 #include "xe_sched_job.h"
 #include "xe_uc_fw.h"
+#include "xe_wa.h"
 #include "instructions/xe_gsc_commands.h"
 #include "regs/xe_gsc_regs.h"
+#include "regs/xe_gt_regs.h"

 static struct xe_gt *
 gsc_to_gt(struct xe_gsc *gsc)
@@ -127,8 +130,8 @@ static int query_compatibility_version(struct xe_gsc *gsc)

 	bo = xe_bo_create_pin_map(xe, tile, NULL, GSC_VER_PKT_SZ * 2,
 				  ttm_bo_type_kernel,
-				  XE_BO_CREATE_SYSTEM_BIT |
-				  XE_BO_CREATE_GGTT_BIT);
+				  XE_BO_FLAG_SYSTEM |
+				  XE_BO_FLAG_GGTT);
 	if (IS_ERR(bo)) {
 		xe_gt_err(gt, "failed to allocate bo for GSC version query\n");
 		return PTR_ERR(bo);
@@ -250,9 +253,30 @@ static int gsc_upload(struct xe_gsc *gsc)
 static int gsc_upload_and_init(struct xe_gsc *gsc)
 {
 	struct xe_gt *gt = gsc_to_gt(gsc);
+	struct xe_tile *tile = gt_to_tile(gt);
 	int ret;

+	if (XE_WA(gt, 14018094691)) {
+		ret = xe_force_wake_get(gt_to_fw(tile->primary_gt), XE_FORCEWAKE_ALL);
+
+		/*
+		 * If the forcewake fails we want to keep going, because the worst
+		 * case outcome in failing to apply the WA is that PXP won't work,
+		 * which is not fatal. We still throw a warning so the issue is
+		 * seen if it happens.
+		 */
+		xe_gt_WARN_ON(tile->primary_gt, ret);
+
+		xe_gt_mcr_multicast_write(tile->primary_gt,
+					  EU_SYSTOLIC_LIC_THROTTLE_CTL_WITH_LOCK,
+					  EU_SYSTOLIC_LIC_THROTTLE_CTL_LOCK_BIT);
+	}
+
 	ret = gsc_upload(gsc);
+
+	if (XE_WA(gt, 14018094691))
+		xe_force_wake_put(gt_to_fw(tile->primary_gt), XE_FORCEWAKE_ALL);
+
 	if (ret)
 		return ret;

@@ -272,6 +296,44 @@ static int gsc_upload_and_init(struct xe_gsc *gsc)
 	return 0;
 }

+static int gsc_er_complete(struct xe_gt *gt)
+{
+	u32 er_status;
+
+	if (!gsc_fw_is_loaded(gt))
+		return 0;
+
+	/*
+	 * Starting on Xe2, the GSCCS engine reset is a 2-step process. When the
+	 * driver or the GuC hit the GDRST register, the CS is immediately reset
+	 * and a success is reported, but the GSC shim keeps resetting in the
+	 * background. While the shim reset is ongoing, the CS is able to accept
+	 * new context submission, but any commands that require the shim will
+	 * be stalled until the reset is completed. This means that we can keep
+	 * submitting to the GSCCS as long as we make sure that the preemption
+	 * timeout is big enough to cover any delay introduced by the reset.
+	 * When the shim reset completes, a specific CS interrupt is triggered,
+	 * in response to which we need to check the GSCI_TIMER_STATUS register
+	 * to see if the reset was successful or not.
+	 * Note that the GSCI_TIMER_STATUS register is not power save/restored,
+	 * so it gets reset on MC6 entry. However, a reset failure stops MC6,
+	 * so in that scenario we're always guaranteed to find the correct
+	 * value.
+	 */
+	er_status = xe_mmio_read32(gt, GSCI_TIMER_STATUS) & GSCI_TIMER_STATUS_VALUE;
+
+	if (er_status == GSCI_TIMER_STATUS_TIMER_EXPIRED) {
+		/*
+		 * XXX: we should trigger an FLR here, but we don't have support
+		 * for that yet.
+		 */
+		xe_gt_err(gt, "GSC ER timed out!\n");
+		return -EIO;
+	}
+
+	return 0;
+}
+
 static void gsc_work(struct work_struct *work)
 {
 	struct xe_gsc *gsc = container_of(work, typeof(*gsc), work);
@@ -285,8 +347,14 @@ static void gsc_work(struct work_struct *work)
 	gsc->work_actions = 0;
 	spin_unlock_irq(&gsc->lock);

-	xe_device_mem_access_get(xe);
-	xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC);
+	xe_pm_runtime_get(xe);
+	xe_gt_WARN_ON(gt, xe_force_wake_get(gt_to_fw(gt), XE_FW_GSC));
+
+	if (actions & GSC_ACTION_ER_COMPLETE) {
+		ret = gsc_er_complete(gt);
+		if (ret)
+			goto out;
+	}

 	if (actions & GSC_ACTION_FW_LOAD) {
 		ret = gsc_upload_and_init(gsc);
@@ -299,8 +367,26 @@ static void gsc_work(struct work_struct *work)
 	if (actions & GSC_ACTION_SW_PROXY)
 		xe_gsc_proxy_request_handler(gsc);

+out:
 	xe_force_wake_put(gt_to_fw(gt), XE_FW_GSC);
-	xe_device_mem_access_put(xe);
+	xe_pm_runtime_put(xe);
+}
+
+void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec)
+{
+	struct xe_gt *gt = hwe->gt;
+	struct xe_gsc *gsc = &gt->uc.gsc;
+
+	if (unlikely(!intr_vec))
+		return;
+
+	if (intr_vec & GSC_ER_COMPLETE) {
+		spin_lock(&gsc->lock);
+		gsc->work_actions |= GSC_ACTION_ER_COMPLETE;
+		spin_unlock(&gsc->lock);
+
+		queue_work(gsc->wq, &gsc->work);
+	}
 }

 int xe_gsc_init(struct xe_gsc *gsc)
@@ -382,8 +468,8 @@ int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc)

 	bo = xe_bo_create_pin_map(xe, tile, NULL, SZ_4M,
 				  ttm_bo_type_kernel,
-				  XE_BO_CREATE_STOLEN_BIT |
-				  XE_BO_CREATE_GGTT_BIT);
+				  XE_BO_FLAG_STOLEN |
+				  XE_BO_FLAG_GGTT);
 	if (IS_ERR(bo))
 		return PTR_ERR(bo);

@@ -9,12 +9,14 @@

 #include "xe_gsc_types.h"

 struct xe_gt;
+struct xe_hw_engine;

 int xe_gsc_init(struct xe_gsc *gsc);
 int xe_gsc_init_post_hwconfig(struct xe_gsc *gsc);
 void xe_gsc_wait_for_worker_completion(struct xe_gsc *gsc);
 void xe_gsc_load_start(struct xe_gsc *gsc);
 void xe_gsc_remove(struct xe_gsc *gsc);
+void xe_gsc_hwe_irq_handler(struct xe_hw_engine *hwe, u16 intr_vec);

 void xe_gsc_wa_14015076503(struct xe_gt *gt, bool prep);

@@ -66,7 +66,7 @@ static inline struct xe_device *kdev_to_xe(struct device *kdev)
 	return dev_get_drvdata(kdev);
 }

-static bool gsc_proxy_init_done(struct xe_gsc *gsc)
+bool xe_gsc_proxy_init_done(struct xe_gsc *gsc)
 {
 	struct xe_gt *gt = gsc_to_gt(gsc);
 	u32 fwsts1 = xe_mmio_read32(gt, HECI_FWSTS1(MTL_GSC_HECI1_BASE));
@@ -403,7 +403,6 @@ static int proxy_channel_alloc(struct xe_gsc *gsc)
 	struct xe_device *xe = gt_to_xe(gt);
 	struct xe_bo *bo;
 	void *csme;
-	int err;

 	csme = kzalloc(GSC_PROXY_CHANNEL_SIZE, GFP_KERNEL);
 	if (!csme)
@@ -411,8 +410,8 @@ static int proxy_channel_alloc(struct xe_gsc *gsc)

 	bo = xe_bo_create_pin_map(xe, tile, NULL, GSC_PROXY_CHANNEL_SIZE,
 				  ttm_bo_type_kernel,
-				  XE_BO_CREATE_SYSTEM_BIT |
-				  XE_BO_CREATE_GGTT_BIT);
+				  XE_BO_FLAG_SYSTEM |
+				  XE_BO_FLAG_GGTT);
 	if (IS_ERR(bo)) {
 		kfree(csme);
 		return PTR_ERR(bo);
@@ -424,11 +423,7 @@ static int proxy_channel_alloc(struct xe_gsc *gsc)
 	gsc->proxy.to_csme = csme;
 	gsc->proxy.from_csme = csme + GSC_PROXY_BUFFER_SIZE;

-	err = drmm_add_action_or_reset(&xe->drm, proxy_channel_free, gsc);
-	if (err)
-		return err;
-
-	return 0;
+	return drmm_add_action_or_reset(&xe->drm, proxy_channel_free, gsc);
 }

 /**
@@ -528,7 +523,7 @@ int xe_gsc_proxy_start(struct xe_gsc *gsc)
 	if (err)
 		return err;

-	if (!gsc_proxy_init_done(gsc)) {
+	if (!xe_gsc_proxy_init_done(gsc)) {
 		xe_gt_err(gsc_to_gt(gsc), "GSC FW reports proxy init not completed\n");
 		return -EIO;
 	}

@ -11,6 +11,7 @@
|
||||
struct xe_gsc;
|
||||
|
||||
int xe_gsc_proxy_init(struct xe_gsc *gsc);
|
||||
bool xe_gsc_proxy_init_done(struct xe_gsc *gsc);
|
||||
void xe_gsc_proxy_remove(struct xe_gsc *gsc);
|
||||
int xe_gsc_proxy_start(struct xe_gsc *gsc);
|
||||
|
||||
|
||||
@ -40,6 +40,21 @@ gsc_to_gt(struct xe_gsc *gsc)
|
||||
return container_of(gsc, struct xe_gt, uc.gsc);
|
||||
}
|
||||
|
||||
/**
|
||||
* xe_gsc_create_host_session_id - Creates a random 64 bit host_session id with
|
||||
* bits 56-63 masked.
|
||||
*
|
||||
* Returns: random host_session_id which can be used to send messages to gsc cs
|
||||
*/
|
||||
u64 xe_gsc_create_host_session_id(void)
|
||||
{
|
||||
u64 host_session_id;
|
||||
|
||||
get_random_bytes(&host_session_id, sizeof(u64));
|
||||
host_session_id &= ~HOST_SESSION_CLIENT_MASK;
|
||||
return host_session_id;
|
||||
}
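As a usage illustration only (the helper below is hypothetical, not part of this series): a caller would generate the ID once per host session and reuse it for every message belonging to that session, relying on the client bits staying clear.

/* Hypothetical sketch, not from this series. */
static u64 example_host_session_setup(void)
{
	u64 id = xe_gsc_create_host_session_id();

	/* bits 56-63 (HOST_SESSION_CLIENT_MASK) are guaranteed to be zero */
	WARN_ON(id & HOST_SESSION_CLIENT_MASK);

	return id;
}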

/**
 * xe_gsc_emit_header - write the MTL GSC header in memory
 * @xe: the Xe device

@@ -28,4 +28,5 @@ int xe_gsc_read_out_header(struct xe_device *xe,
int xe_gsc_pkt_submit_kernel(struct xe_gsc *gsc, u64 addr_in, u32 size_in,
			     u64 addr_out, u32 size_out);

+u64 xe_gsc_create_host_session_id(void);
#endif

@@ -47,6 +47,7 @@ struct xe_gsc {
	u32 work_actions;
#define GSC_ACTION_FW_LOAD BIT(0)
#define GSC_ACTION_SW_PROXY BIT(1)
+#define GSC_ACTION_ER_COMPLETE BIT(2)

	/** @proxy: sub-structure containing the SW proxy-related variables */
	struct {

@@ -29,6 +29,7 @@
#include "xe_gt_mcr.h"
#include "xe_gt_pagefault.h"
#include "xe_gt_printk.h"
+#include "xe_gt_sriov_pf.h"
#include "xe_gt_sysfs.h"
#include "xe_gt_tlb_invalidation.h"
#include "xe_gt_topology.h"
@@ -43,6 +44,7 @@
#include "xe_migrate.h"
#include "xe_mmio.h"
#include "xe_pat.h"
+#include "xe_pm.h"
#include "xe_mocs.h"
#include "xe_reg_sr.h"
#include "xe_ring_ops.h"
@@ -310,6 +312,12 @@ int xe_gt_init_early(struct xe_gt *gt)
{
	int err;

+	if (IS_SRIOV_PF(gt_to_xe(gt))) {
+		err = xe_gt_sriov_pf_init_early(gt);
+		if (err)
+			return err;
+	}

	err = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
	if (err)
		return err;
@@ -346,7 +354,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)
{
	int err, i;

-	xe_device_mem_access_get(gt_to_xe(gt));
	err = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
	if (err)
		goto err_hw_fence_irq;
@@ -359,7 +366,9 @@ static int gt_fw_domain_init(struct xe_gt *gt)
		xe_lmtt_init(&gt_to_tile(gt)->sriov.pf.lmtt);
	}

-	xe_gt_idle_sysfs_init(&gt->gtidle);
+	err = xe_gt_idle_sysfs_init(&gt->gtidle);
+	if (err)
+		goto err_force_wake;

	/* Enable per hw engine IRQs */
	xe_irq_enable_hwe(gt);
@@ -373,12 +382,12 @@ static int gt_fw_domain_init(struct xe_gt *gt)

	err = xe_hw_engine_class_sysfs_init(gt);
	if (err)
-		drm_warn(&gt_to_xe(gt)->drm,
-			 "failed to register engines sysfs directory, err: %d\n",
-			 err);
+		goto err_force_wake;

	/* Initialize CCS mode sysfs after early initialization of HW engines */
-	xe_gt_ccs_mode_sysfs_init(gt);
+	err = xe_gt_ccs_mode_sysfs_init(gt);
+	if (err)
+		goto err_force_wake;

	/*
	 * Stash hardware-reported version. Since this register does not exist
@@ -388,7 +397,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)

	err = xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
	XE_WARN_ON(err);
-	xe_device_mem_access_put(gt_to_xe(gt));

	return 0;

@@ -398,7 +406,6 @@ static int gt_fw_domain_init(struct xe_gt *gt)
err_hw_fence_irq:
	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
-	xe_device_mem_access_put(gt_to_xe(gt));

	return err;
}
@@ -407,7 +414,6 @@ static int all_fw_domain_init(struct xe_gt *gt)
{
	int err, i;

-	xe_device_mem_access_get(gt_to_xe(gt));
	err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
	if (err)
		goto err_hw_fence_irq;
@@ -473,7 +479,6 @@ static int all_fw_domain_init(struct xe_gt *gt)

	err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
	XE_WARN_ON(err);
-	xe_device_mem_access_put(gt_to_xe(gt));

	return 0;

@@ -482,7 +487,6 @@ static int all_fw_domain_init(struct xe_gt *gt)
err_hw_fence_irq:
	for (i = 0; i < XE_ENGINE_CLASS_MAX; ++i)
		xe_hw_fence_irq_finish(&gt->fence_irq[i]);
-	xe_device_mem_access_put(gt_to_xe(gt));

	return err;
}
@@ -495,7 +499,6 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
{
	int err;

-	xe_device_mem_access_get(gt_to_xe(gt));
	err = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
	if (err)
		goto out;
@@ -518,8 +521,6 @@ int xe_gt_init_hwconfig(struct xe_gt *gt)
out_fw:
	xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
-out:
-	xe_device_mem_access_put(gt_to_xe(gt));

	return err;
}

@@ -545,13 +546,17 @@ int xe_gt_init(struct xe_gt *gt)

	xe_mocs_init_early(gt);

-	xe_gt_sysfs_init(gt);
+	err = xe_gt_sysfs_init(gt);
+	if (err)
+		return err;

	err = gt_fw_domain_init(gt);
	if (err)
		return err;

-	xe_gt_freq_init(gt);
+	err = xe_gt_freq_init(gt);
+	if (err)
+		return err;

	xe_force_wake_init_engines(gt, gt_to_fw(gt));

@@ -559,11 +564,7 @@ int xe_gt_init(struct xe_gt *gt)
	if (err)
		return err;

-	err = drmm_add_action_or_reset(&gt_to_xe(gt)->drm, gt_fini, gt);
-	if (err)
-		return err;
-
-	return 0;
+	return drmm_add_action_or_reset(&gt_to_xe(gt)->drm, gt_fini, gt);
}

static int do_gt_reset(struct xe_gt *gt)
@@ -643,9 +644,9 @@ static int gt_reset(struct xe_gt *gt)
		goto err_fail;
	}

+	xe_pm_runtime_get(gt_to_xe(gt));
	xe_gt_sanitize(gt);

-	xe_device_mem_access_get(gt_to_xe(gt));
	err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
	if (err)
		goto err_msg;
@@ -669,8 +670,8 @@ static int gt_reset(struct xe_gt *gt)
		goto err_out;

	err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-	xe_device_mem_access_put(gt_to_xe(gt));
	XE_WARN_ON(err);
+	xe_pm_runtime_put(gt_to_xe(gt));

	xe_gt_info(gt, "reset done\n");

@@ -680,7 +681,7 @@ static int gt_reset(struct xe_gt *gt)
	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
err_msg:
	XE_WARN_ON(xe_uc_start(&gt->uc));
-	xe_device_mem_access_put(gt_to_xe(gt));
+	xe_pm_runtime_put(gt_to_xe(gt));
err_fail:
	xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err));

@@ -710,22 +711,20 @@ void xe_gt_reset_async(struct xe_gt *gt)

void xe_gt_suspend_prepare(struct xe_gt *gt)
{
-	xe_device_mem_access_get(gt_to_xe(gt));
	XE_WARN_ON(xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL));

	xe_uc_stop_prepare(&gt->uc);

	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
-	xe_device_mem_access_put(gt_to_xe(gt));
}

int xe_gt_suspend(struct xe_gt *gt)
{
	int err;

+	xe_gt_dbg(gt, "suspending\n");
	xe_gt_sanitize(gt);

-	xe_device_mem_access_get(gt_to_xe(gt));
	err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
	if (err)
		goto err_msg;
@@ -735,15 +734,13 @@ int xe_gt_suspend(struct xe_gt *gt)
		goto err_force_wake;

	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
-	xe_device_mem_access_put(gt_to_xe(gt));
-	xe_gt_info(gt, "suspended\n");
+	xe_gt_dbg(gt, "suspended\n");

	return 0;

err_force_wake:
	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
err_msg:
-	xe_device_mem_access_put(gt_to_xe(gt));
	xe_gt_err(gt, "suspend failed (%pe)\n", ERR_PTR(err));

	return err;
@@ -753,7 +750,7 @@ int xe_gt_resume(struct xe_gt *gt)
{
	int err;

-	xe_device_mem_access_get(gt_to_xe(gt));
+	xe_gt_dbg(gt, "resuming\n");
	err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
	if (err)
		goto err_msg;
@@ -763,15 +760,13 @@ int xe_gt_resume(struct xe_gt *gt)
		goto err_force_wake;

	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
-	xe_device_mem_access_put(gt_to_xe(gt));
-	xe_gt_info(gt, "resumed\n");
+	xe_gt_dbg(gt, "resumed\n");

	return 0;

err_force_wake:
	XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL));
err_msg:
-	xe_device_mem_access_put(gt_to_xe(gt));
	xe_gt_err(gt, "resume failed (%pe)\n", ERR_PTR(err));

	return err;

@@ -167,25 +167,20 @@ static void xe_gt_ccs_mode_sysfs_fini(struct drm_device *drm, void *arg)
 * and it is expected that there are no open drm clients while doing so.
 * The number of available compute slices is exposed to user through a per-gt
 * 'num_cslices' sysfs interface.
+ *
+ * Returns: Returns error value for failure and 0 for success.
 */
-void xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt)
+int xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt)
{
	struct xe_device *xe = gt_to_xe(gt);
	int err;

	if (!xe_gt_ccs_mode_enabled(gt))
-		return;
+		return 0;

	err = sysfs_create_files(gt->sysfs, gt_ccs_mode_attrs);
-	if (err) {
-		drm_warn(&xe->drm, "Sysfs creation for ccs_mode failed err: %d\n", err);
-		return;
-	}
+	if (err)
+		return err;

-	err = drmm_add_action_or_reset(&xe->drm, xe_gt_ccs_mode_sysfs_fini, gt);
-	if (err) {
-		sysfs_remove_files(gt->sysfs, gt_ccs_mode_attrs);
-		drm_warn(&xe->drm, "%s: drmm_add_action_or_reset failed, err: %d\n",
-			 __func__, err);
-	}
+	return drmm_add_action_or_reset(&xe->drm, xe_gt_ccs_mode_sysfs_fini, gt);
}

@@ -12,7 +12,7 @@
#include "xe_platform_types.h"

void xe_gt_apply_ccs_mode(struct xe_gt *gt);
-void xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt);
+int xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt);

static inline bool xe_gt_ccs_mode_enabled(const struct xe_gt *gt)
{

@@ -78,8 +78,3 @@ int xe_gt_clock_init(struct xe_gt *gt)
	gt->info.reference_clock = freq;
	return 0;
}
-
-u64 xe_gt_clock_cycles_to_ns(const struct xe_gt *gt, u64 count)
-{
-	return DIV_ROUND_CLOSEST_ULL(count * NSEC_PER_SEC, gt->info.reference_clock);
-}

@@ -11,5 +11,5 @@
struct xe_gt;

int xe_gt_clock_init(struct xe_gt *gt);
-u64 xe_gt_clock_cycles_to_ns(const struct xe_gt *gt, u64 count);

#endif

@@ -18,193 +18,246 @@
#include "xe_lrc.h"
#include "xe_macros.h"
#include "xe_pat.h"
+#include "xe_pm.h"
#include "xe_reg_sr.h"
#include "xe_reg_whitelist.h"
#include "xe_uc_debugfs.h"
#include "xe_wa.h"

-static struct xe_gt *node_to_gt(struct drm_info_node *node)
+/**
+ * xe_gt_debugfs_simple_show - A show callback for struct drm_info_list
+ * @m: the &seq_file
+ * @data: data used by the drm debugfs helpers
+ *
+ * This callback can be used in struct drm_info_list to describe debugfs
+ * files that are &xe_gt specific.
+ *
+ * It is assumed that those debugfs files will be created on directory entry
+ * which struct dentry d_inode->i_private points to &xe_gt.
+ *
+ * This function assumes that &m->private will be set to the &struct
+ * drm_info_node corresponding to the instance of the info on a given &struct
+ * drm_minor (see struct drm_info_list.show for details).
+ *
+ * This function also assumes that struct drm_info_list.data will point to the
+ * function code that will actually print a file content::
+ *
+ *	int (*print)(struct xe_gt *, struct drm_printer *)
+ *
+ * Example::
+ *
+ *	int foo(struct xe_gt *gt, struct drm_printer *p)
+ *	{
+ *		drm_printf(p, "GT%u\n", gt->info.id);
+ *		return 0;
+ *	}
+ *
+ *	static const struct drm_info_list bar[] = {
+ *		{ name = "foo", .show = xe_gt_debugfs_simple_show, .data = foo },
+ *	};
+ *
+ *	dir = debugfs_create_dir("gt", parent);
+ *	dir->d_inode->i_private = gt;
+ *	drm_debugfs_create_files(bar, ARRAY_SIZE(bar), dir, minor);
+ *
+ * Return: 0 on success or a negative error code on failure.
+ */
+int xe_gt_debugfs_simple_show(struct seq_file *m, void *data)
{
-	return node->info_ent->data;
+	struct drm_printer p = drm_seq_file_printer(m);
+	struct drm_info_node *node = m->private;
+	struct dentry *parent = node->dent->d_parent;
+	struct xe_gt *gt = parent->d_inode->i_private;
+	int (*print)(struct xe_gt *, struct drm_printer *) = node->info_ent->data;
+
+	if (WARN_ON(!print))
+		return -EINVAL;
+
+	return print(gt, &p);
}

-static int hw_engines(struct seq_file *m, void *data)
+static int hw_engines(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
	struct xe_device *xe = gt_to_xe(gt);
-	struct drm_printer p = drm_seq_file_printer(m);
	struct xe_hw_engine *hwe;
	enum xe_hw_engine_id id;
	int err;

-	xe_device_mem_access_get(xe);
+	xe_pm_runtime_get(xe);
	err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
	if (err) {
-		xe_device_mem_access_put(xe);
+		xe_pm_runtime_put(xe);
		return err;
	}

	for_each_hw_engine(hwe, gt, id)
-		xe_hw_engine_print(hwe, &p);
+		xe_hw_engine_print(hwe, p);

	err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
-	xe_device_mem_access_put(xe);
+	xe_pm_runtime_put(xe);
	if (err)
		return err;

	return 0;
}

-static int force_reset(struct seq_file *m, void *data)
+static int force_reset(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-
+	xe_pm_runtime_get(gt_to_xe(gt));
	xe_gt_reset_async(gt);
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int sa_info(struct seq_file *m, void *data)
+static int sa_info(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_tile *tile = gt_to_tile(node_to_gt(m->private));
-	struct drm_printer p = drm_seq_file_printer(m);
+	struct xe_tile *tile = gt_to_tile(gt);

-	drm_suballoc_dump_debug_info(&tile->mem.kernel_bb_pool->base, &p,
+	xe_pm_runtime_get(gt_to_xe(gt));
+	drm_suballoc_dump_debug_info(&tile->mem.kernel_bb_pool->base, p,
				     tile->mem.kernel_bb_pool->gpu_addr);
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int topology(struct seq_file *m, void *data)
+static int topology(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-	struct drm_printer p = drm_seq_file_printer(m);
-
-	xe_gt_topology_dump(gt, &p);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_gt_topology_dump(gt, p);
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int steering(struct seq_file *m, void *data)
+static int steering(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-	struct drm_printer p = drm_seq_file_printer(m);
-
-	xe_gt_mcr_steering_dump(gt, &p);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_gt_mcr_steering_dump(gt, p);
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int ggtt(struct seq_file *m, void *data)
+static int ggtt(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-	struct drm_printer p = drm_seq_file_printer(m);
+	int ret;

-	return xe_ggtt_dump(gt_to_tile(gt)->mem.ggtt, &p);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	ret = xe_ggtt_dump(gt_to_tile(gt)->mem.ggtt, p);
+	xe_pm_runtime_put(gt_to_xe(gt));
+
+	return ret;
}

-static int register_save_restore(struct seq_file *m, void *data)
+static int register_save_restore(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-	struct drm_printer p = drm_seq_file_printer(m);
	struct xe_hw_engine *hwe;
	enum xe_hw_engine_id id;

-	xe_reg_sr_dump(&gt->reg_sr, &p);
-	drm_printf(&p, "\n");
+	xe_pm_runtime_get(gt_to_xe(gt));

-	drm_printf(&p, "Engine\n");
+	xe_reg_sr_dump(&gt->reg_sr, p);
+	drm_printf(p, "\n");
+
+	drm_printf(p, "Engine\n");
	for_each_hw_engine(hwe, gt, id)
-		xe_reg_sr_dump(&hwe->reg_sr, &p);
-	drm_printf(&p, "\n");
+		xe_reg_sr_dump(&hwe->reg_sr, p);
+	drm_printf(p, "\n");

-	drm_printf(&p, "LRC\n");
+	drm_printf(p, "LRC\n");
	for_each_hw_engine(hwe, gt, id)
-		xe_reg_sr_dump(&hwe->reg_lrc, &p);
-	drm_printf(&p, "\n");
+		xe_reg_sr_dump(&hwe->reg_lrc, p);
+	drm_printf(p, "\n");

-	drm_printf(&p, "Whitelist\n");
+	drm_printf(p, "Whitelist\n");
	for_each_hw_engine(hwe, gt, id)
-		xe_reg_whitelist_dump(&hwe->reg_whitelist, &p);
+		xe_reg_whitelist_dump(&hwe->reg_whitelist, p);
+
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int workarounds(struct seq_file *m, void *data)
+static int workarounds(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-	struct drm_printer p = drm_seq_file_printer(m);
-
-	xe_wa_dump(gt, &p);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_wa_dump(gt, p);
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int pat(struct seq_file *m, void *data)
+static int pat(struct xe_gt *gt, struct drm_printer *p)
{
-	struct xe_gt *gt = node_to_gt(m->private);
-	struct drm_printer p = drm_seq_file_printer(m);
-
-	xe_pat_dump(gt, &p);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_pat_dump(gt, p);
+	xe_pm_runtime_put(gt_to_xe(gt));

	return 0;
}

-static int rcs_default_lrc(struct seq_file *m, void *data)
+static int rcs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
{
-	struct drm_printer p = drm_seq_file_printer(m);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_lrc_dump_default(p, gt, XE_ENGINE_CLASS_RENDER);
+	xe_pm_runtime_put(gt_to_xe(gt));

-	xe_lrc_dump_default(&p, node_to_gt(m->private), XE_ENGINE_CLASS_RENDER);
	return 0;
}

-static int ccs_default_lrc(struct seq_file *m, void *data)
+static int ccs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
{
-	struct drm_printer p = drm_seq_file_printer(m);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_lrc_dump_default(p, gt, XE_ENGINE_CLASS_COMPUTE);
+	xe_pm_runtime_put(gt_to_xe(gt));

-	xe_lrc_dump_default(&p, node_to_gt(m->private), XE_ENGINE_CLASS_COMPUTE);
	return 0;
}

-static int bcs_default_lrc(struct seq_file *m, void *data)
+static int bcs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
{
-	struct drm_printer p = drm_seq_file_printer(m);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_lrc_dump_default(p, gt, XE_ENGINE_CLASS_COPY);
+	xe_pm_runtime_put(gt_to_xe(gt));

-	xe_lrc_dump_default(&p, node_to_gt(m->private), XE_ENGINE_CLASS_COPY);
	return 0;
}

-static int vcs_default_lrc(struct seq_file *m, void *data)
+static int vcs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
{
-	struct drm_printer p = drm_seq_file_printer(m);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_lrc_dump_default(p, gt, XE_ENGINE_CLASS_VIDEO_DECODE);
+	xe_pm_runtime_put(gt_to_xe(gt));

-	xe_lrc_dump_default(&p, node_to_gt(m->private), XE_ENGINE_CLASS_VIDEO_DECODE);
	return 0;
}

-static int vecs_default_lrc(struct seq_file *m, void *data)
+static int vecs_default_lrc(struct xe_gt *gt, struct drm_printer *p)
{
-	struct drm_printer p = drm_seq_file_printer(m);
+	xe_pm_runtime_get(gt_to_xe(gt));
+	xe_lrc_dump_default(p, gt, XE_ENGINE_CLASS_VIDEO_ENHANCE);
+	xe_pm_runtime_put(gt_to_xe(gt));

-	xe_lrc_dump_default(&p, node_to_gt(m->private), XE_ENGINE_CLASS_VIDEO_ENHANCE);
	return 0;
}

static const struct drm_info_list debugfs_list[] = {
-	{"hw_engines", hw_engines, 0},
-	{"force_reset", force_reset, 0},
-	{"sa_info", sa_info, 0},
-	{"topology", topology, 0},
-	{"steering", steering, 0},
-	{"ggtt", ggtt, 0},
-	{"register-save-restore", register_save_restore, 0},
-	{"workarounds", workarounds, 0},
-	{"pat", pat, 0},
-	{"default_lrc_rcs", rcs_default_lrc},
-	{"default_lrc_ccs", ccs_default_lrc},
-	{"default_lrc_bcs", bcs_default_lrc},
-	{"default_lrc_vcs", vcs_default_lrc},
-	{"default_lrc_vecs", vecs_default_lrc},
+	{"hw_engines", .show = xe_gt_debugfs_simple_show, .data = hw_engines},
+	{"force_reset", .show = xe_gt_debugfs_simple_show, .data = force_reset},
+	{"sa_info", .show = xe_gt_debugfs_simple_show, .data = sa_info},
+	{"topology", .show = xe_gt_debugfs_simple_show, .data = topology},
+	{"steering", .show = xe_gt_debugfs_simple_show, .data = steering},
+	{"ggtt", .show = xe_gt_debugfs_simple_show, .data = ggtt},
+	{"register-save-restore", .show = xe_gt_debugfs_simple_show, .data = register_save_restore},
+	{"workarounds", .show = xe_gt_debugfs_simple_show, .data = workarounds},
+	{"pat", .show = xe_gt_debugfs_simple_show, .data = pat},
+	{"default_lrc_rcs", .show = xe_gt_debugfs_simple_show, .data = rcs_default_lrc},
+	{"default_lrc_ccs", .show = xe_gt_debugfs_simple_show, .data = ccs_default_lrc},
+	{"default_lrc_bcs", .show = xe_gt_debugfs_simple_show, .data = bcs_default_lrc},
+	{"default_lrc_vcs", .show = xe_gt_debugfs_simple_show, .data = vcs_default_lrc},
+	{"default_lrc_vecs", .show = xe_gt_debugfs_simple_show, .data = vecs_default_lrc},
};

void xe_gt_debugfs_register(struct xe_gt *gt)
@@ -212,13 +265,11 @@ void xe_gt_debugfs_register(struct xe_gt *gt)
	struct xe_device *xe = gt_to_xe(gt);
	struct drm_minor *minor = gt_to_xe(gt)->drm.primary;
	struct dentry *root;
-	struct drm_info_list *local;
	char name[8];
-	int i;

	xe_gt_assert(gt, minor->debugfs_root);

-	sprintf(name, "gt%d", gt->info.id);
+	snprintf(name, sizeof(name), "gt%d", gt->info.id);
	root = debugfs_create_dir(name, minor->debugfs_root);
	if (IS_ERR(root)) {
		drm_warn(&xe->drm, "Create GT directory failed");
@@ -226,22 +277,13 @@ void xe_gt_debugfs_register(struct xe_gt *gt)
	}

	/*
-	 * Allocate local copy as we need to pass in the GT to the debugfs
-	 * entry and drm_debugfs_create_files just references the drm_info_list
-	 * passed in (e.g. can't define this on the stack).
+	 * Store the xe_gt pointer as private data of the gt/ directory node
+	 * so other GT specific attributes under that directory may refer to
+	 * it by looking at its parent node private data.
	 */
-#define DEBUGFS_SIZE	(ARRAY_SIZE(debugfs_list) * sizeof(struct drm_info_list))
-	local = drmm_kmalloc(&xe->drm, DEBUGFS_SIZE, GFP_KERNEL);
-	if (!local)
-		return;
+	root->d_inode->i_private = gt;

-	memcpy(local, debugfs_list, DEBUGFS_SIZE);
-#undef DEBUGFS_SIZE
-
-	for (i = 0; i < ARRAY_SIZE(debugfs_list); ++i)
-		local[i].data = gt;
-
-	drm_debugfs_create_files(local,
+	drm_debugfs_create_files(debugfs_list,
				 ARRAY_SIZE(debugfs_list),
				 root, minor);

@@ -6,8 +6,10 @@
#ifndef _XE_GT_DEBUGFS_H_
#define _XE_GT_DEBUGFS_H_

+struct seq_file;
struct xe_gt;

void xe_gt_debugfs_register(struct xe_gt *gt);
+int xe_gt_debugfs_simple_show(struct seq_file *m, void *data);

#endif

@@ -15,6 +15,7 @@
#include "xe_gt_sysfs.h"
#include "xe_gt_throttle_sysfs.h"
#include "xe_guc_pc.h"
+#include "xe_pm.h"

/**
 * DOC: Xe GT Frequency Management
@@ -49,12 +50,23 @@ dev_to_pc(struct device *dev)
	return &kobj_to_gt(dev->kobj.parent)->uc.guc.pc;
}

+static struct xe_device *
+dev_to_xe(struct device *dev)
+{
+	return gt_to_xe(kobj_to_gt(dev->kobj.parent));
+}
+
static ssize_t act_freq_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct xe_guc_pc *pc = dev_to_pc(dev);
+	u32 freq;

-	return sysfs_emit(buf, "%d\n", xe_guc_pc_get_act_freq(pc));
+	xe_pm_runtime_get(dev_to_xe(dev));
+	freq = xe_guc_pc_get_act_freq(pc);
+	xe_pm_runtime_put(dev_to_xe(dev));
+
+	return sysfs_emit(buf, "%d\n", freq);
}
static DEVICE_ATTR_RO(act_freq);

@@ -65,7 +77,9 @@ static ssize_t cur_freq_show(struct device *dev,
	u32 freq;
	ssize_t ret;

+	xe_pm_runtime_get(dev_to_xe(dev));
	ret = xe_guc_pc_get_cur_freq(pc, &freq);
+	xe_pm_runtime_put(dev_to_xe(dev));
	if (ret)
		return ret;

@@ -77,8 +91,13 @@ static ssize_t rp0_freq_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct xe_guc_pc *pc = dev_to_pc(dev);
+	u32 freq;

-	return sysfs_emit(buf, "%d\n", xe_guc_pc_get_rp0_freq(pc));
+	xe_pm_runtime_get(dev_to_xe(dev));
+	freq = xe_guc_pc_get_rp0_freq(pc);
+	xe_pm_runtime_put(dev_to_xe(dev));
+
+	return sysfs_emit(buf, "%d\n", freq);
}
static DEVICE_ATTR_RO(rp0_freq);

@@ -86,8 +105,13 @@ static ssize_t rpe_freq_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct xe_guc_pc *pc = dev_to_pc(dev);
+	u32 freq;

-	return sysfs_emit(buf, "%d\n", xe_guc_pc_get_rpe_freq(pc));
+	xe_pm_runtime_get(dev_to_xe(dev));
+	freq = xe_guc_pc_get_rpe_freq(pc);
+	xe_pm_runtime_put(dev_to_xe(dev));
+
+	return sysfs_emit(buf, "%d\n", freq);
}
static DEVICE_ATTR_RO(rpe_freq);

@@ -107,7 +131,9 @@ static ssize_t min_freq_show(struct device *dev,
	u32 freq;
	ssize_t ret;

+	xe_pm_runtime_get(dev_to_xe(dev));
	ret = xe_guc_pc_get_min_freq(pc, &freq);
+	xe_pm_runtime_put(dev_to_xe(dev));
	if (ret)
		return ret;

@@ -125,7 +151,9 @@ static ssize_t min_freq_store(struct device *dev, struct device_attribute *attr,
	if (ret)
		return ret;

+	xe_pm_runtime_get(dev_to_xe(dev));
	ret = xe_guc_pc_set_min_freq(pc, freq);
+	xe_pm_runtime_put(dev_to_xe(dev));
	if (ret)
		return ret;

@@ -140,7 +168,9 @@ static ssize_t max_freq_show(struct device *dev,
	u32 freq;
	ssize_t ret;

+	xe_pm_runtime_get(dev_to_xe(dev));
	ret = xe_guc_pc_get_max_freq(pc, &freq);
+	xe_pm_runtime_put(dev_to_xe(dev));
	if (ret)
		return ret;

@@ -158,7 +188,9 @@ static ssize_t max_freq_store(struct device *dev, struct device_attribute *attr,
	if (ret)
		return ret;

+	xe_pm_runtime_get(dev_to_xe(dev));
	ret = xe_guc_pc_set_max_freq(pc, freq);
+	xe_pm_runtime_put(dev_to_xe(dev));
	if (ret)
		return ret;

@@ -190,33 +222,28 @@ static void freq_fini(struct drm_device *drm, void *arg)
 * @gt: Xe GT object
 *
 * It needs to be initialized after GT Sysfs and GuC PC components are ready.
+ *
+ * Returns: Returns error value for failure and 0 for success.
 */
-void xe_gt_freq_init(struct xe_gt *gt)
+int xe_gt_freq_init(struct xe_gt *gt)
{
	struct xe_device *xe = gt_to_xe(gt);
	int err;

	if (xe->info.skip_guc_pc)
-		return;
+		return 0;

	gt->freq = kobject_create_and_add("freq0", gt->sysfs);
-	if (!gt->freq) {
-		drm_warn(&xe->drm, "failed to add freq0 directory to %s\n",
-			 kobject_name(gt->sysfs));
-		return;
-	}
+	if (!gt->freq)
+		return -ENOMEM;

	err = drmm_add_action_or_reset(&xe->drm, freq_fini, gt->freq);
-	if (err) {
-		drm_warn(&xe->drm, "%s: drmm_add_action_or_reset failed, err: %d\n",
-			 __func__, err);
-		return;
-	}
+	if (err)
+		return err;

	err = sysfs_create_files(gt->freq, freq_attrs);
	if (err)
-		drm_warn(&xe->drm, "failed to add freq attrs to %s, err: %d\n",
-			 kobject_name(gt->freq), err);
+		return err;

-	xe_gt_throttle_sysfs_init(gt);
+	return xe_gt_throttle_sysfs_init(gt);
}

@@ -8,6 +8,6 @@

struct xe_gt;

-void xe_gt_freq_init(struct xe_gt *gt);
+int xe_gt_freq_init(struct xe_gt *gt);

#endif

@@ -12,6 +12,7 @@
#include "xe_guc_pc.h"
#include "regs/xe_gt_regs.h"
#include "xe_mmio.h"
+#include "xe_pm.h"

/**
 * DOC: Xe GT Idle
@@ -40,6 +41,15 @@ static struct xe_guc_pc *gtidle_to_pc(struct xe_gt_idle *gtidle)
	return &gtidle_to_gt(gtidle)->uc.guc.pc;
}

+static struct xe_device *
+pc_to_xe(struct xe_guc_pc *pc)
+{
+	struct xe_guc *guc = container_of(pc, struct xe_guc, pc);
+	struct xe_gt *gt = container_of(guc, struct xe_gt, uc.guc);
+
+	return gt_to_xe(gt);
+}
+
static const char *gt_idle_state_to_string(enum xe_gt_idle_state state)
{
	switch (state) {
@@ -86,8 +96,14 @@ static ssize_t name_show(struct device *dev,
			 struct device_attribute *attr, char *buff)
{
	struct xe_gt_idle *gtidle = dev_to_gtidle(dev);
+	struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
+	ssize_t ret;

-	return sysfs_emit(buff, "%s\n", gtidle->name);
+	xe_pm_runtime_get(pc_to_xe(pc));
+	ret = sysfs_emit(buff, "%s\n", gtidle->name);
+	xe_pm_runtime_put(pc_to_xe(pc));
+
+	return ret;
}
static DEVICE_ATTR_RO(name);

@@ -98,7 +114,9 @@ static ssize_t idle_status_show(struct device *dev,
	struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
	enum xe_gt_idle_state state;

+	xe_pm_runtime_get(pc_to_xe(pc));
	state = gtidle->idle_status(pc);
+	xe_pm_runtime_put(pc_to_xe(pc));

	return sysfs_emit(buff, "%s\n", gt_idle_state_to_string(state));
}
@@ -111,7 +129,10 @@ static ssize_t idle_residency_ms_show(struct device *dev,
	struct xe_guc_pc *pc = gtidle_to_pc(gtidle);
	u64 residency;

+	xe_pm_runtime_get(pc_to_xe(pc));
	residency = gtidle->idle_residency(pc);
+	xe_pm_runtime_put(pc_to_xe(pc));
+
	return sysfs_emit(buff, "%llu\n", get_residency_ms(gtidle, residency));
}
static DEVICE_ATTR_RO(idle_residency_ms);
@@ -131,7 +152,7 @@ static void gt_idle_sysfs_fini(struct drm_device *drm, void *arg)
	kobject_put(kobj);
}

-void xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle)
+int xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle)
{
	struct xe_gt *gt = gtidle_to_gt(gtidle);
	struct xe_device *xe = gt_to_xe(gt);
@@ -139,16 +160,14 @@ void xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle)
	int err;

	kobj = kobject_create_and_add("gtidle", gt->sysfs);
-	if (!kobj) {
-		drm_warn(&xe->drm, "%s failed, err: %d\n", __func__, -ENOMEM);
-		return;
-	}
+	if (!kobj)
+		return -ENOMEM;

	if (xe_gt_is_media_type(gt)) {
-		sprintf(gtidle->name, "gt%d-mc", gt->info.id);
+		snprintf(gtidle->name, sizeof(gtidle->name), "gt%d-mc", gt->info.id);
		gtidle->idle_residency = xe_guc_pc_mc6_residency;
	} else {
-		sprintf(gtidle->name, "gt%d-rc", gt->info.id);
+		snprintf(gtidle->name, sizeof(gtidle->name), "gt%d-rc", gt->info.id);
		gtidle->idle_residency = xe_guc_pc_rc6_residency;
	}

@@ -159,14 +178,10 @@ void xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle)
	err = sysfs_create_files(kobj, gt_idle_attrs);
	if (err) {
		kobject_put(kobj);
-		drm_warn(&xe->drm, "failed to register gtidle sysfs, err: %d\n", err);
-		return;
+		return err;
	}

-	err = drmm_add_action_or_reset(&xe->drm, gt_idle_sysfs_fini, kobj);
-	if (err)
-		drm_warn(&xe->drm, "%s: drmm_add_action_or_reset failed, err: %d\n",
-			 __func__, err);
+	return drmm_add_action_or_reset(&xe->drm, gt_idle_sysfs_fini, kobj);
}

void xe_gt_idle_enable_c6(struct xe_gt *gt)

@@ -10,7 +10,7 @@

struct xe_gt;

-void xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle);
+int xe_gt_idle_sysfs_init(struct xe_gt_idle *gtidle);
void xe_gt_idle_enable_c6(struct xe_gt *gt);
void xe_gt_idle_disable_c6(struct xe_gt *gt);

@@ -6,6 +6,7 @@
#include "xe_gt_mcr.h"

#include "regs/xe_gt_regs.h"
+#include "xe_assert.h"
#include "xe_gt.h"
#include "xe_gt_topology.h"
#include "xe_gt_types.h"
@@ -294,14 +295,40 @@ static void init_steering_mslice(struct xe_gt *gt)
	gt->steering[LNCF].instance_target = 0;		/* unused */
}

+static unsigned int dss_per_group(struct xe_gt *gt)
+{
+	if (gt_to_xe(gt)->info.platform == XE_PVC)
+		return 8;
+	else if (GRAPHICS_VERx100(gt_to_xe(gt)) >= 1250)
+		return 4;
+	else
+		return 6;
+}
+
+/**
+ * xe_gt_mcr_get_dss_steering - Get the group/instance steering for a DSS
+ * @gt: GT structure
+ * @dss: DSS ID to obtain steering for
+ * @group: pointer to storage for steering group ID
+ * @instance: pointer to storage for steering instance ID
+ */
+void xe_gt_mcr_get_dss_steering(struct xe_gt *gt, unsigned int dss, u16 *group, u16 *instance)
+{
+	int dss_per_grp = dss_per_group(gt);
+
+	xe_gt_assert(gt, dss < XE_MAX_DSS_FUSE_BITS);
+
+	*group = dss / dss_per_grp;
+	*instance = dss % dss_per_grp;
+}
+
static void init_steering_dss(struct xe_gt *gt)
{
-	unsigned int dss = min(xe_dss_mask_group_ffs(gt->fuse_topo.g_dss_mask, 0, 0),
-			       xe_dss_mask_group_ffs(gt->fuse_topo.c_dss_mask, 0, 0));
-	unsigned int dss_per_grp = gt_to_xe(gt)->info.platform == XE_PVC ? 8 : 4;
-
-	gt->steering[DSS].group_target = dss / dss_per_grp;
-	gt->steering[DSS].instance_target = dss % dss_per_grp;
+	xe_gt_mcr_get_dss_steering(gt,
+				   min(xe_dss_mask_group_ffs(gt->fuse_topo.g_dss_mask, 0, 0),
+				       xe_dss_mask_group_ffs(gt->fuse_topo.c_dss_mask, 0, 0)),
+				   &gt->steering[DSS].group_target,
+				   &gt->steering[DSS].instance_target);
}

static void init_steering_oaddrm(struct xe_gt *gt)

@@ -7,6 +7,7 @@
#define _XE_GT_MCR_H_

#include "regs/xe_reg_defs.h"
+#include "xe_gt_topology.h"

struct drm_printer;
struct xe_gt;
@@ -25,5 +26,18 @@ void xe_gt_mcr_multicast_write(struct xe_gt *gt, struct xe_reg_mcr mcr_reg,
			       u32 value);

void xe_gt_mcr_steering_dump(struct xe_gt *gt, struct drm_printer *p);
+void xe_gt_mcr_get_dss_steering(struct xe_gt *gt, unsigned int dss, u16 *group, u16 *instance);
+
+/*
+ * Loop over each DSS and determine the group and instance IDs that
+ * should be used to steer MCR accesses toward this DSS.
+ * @dss: DSS ID to obtain steering for
+ * @gt: GT structure
+ * @group: steering group ID, data type: u16
+ * @instance: steering instance ID, data type: u16
+ */
+#define for_each_dss_steering(dss, gt, group, instance) \
+	for_each_dss((dss), (gt)) \
+		for_each_if((xe_gt_mcr_get_dss_steering((gt), (dss), &(group), &(instance)), true))

#endif /* _XE_GT_MCR_H_ */
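For illustration, a hedged sketch of how the new helper and iterator could be used together (the register argument and the xe_gt_mcr_unicast_read() call pattern here are assumptions for the example, not part of this series): each DSS present in the fuse masks is visited with its steering group/instance already resolved.

/* Hypothetical sketch only: read one steered register instance per DSS. */
static void example_read_per_dss(struct xe_gt *gt, struct xe_reg_mcr reg)
{
	unsigned int dss;
	u16 group, instance;

	for_each_dss_steering(dss, gt, group, instance) {
		u32 val = xe_gt_mcr_unicast_read(gt, reg, group, instance);

		xe_gt_dbg(gt, "DSS %u (group %u, instance %u): %#x\n",
			  dss, group, instance, val);
	}
}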
|
||||
|
||||
52
drivers/gpu/drm/xe/xe_gt_sriov_pf.c
Normal file
52
drivers/gpu/drm/xe/xe_gt_sriov_pf.c
Normal file
@ -0,0 +1,52 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
/*
|
||||
* Copyright © 2023-2024 Intel Corporation
|
||||
*/
|
||||
|
||||
#include <drm/drm_managed.h>
|
||||
|
||||
#include "xe_gt_sriov_pf.h"
|
||||
#include "xe_gt_sriov_pf_helpers.h"
|
||||
|
||||
/*
|
||||
* VF's metadata is maintained in the flexible array where:
|
||||
* - entry [0] contains metadata for the PF (only if applicable),
|
||||
* - entries [1..n] contain metadata for VF1..VFn::
|
||||
*
|
||||
* <--------------------------- 1 + total_vfs ----------->
|
||||
* +-------+-------+-------+-----------------------+-------+
|
||||
* | 0 | 1 | 2 | | n |
|
||||
* +-------+-------+-------+-----------------------+-------+
|
||||
* | PF | VF1 | VF2 | ... ... | VFn |
|
||||
* +-------+-------+-------+-----------------------+-------+
|
||||
*/
|
||||
static int pf_alloc_metadata(struct xe_gt *gt)
|
||||
{
|
||||
unsigned int num_vfs = xe_gt_sriov_pf_get_totalvfs(gt);
|
||||
|
||||
gt->sriov.pf.vfs = drmm_kcalloc(>_to_xe(gt)->drm, 1 + num_vfs,
|
||||
sizeof(*gt->sriov.pf.vfs), GFP_KERNEL);
|
||||
if (!gt->sriov.pf.vfs)
|
||||
return -ENOMEM;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* xe_gt_sriov_pf_init_early - Prepare SR-IOV PF data structures on PF.
|
||||
* @gt: the &xe_gt to initialize
|
||||
*
|
||||
* Early initialization of the PF data.
|
||||
*
|
||||
* Return: 0 on success or a negative error code on failure.
|
||||
*/
|
||||
int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
|
||||
{
|
||||
int err;
|
||||
|
||||
err = pf_alloc_metadata(gt);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
return 0;
|
||||
}
|
||||
20
drivers/gpu/drm/xe/xe_gt_sriov_pf.h
Normal file
20
drivers/gpu/drm/xe/xe_gt_sriov_pf.h
Normal file
@ -0,0 +1,20 @@
|
||||
/* SPDX-License-Identifier: MIT */
|
||||
/*
|
||||
* Copyright © 2023-2024 Intel Corporation
|
||||
*/
|
||||
|
||||
#ifndef _XE_GT_SRIOV_PF_H_
|
||||
#define _XE_GT_SRIOV_PF_H_
|
||||
|
||||
struct xe_gt;
|
||||
|
||||
#ifdef CONFIG_PCI_IOV
|
||||
int xe_gt_sriov_pf_init_early(struct xe_gt *gt);
|
||||
#else
|
||||
static inline int xe_gt_sriov_pf_init_early(struct xe_gt *gt)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
#endif
|
||||
|
||||
#endif
|
||||
1977
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
Normal file
1977
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
Normal file
File diff suppressed because it is too large
Load Diff
56
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
Normal file
56
drivers/gpu/drm/xe/xe_gt_sriov_pf_config.h
Normal file
@ -0,0 +1,56 @@
|
||||
/* SPDX-License-Identifier: MIT */
|
||||
/*
|
||||
* Copyright © 2023-2024 Intel Corporation
|
||||
*/
|
||||
|
||||
#ifndef _XE_GT_SRIOV_PF_CONFIG_H_
|
||||
#define _XE_GT_SRIOV_PF_CONFIG_H_
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
struct drm_printer;
|
||||
struct xe_gt;
|
||||
|
||||
u64 xe_gt_sriov_pf_config_get_ggtt(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_config_set_ggtt(struct xe_gt *gt, unsigned int vfid, u64 size);
|
||||
int xe_gt_sriov_pf_config_set_fair_ggtt(struct xe_gt *gt,
|
||||
unsigned int vfid, unsigned int num_vfs);
|
||||
int xe_gt_sriov_pf_config_bulk_set_ggtt(struct xe_gt *gt,
|
||||
unsigned int vfid, unsigned int num_vfs, u64 size);
|
||||
|
||||
u32 xe_gt_sriov_pf_config_get_ctxs(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_config_set_ctxs(struct xe_gt *gt, unsigned int vfid, u32 num_ctxs);
|
||||
int xe_gt_sriov_pf_config_set_fair_ctxs(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
|
||||
int xe_gt_sriov_pf_config_bulk_set_ctxs(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
|
||||
u32 num_ctxs);
|
||||
|
||||
u32 xe_gt_sriov_pf_config_get_dbs(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_config_set_dbs(struct xe_gt *gt, unsigned int vfid, u32 num_dbs);
|
||||
int xe_gt_sriov_pf_config_set_fair_dbs(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
|
||||
int xe_gt_sriov_pf_config_bulk_set_dbs(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
|
||||
u32 num_dbs);
|
||||
|
||||
u64 xe_gt_sriov_pf_config_get_lmem(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_config_set_lmem(struct xe_gt *gt, unsigned int vfid, u64 size);
|
||||
int xe_gt_sriov_pf_config_set_fair_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
|
||||
int xe_gt_sriov_pf_config_bulk_set_lmem(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs,
|
||||
u64 size);
|
||||
|
||||
u32 xe_gt_sriov_pf_config_get_exec_quantum(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_config_set_exec_quantum(struct xe_gt *gt, unsigned int vfid, u32 exec_quantum);
|
||||
|
||||
u32 xe_gt_sriov_pf_config_get_preempt_timeout(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_config_set_preempt_timeout(struct xe_gt *gt, unsigned int vfid,
|
||||
u32 preempt_timeout);
|
||||
|
||||
int xe_gt_sriov_pf_config_set_fair(struct xe_gt *gt, unsigned int vfid, unsigned int num_vfs);
|
||||
int xe_gt_sriov_pf_config_release(struct xe_gt *gt, unsigned int vfid, bool force);
|
||||
int xe_gt_sriov_pf_config_push(struct xe_gt *gt, unsigned int vfid, bool refresh);
|
||||
|
||||
int xe_gt_sriov_pf_config_print_ggtt(struct xe_gt *gt, struct drm_printer *p);
|
||||
int xe_gt_sriov_pf_config_print_ctxs(struct xe_gt *gt, struct drm_printer *p);
|
||||
int xe_gt_sriov_pf_config_print_dbs(struct xe_gt *gt, struct drm_printer *p);
|
||||
|
||||
int xe_gt_sriov_pf_config_print_available_ggtt(struct xe_gt *gt, struct drm_printer *p);
|
||||
|
||||
#endif
|
||||
54
drivers/gpu/drm/xe/xe_gt_sriov_pf_config_types.h
Normal file
54
drivers/gpu/drm/xe/xe_gt_sriov_pf_config_types.h
Normal file
@ -0,0 +1,54 @@
|
||||
/* SPDX-License-Identifier: MIT */
|
||||
/*
|
||||
* Copyright © 2023-2024 Intel Corporation
|
||||
*/
|
||||
|
||||
#ifndef _XE_GT_SRIOV_PF_CONFIG_TYPES_H_
|
||||
#define _XE_GT_SRIOV_PF_CONFIG_TYPES_H_
|
||||
|
||||
#include <drm/drm_mm.h>
|
||||
|
||||
struct xe_bo;
|
||||
|
||||
/**
|
||||
* struct xe_gt_sriov_config - GT level per-VF configuration data.
|
||||
*
|
||||
* Used by the PF driver to maintain per-VF provisioning data.
|
||||
*/
|
||||
struct xe_gt_sriov_config {
|
||||
/** @ggtt_region: GGTT region assigned to the VF. */
|
||||
struct drm_mm_node ggtt_region;
|
||||
/** @lmem_obj: LMEM allocation for use by the VF. */
|
||||
struct xe_bo *lmem_obj;
|
||||
/** @num_ctxs: number of GuC contexts IDs. */
|
||||
u16 num_ctxs;
|
||||
/** @begin_ctx: start index of GuC context ID range. */
|
||||
u16 begin_ctx;
|
||||
/** @num_dbs: number of GuC doorbells IDs. */
|
||||
u16 num_dbs;
|
||||
/** @begin_db: start index of GuC doorbell ID range. */
|
||||
u16 begin_db;
|
||||
/** @exec_quantum: execution-quantum in milliseconds. */
|
||||
u32 exec_quantum;
|
||||
/** @preempt_timeout: preemption timeout in microseconds. */
|
||||
u32 preempt_timeout;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct xe_gt_sriov_spare_config - GT-level PF spare configuration data.
|
||||
*
|
||||
* Used by the PF driver to maintain it's own reserved (spare) provisioning
|
||||
* data that is not applicable to be tracked in struct xe_gt_sriov_config.
|
||||
*/
|
||||
struct xe_gt_sriov_spare_config {
|
||||
/** @ggtt_size: GGTT size. */
|
||||
u64 ggtt_size;
|
||||
/** @lmem_size: LMEM size. */
|
||||
u64 lmem_size;
|
||||
/** @num_ctxs: number of GuC submission contexts. */
|
||||
u16 num_ctxs;
|
||||
/** @num_dbs: number of GuC doorbells. */
|
||||
u16 num_dbs;
|
||||
};
|
||||
|
||||
#endif
|
||||
257
drivers/gpu/drm/xe/xe_gt_sriov_pf_control.c
Normal file
257
drivers/gpu/drm/xe/xe_gt_sriov_pf_control.c
Normal file
@ -0,0 +1,257 @@
|
||||
// SPDX-License-Identifier: MIT
|
||||
/*
|
||||
* Copyright © 2023-2024 Intel Corporation
|
||||
*/
|
||||
|
||||
#include "abi/guc_actions_sriov_abi.h"
|
||||
|
||||
#include "xe_device.h"
|
||||
#include "xe_gt.h"
|
||||
#include "xe_gt_sriov_pf_control.h"
|
||||
#include "xe_gt_sriov_printk.h"
|
||||
#include "xe_guc_ct.h"
|
||||
#include "xe_sriov.h"
|
||||
|
||||
static const char *control_cmd_to_string(u32 cmd)
|
||||
{
|
||||
switch (cmd) {
|
||||
case GUC_PF_TRIGGER_VF_PAUSE:
|
||||
return "PAUSE";
|
||||
case GUC_PF_TRIGGER_VF_RESUME:
|
||||
return "RESUME";
|
||||
case GUC_PF_TRIGGER_VF_STOP:
|
||||
return "STOP";
|
||||
case GUC_PF_TRIGGER_VF_FLR_START:
|
||||
return "FLR_START";
|
||||
case GUC_PF_TRIGGER_VF_FLR_FINISH:
|
||||
return "FLR_FINISH";
|
||||
default:
|
||||
return "<unknown>";
|
||||
}
|
||||
}
|
||||
|
||||
static int guc_action_vf_control_cmd(struct xe_guc *guc, u32 vfid, u32 cmd)
|
||||
{
|
||||
u32 request[PF2GUC_VF_CONTROL_REQUEST_MSG_LEN] = {
|
||||
FIELD_PREP(GUC_HXG_MSG_0_ORIGIN, GUC_HXG_ORIGIN_HOST) |
|
||||
FIELD_PREP(GUC_HXG_MSG_0_TYPE, GUC_HXG_TYPE_REQUEST) |
|
||||
FIELD_PREP(GUC_HXG_REQUEST_MSG_0_ACTION, GUC_ACTION_PF2GUC_VF_CONTROL),
|
||||
FIELD_PREP(PF2GUC_VF_CONTROL_REQUEST_MSG_1_VFID, vfid),
|
||||
FIELD_PREP(PF2GUC_VF_CONTROL_REQUEST_MSG_2_COMMAND, cmd),
|
||||
};
|
||||
int ret;
|
||||
|
||||
/* XXX those two commands are now sent from the G2H handler */
|
||||
if (cmd == GUC_PF_TRIGGER_VF_FLR_START || cmd == GUC_PF_TRIGGER_VF_FLR_FINISH)
|
||||
return xe_guc_ct_send_g2h_handler(&guc->ct, request, ARRAY_SIZE(request));
|
||||
|
||||
ret = xe_guc_ct_send_block(&guc->ct, request, ARRAY_SIZE(request));
|
||||
return ret > 0 ? -EPROTO : ret;
|
||||
}
|
||||
|
||||
static int pf_send_vf_control_cmd(struct xe_gt *gt, unsigned int vfid, u32 cmd)
|
||||
{
|
||||
int err;
|
||||
|
||||
xe_gt_assert(gt, vfid != PFID);
|
||||
|
||||
err = guc_action_vf_control_cmd(>->uc.guc, vfid, cmd);
|
||||
if (unlikely(err))
|
||||
xe_gt_sriov_err(gt, "VF%u control command %s failed (%pe)\n",
|
||||
vfid, control_cmd_to_string(cmd), ERR_PTR(err));
|
||||
return err;
|
||||
}
|
||||
|
||||
static int pf_send_vf_pause(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_control_cmd(gt, vfid, GUC_PF_TRIGGER_VF_PAUSE);
|
||||
}
|
||||
|
||||
static int pf_send_vf_resume(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_control_cmd(gt, vfid, GUC_PF_TRIGGER_VF_RESUME);
|
||||
}
|
||||
|
||||
static int pf_send_vf_stop(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_control_cmd(gt, vfid, GUC_PF_TRIGGER_VF_STOP);
|
||||
}
|
||||
|
||||
static int pf_send_vf_flr_start(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_control_cmd(gt, vfid, GUC_PF_TRIGGER_VF_FLR_START);
|
||||
}
|
||||
|
||||
static int pf_send_vf_flr_finish(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_control_cmd(gt, vfid, GUC_PF_TRIGGER_VF_FLR_FINISH);
|
||||
}
|
||||
|
||||
/**
|
||||
* xe_gt_sriov_pf_control_pause_vf - Pause a VF.
|
||||
* @gt: the &xe_gt
|
||||
* @vfid: the VF identifier
|
||||
*
|
||||
* This function is for PF only.
|
||||
*
|
||||
* Return: 0 on success or a negative error code on failure.
|
||||
*/
|
||||
int xe_gt_sriov_pf_control_pause_vf(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_pause(gt, vfid);
|
||||
}
|
||||
|
||||
/**
|
||||
* xe_gt_sriov_pf_control_resume_vf - Resume a VF.
|
||||
* @gt: the &xe_gt
|
||||
* @vfid: the VF identifier
|
||||
*
|
||||
* This function is for PF only.
|
||||
*
|
||||
* Return: 0 on success or a negative error code on failure.
|
||||
*/
|
||||
int xe_gt_sriov_pf_control_resume_vf(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_resume(gt, vfid);
|
||||
}
|
||||
|
||||
/**
|
||||
* xe_gt_sriov_pf_control_stop_vf - Stop a VF.
|
||||
* @gt: the &xe_gt
|
||||
* @vfid: the VF identifier
|
||||
*
|
||||
* This function is for PF only.
|
||||
*
|
||||
* Return: 0 on success or a negative error code on failure.
|
||||
*/
|
||||
int xe_gt_sriov_pf_control_stop_vf(struct xe_gt *gt, unsigned int vfid)
|
||||
{
|
||||
return pf_send_vf_stop(gt, vfid);
|
||||
}
|
||||
|
||||
/**
|
||||
* DOC: The VF FLR Flow with GuC
|
||||
*
|
||||
* PF GUC PCI
|
||||
* ========================================================
|
||||
* | | |
|
||||
* (1) | [ ] <----- FLR --|
|
||||
* | [ ] :
|
||||
* (2) [ ] <-------- NOTIFY FLR --[ ]
|
||||
* [ ] |
|
||||
* (3) [ ] |
|
||||
* [ ] |
|
||||
* [ ]-- START FLR ---------> [ ]
|
||||
* | [ ]
|
||||
* (4) | [ ]
|
||||
* | [ ]
|
||||
* [ ] <--------- FLR DONE -- [ ]
|
||||
* [ ] |
|
||||
* (5) [ ] |
|
||||
* [ ] |
|
||||
* [ ]-- FINISH FLR --------> [ ]
|
||||
* | |
|
||||
*
|
||||
* Step 1: PCI HW generates interrupt to the GuC about VF FLR
|
||||
* Step 2: GuC FW sends G2H notification to the PF about VF FLR
|
||||
* Step 2a: on some platforms G2H is only received from root GuC
|
||||
* Step 3: PF sends H2G request to the GuC to start VF FLR sequence
|
||||
* Step 3a: on some platforms PF must send H2G to all other GuCs
|
||||
* Step 4: GuC FW performs VF FLR cleanups and notifies the PF when done
|
||||
* Step 5: PF performs VF FLR cleanups and notifies the GuC FW when finished
|
||||
*/
|
||||
|
||||
static bool needs_dispatch_flr(struct xe_device *xe)
|
||||
{
|
||||
return xe->info.platform == XE_PVC;
|
||||
}
|
||||
|
||||
static void pf_handle_vf_flr(struct xe_gt *gt, u32 vfid)
|
||||
{
|
||||
struct xe_device *xe = gt_to_xe(gt);
|
||||
struct xe_gt *gtit;
|
||||
unsigned int gtid;
|
||||
|
||||
xe_gt_sriov_info(gt, "VF%u FLR\n", vfid);
|
||||
|
||||
if (needs_dispatch_flr(xe)) {
|
||||
for_each_gt(gtit, xe, gtid)
|
||||
pf_send_vf_flr_start(gtit, vfid);
|
||||
} else {
|
||||
pf_send_vf_flr_start(gt, vfid);
|
||||
}
|
||||
}
|
||||
|
||||
static void pf_handle_vf_flr_done(struct xe_gt *gt, u32 vfid)
|
||||
{
|
||||
pf_send_vf_flr_finish(gt, vfid);
|
||||
}
|
||||
|
||||
static int pf_handle_vf_event(struct xe_gt *gt, u32 vfid, u32 eventid)
|
||||
{
|
||||
switch (eventid) {
|
||||
case GUC_PF_NOTIFY_VF_FLR:
|
||||
pf_handle_vf_flr(gt, vfid);
|
||||
break;
|
||||
case GUC_PF_NOTIFY_VF_FLR_DONE:
|
||||
pf_handle_vf_flr_done(gt, vfid);
|
||||
break;
|
||||
case GUC_PF_NOTIFY_VF_PAUSE_DONE:
|
||||
break;
|
||||
case GUC_PF_NOTIFY_VF_FIXUP_DONE:
|
||||
break;
|
||||
default:
|
||||
return -ENOPKG;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int pf_handle_pf_event(struct xe_gt *gt, u32 eventid)
|
||||
{
|
||||
switch (eventid) {
|
||||
case GUC_PF_NOTIFY_VF_ENABLE:
|
||||
xe_gt_sriov_dbg_verbose(gt, "VFs %s/%s\n",
|
||||
str_enabled_disabled(true),
|
||||
str_enabled_disabled(false));
|
||||
break;
|
||||
default:
|
||||
return -ENOPKG;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* xe_gt_sriov_pf_control_process_guc2pf - Handle VF state notification from GuC.
|
||||
* @gt: the &xe_gt
|
||||
* @msg: the G2H message
|
||||
* @len: the length of the G2H message
|
||||
*
|
||||
* This function is for PF only.
|
||||
*
|
||||
* Return: 0 on success or a negative error code on failure.
|
||||
*/
|
||||
int xe_gt_sriov_pf_control_process_guc2pf(struct xe_gt *gt, const u32 *msg, u32 len)
|
||||
{
|
||||
u32 vfid;
|
||||
u32 eventid;
|
||||
|
||||
xe_gt_assert(gt, len);
|
||||
xe_gt_assert(gt, FIELD_GET(GUC_HXG_MSG_0_ORIGIN, msg[0]) == GUC_HXG_ORIGIN_GUC);
|
||||
xe_gt_assert(gt, FIELD_GET(GUC_HXG_MSG_0_TYPE, msg[0]) == GUC_HXG_TYPE_EVENT);
|
||||
xe_gt_assert(gt, FIELD_GET(GUC_HXG_EVENT_MSG_0_ACTION, msg[0]) ==
|
||||
GUC_ACTION_GUC2PF_VF_STATE_NOTIFY);
|
||||
|
||||
if (unlikely(!xe_device_is_sriov_pf(gt_to_xe(gt))))
|
||||
return -EPROTO;
|
||||
|
||||
if (unlikely(FIELD_GET(GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_0_MBZ, msg[0])))
|
||||
return -EPFNOSUPPORT;
|
||||
|
||||
if (unlikely(len != GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_LEN))
|
||||
return -EPROTO;
|
||||
|
||||
vfid = FIELD_GET(GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_1_VFID, msg[1]);
|
||||
eventid = FIELD_GET(GUC2PF_VF_STATE_NOTIFY_EVENT_MSG_2_EVENT, msg[2]);
|
||||
|
||||
return vfid ? pf_handle_vf_event(gt, vfid, eventid) : pf_handle_pf_event(gt, eventid);
|
||||
}
|
||||
27
drivers/gpu/drm/xe/xe_gt_sriov_pf_control.h
Normal file
27
drivers/gpu/drm/xe/xe_gt_sriov_pf_control.h
Normal file
@ -0,0 +1,27 @@
|
||||
/* SPDX-License-Identifier: MIT */
|
||||
/*
|
||||
* Copyright © 2023-2024 Intel Corporation
|
||||
*/
|
||||
|
||||
#ifndef _XE_GT_SRIOV_PF_CONTROL_H_
|
||||
#define _XE_GT_SRIOV_PF_CONTROL_H_
|
||||
|
||||
#include <linux/errno.h>
|
||||
#include <linux/types.h>
|
||||
|
||||
struct xe_gt;
|
||||
|
||||
int xe_gt_sriov_pf_control_pause_vf(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_control_resume_vf(struct xe_gt *gt, unsigned int vfid);
|
||||
int xe_gt_sriov_pf_control_stop_vf(struct xe_gt *gt, unsigned int vfid);
|
||||
|
||||
#ifdef CONFIG_PCI_IOV
|
||||
int xe_gt_sriov_pf_control_process_guc2pf(struct xe_gt *gt, const u32 *msg, u32 len);
|
||||
#else
|
||||
static inline int xe_gt_sriov_pf_control_process_guc2pf(struct xe_gt *gt, const u32 *msg, u32 len)
|
||||
{
|
||||
return -EPROTO;
|
||||
}
|
||||
#endif
|
||||
|
||||
#endif
|
||||
35
drivers/gpu/drm/xe/xe_gt_sriov_pf_helpers.h
Normal file
35
drivers/gpu/drm/xe/xe_gt_sriov_pf_helpers.h
Normal file
@ -0,0 +1,35 @@
|
||||
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2023-2024 Intel Corporation
 */

#ifndef _XE_GT_SRIOV_PF_HELPERS_H_
#define _XE_GT_SRIOV_PF_HELPERS_H_

#include "xe_gt_types.h"
#include "xe_sriov_pf_helpers.h"

/**
 * xe_gt_sriov_pf_assert_vfid() - warn if &vfid is not a supported VF number when debugging.
 * @gt: the PF &xe_gt to assert on
 * @vfid: the VF number to assert
 *
 * Assert that &gt belongs to the Physical Function (PF) device and provided &vfid
 * is within a range of supported VF numbers (up to maximum number of VFs that
 * driver can support, including VF0 that represents the PF itself).
 *
 * Note: Effective only on debug builds. See `Xe ASSERTs`_ for more information.
 */
#define xe_gt_sriov_pf_assert_vfid(gt, vfid)	xe_sriov_pf_assert_vfid(gt_to_xe(gt), (vfid))

static inline int xe_gt_sriov_pf_get_totalvfs(struct xe_gt *gt)
{
	return xe_sriov_pf_get_totalvfs(gt_to_xe(gt));
}

static inline struct mutex *xe_gt_sriov_pf_master_mutex(struct xe_gt *gt)
{
	return xe_sriov_pf_master_mutex(gt_to_xe(gt));
}

#endif
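
A sketch of how these helpers are intended to compose; the iterating helper below is hypothetical, and note that provisionable VF numbers start at 1 since VFID 0 denotes the PF itself:

/* Illustrative sketch, not part of this series: walk all
 * provisionable VFs on a PF device.
 */
static void example_for_each_vf(struct xe_gt *gt)
{
	unsigned int vfid, totalvfs = xe_gt_sriov_pf_get_totalvfs(gt);

	for (vfid = 1; vfid <= totalvfs; vfid++) {
		xe_gt_sriov_pf_assert_vfid(gt, vfid);
		/* per-VF work would go here, typically while holding
		 * xe_gt_sriov_pf_master_mutex(gt)
		 */
	}
}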

drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.c (new file, 418 lines)
@@ -0,0 +1,418 @@
// SPDX-License-Identifier: MIT
/*
 * Copyright © 2023-2024 Intel Corporation
 */

#include "abi/guc_actions_sriov_abi.h"

#include "xe_bo.h"
#include "xe_gt.h"
#include "xe_gt_sriov_pf_helpers.h"
#include "xe_gt_sriov_pf_policy.h"
#include "xe_gt_sriov_printk.h"
#include "xe_guc_ct.h"
#include "xe_guc_klv_helpers.h"
#include "xe_pm.h"

/*
 * Return: number of KLVs that were successfully parsed and saved,
 * negative error code on failure.
 */
static int guc_action_update_vgt_policy(struct xe_guc *guc, u64 addr, u32 size)
{
	u32 request[] = {
		GUC_ACTION_PF2GUC_UPDATE_VGT_POLICY,
		lower_32_bits(addr),
		upper_32_bits(addr),
		size,
	};

	return xe_guc_ct_send_block(&guc->ct, request, ARRAY_SIZE(request));
}

/*
 * Return: number of KLVs that were successfully parsed and saved,
 * negative error code on failure.
 */
static int pf_send_policy_klvs(struct xe_gt *gt, const u32 *klvs, u32 num_dwords)
{
	const u32 bytes = num_dwords * sizeof(u32);
	struct xe_tile *tile = gt_to_tile(gt);
	struct xe_device *xe = tile_to_xe(tile);
	struct xe_guc *guc = &gt->uc.guc;
	struct xe_bo *bo;
	int ret;

	bo = xe_bo_create_pin_map(xe, tile, NULL,
				  ALIGN(bytes, PAGE_SIZE),
				  ttm_bo_type_kernel,
				  XE_BO_FLAG_VRAM_IF_DGFX(tile) |
				  XE_BO_FLAG_GGTT);
	if (IS_ERR(bo))
		return PTR_ERR(bo);

	xe_map_memcpy_to(xe, &bo->vmap, 0, klvs, bytes);

	ret = guc_action_update_vgt_policy(guc, xe_bo_ggtt_addr(bo), num_dwords);

	xe_bo_unpin_map_no_vm(bo);

	return ret;
}

/*
 * Return: 0 on success, -ENOKEY if some KLVs were not updated, -EPROTO if reply was malformed,
 * negative error code on failure.
 */
static int pf_push_policy_klvs(struct xe_gt *gt, u32 num_klvs,
			       const u32 *klvs, u32 num_dwords)
{
	int ret;

	xe_gt_assert(gt, num_klvs == xe_guc_klv_count(klvs, num_dwords));

	ret = pf_send_policy_klvs(gt, klvs, num_dwords);

	if (ret != num_klvs) {
		int err = ret < 0 ? ret : ret < num_klvs ? -ENOKEY : -EPROTO;
		struct drm_printer p = xe_gt_info_printer(gt);

		xe_gt_sriov_notice(gt, "Failed to push %u policy KLV%s (%pe)\n",
				   num_klvs, str_plural(num_klvs), ERR_PTR(err));
		xe_guc_klv_print(klvs, num_dwords, &p);
		return err;
	}

	return 0;
}

static int pf_push_policy_u32(struct xe_gt *gt, u16 key, u32 value)
{
	u32 klv[] = {
		PREP_GUC_KLV(key, 1),
		value,
	};

	return pf_push_policy_klvs(gt, 1, klv, ARRAY_SIZE(klv));
}
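
The KLV (key/length/value) stream sent to the GuC is a flat dword array: each item is one PREP_GUC_KLV() header dword followed by its payload. A sketch of a two-item buffer with made-up values (the wrapper function is hypothetical):

/* Illustrative only: two u32 policy KLVs packed back to back, exactly
 * as pf_push_policy_klvs() expects them; the values are invented.
 */
static int example_push_two_klvs(struct xe_gt *gt)
{
	u32 klvs[] = {
		PREP_GUC_KLV(GUC_KLV_VGT_POLICY_SCHED_IF_IDLE_KEY, 1),
		1,	/* enable strict scheduling */
		PREP_GUC_KLV(GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_KEY, 1),
		1000,	/* sample adverse events every 1000 ms */
	};

	return pf_push_policy_klvs(gt, 2, klvs, ARRAY_SIZE(klvs));
}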

static int pf_update_policy_bool(struct xe_gt *gt, u16 key, bool *policy, bool value)
{
	int err;

	err = pf_push_policy_u32(gt, key, value);
	if (unlikely(err)) {
		xe_gt_sriov_notice(gt, "Failed to update policy %#x '%s' to '%s' (%pe)\n",
				   key, xe_guc_klv_key_to_string(key),
				   str_enabled_disabled(value), ERR_PTR(err));
		return err;
	}

	xe_gt_sriov_dbg(gt, "policy key %#x '%s' updated to '%s'\n",
			key, xe_guc_klv_key_to_string(key),
			str_enabled_disabled(value));

	*policy = value;
	return 0;
}

static int pf_update_policy_u32(struct xe_gt *gt, u16 key, u32 *policy, u32 value)
{
	int err;

	err = pf_push_policy_u32(gt, key, value);
	if (unlikely(err)) {
		xe_gt_sriov_notice(gt, "Failed to update policy %#x '%s' to '%s' (%pe)\n",
				   key, xe_guc_klv_key_to_string(key),
				   str_enabled_disabled(value), ERR_PTR(err));
		return err;
	}

	xe_gt_sriov_dbg(gt, "policy key %#x '%s' updated to %u\n",
			key, xe_guc_klv_key_to_string(key), value);

	*policy = value;
	return 0;
}

static int pf_provision_sched_if_idle(struct xe_gt *gt, bool enable)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	return pf_update_policy_bool(gt, GUC_KLV_VGT_POLICY_SCHED_IF_IDLE_KEY,
				     &gt->sriov.pf.policy.guc.sched_if_idle,
				     enable);
}

static int pf_reprovision_sched_if_idle(struct xe_gt *gt)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	return pf_provision_sched_if_idle(gt, gt->sriov.pf.policy.guc.sched_if_idle);
}

static void pf_sanitize_sched_if_idle(struct xe_gt *gt)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	gt->sriov.pf.policy.guc.sched_if_idle = false;
}

/**
 * xe_gt_sriov_pf_policy_set_sched_if_idle - Control the 'sched_if_idle' policy.
 * @gt: the &xe_gt where to apply the policy
 * @enable: the value of the 'sched_if_idle' policy
 *
 * This function can only be called on PF.
 *
 * Return: 0 on success or a negative error code on failure.
 */
int xe_gt_sriov_pf_policy_set_sched_if_idle(struct xe_gt *gt, bool enable)
{
	int err;

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	err = pf_provision_sched_if_idle(gt, enable);
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return err;
}

/**
 * xe_gt_sriov_pf_policy_get_sched_if_idle - Retrieve value of 'sched_if_idle' policy.
 * @gt: the &xe_gt where to read the policy from
 *
 * This function can only be called on PF.
 *
 * Return: value of 'sched_if_idle' policy.
 */
bool xe_gt_sriov_pf_policy_get_sched_if_idle(struct xe_gt *gt)
{
	bool enable;

	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	enable = gt->sriov.pf.policy.guc.sched_if_idle;
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return enable;
}

static int pf_provision_reset_engine(struct xe_gt *gt, bool enable)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	return pf_update_policy_bool(gt, GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_KEY,
				     &gt->sriov.pf.policy.guc.reset_engine, enable);
}

static int pf_reprovision_reset_engine(struct xe_gt *gt)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	return pf_provision_reset_engine(gt, gt->sriov.pf.policy.guc.reset_engine);
}

static void pf_sanitize_reset_engine(struct xe_gt *gt)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	gt->sriov.pf.policy.guc.reset_engine = false;
}

/**
 * xe_gt_sriov_pf_policy_set_reset_engine - Control the 'reset_engine' policy.
 * @gt: the &xe_gt where to apply the policy
 * @enable: the value of the 'reset_engine' policy
 *
 * This function can only be called on PF.
 *
 * Return: 0 on success or a negative error code on failure.
 */
int xe_gt_sriov_pf_policy_set_reset_engine(struct xe_gt *gt, bool enable)
{
	int err;

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	err = pf_provision_reset_engine(gt, enable);
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return err;
}

/**
 * xe_gt_sriov_pf_policy_get_reset_engine - Retrieve value of 'reset_engine' policy.
 * @gt: the &xe_gt where to read the policy from
 *
 * This function can only be called on PF.
 *
 * Return: value of 'reset_engine' policy.
 */
bool xe_gt_sriov_pf_policy_get_reset_engine(struct xe_gt *gt)
{
	bool enable;

	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	enable = gt->sriov.pf.policy.guc.reset_engine;
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return enable;
}

static int pf_provision_sample_period(struct xe_gt *gt, u32 value)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	return pf_update_policy_u32(gt, GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_KEY,
				    &gt->sriov.pf.policy.guc.sample_period, value);
}

static int pf_reprovision_sample_period(struct xe_gt *gt)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	return pf_provision_sample_period(gt, gt->sriov.pf.policy.guc.sample_period);
}

static void pf_sanitize_sample_period(struct xe_gt *gt)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));
	lockdep_assert_held(xe_gt_sriov_pf_master_mutex(gt));

	gt->sriov.pf.policy.guc.sample_period = 0;
}

/**
 * xe_gt_sriov_pf_policy_set_sample_period - Control the 'sample_period' policy.
 * @gt: the &xe_gt where to apply the policy
 * @value: the value of the 'sample_period' policy
 *
 * This function can only be called on PF.
 *
 * Return: 0 on success or a negative error code on failure.
 */
int xe_gt_sriov_pf_policy_set_sample_period(struct xe_gt *gt, u32 value)
{
	int err;

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	err = pf_provision_sample_period(gt, value);
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return err;
}

/**
 * xe_gt_sriov_pf_policy_get_sample_period - Retrieve value of 'sample_period' policy.
 * @gt: the &xe_gt where to read the policy from
 *
 * This function can only be called on PF.
 *
 * Return: value of 'sample_period' policy.
 */
u32 xe_gt_sriov_pf_policy_get_sample_period(struct xe_gt *gt)
{
	u32 value;

	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	value = gt->sriov.pf.policy.guc.sample_period;
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return value;
}

static void pf_sanitize_guc_policies(struct xe_gt *gt)
{
	pf_sanitize_sched_if_idle(gt);
	pf_sanitize_reset_engine(gt);
	pf_sanitize_sample_period(gt);
}

/**
 * xe_gt_sriov_pf_policy_sanitize - Reset policy settings.
 * @gt: the &xe_gt
 *
 * This function can only be called on PF.
 */
void xe_gt_sriov_pf_policy_sanitize(struct xe_gt *gt)
{
	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	pf_sanitize_guc_policies(gt);
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));
}

/**
 * xe_gt_sriov_pf_policy_reprovision - Reprovision (and optionally reset) policy settings.
 * @gt: the &xe_gt
 * @reset: if true will reprovision using default values instead of latest
 *
 * This function can only be called on PF.
 *
 * Return: 0 on success or a negative error code on failure.
 */
int xe_gt_sriov_pf_policy_reprovision(struct xe_gt *gt, bool reset)
{
	int err = 0;

	xe_pm_runtime_get_noresume(gt_to_xe(gt));

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	if (reset)
		pf_sanitize_guc_policies(gt);
	err |= pf_reprovision_sched_if_idle(gt);
	err |= pf_reprovision_reset_engine(gt);
	err |= pf_reprovision_sample_period(gt);
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	xe_pm_runtime_put(gt_to_xe(gt));

	return err ? -ENXIO : 0;
}
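
A plausible call site, sketched under assumptions: after a GuC reload the firmware starts with its own defaults, so the PF would push the cached policy values back (reset=false keeps the latest provisioned values, reset=true reapplies the driver defaults first):

/* Hypothetical reset-recovery snippet, not part of this series. */
static void example_policy_restore(struct xe_gt *gt)
{
	int err = xe_gt_sriov_pf_policy_reprovision(gt, false);

	if (err)
		xe_gt_sriov_notice(gt, "Failed to restore policies (%pe)\n",
				   ERR_PTR(err));
}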

static void print_guc_policies(struct drm_printer *p, struct xe_gt_sriov_guc_policies *policy)
{
	drm_printf(p, "%s:\t%s\n",
		   xe_guc_klv_key_to_string(GUC_KLV_VGT_POLICY_SCHED_IF_IDLE_KEY),
		   str_enabled_disabled(policy->sched_if_idle));
	drm_printf(p, "%s:\t%s\n",
		   xe_guc_klv_key_to_string(GUC_KLV_VGT_POLICY_RESET_AFTER_VF_SWITCH_KEY),
		   str_enabled_disabled(policy->reset_engine));
	drm_printf(p, "%s:\t%u %s\n",
		   xe_guc_klv_key_to_string(GUC_KLV_VGT_POLICY_ADVERSE_SAMPLE_PERIOD_KEY),
		   policy->sample_period, policy->sample_period ? "ms" : "(disabled)");
}

/**
 * xe_gt_sriov_pf_policy_print - Dump actual policy values.
 * @gt: the &xe_gt where to read the policy from
 * @p: the &drm_printer
 *
 * This function can only be called on PF.
 *
 * Return: 0 on success or a negative error code on failure.
 */
int xe_gt_sriov_pf_policy_print(struct xe_gt *gt, struct drm_printer *p)
{
	xe_gt_assert(gt, IS_SRIOV_PF(gt_to_xe(gt)));

	mutex_lock(xe_gt_sriov_pf_master_mutex(gt));
	print_guc_policies(p, &gt->sriov.pf.policy.guc);
	mutex_unlock(xe_gt_sriov_pf_master_mutex(gt));

	return 0;
}

drivers/gpu/drm/xe/xe_gt_sriov_pf_policy.h (new file, 25 lines)
@@ -0,0 +1,25 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2023-2024 Intel Corporation
 */

#ifndef _XE_GT_SRIOV_PF_POLICY_H_
#define _XE_GT_SRIOV_PF_POLICY_H_

#include <linux/types.h>

struct drm_printer;
struct xe_gt;

int xe_gt_sriov_pf_policy_set_sched_if_idle(struct xe_gt *gt, bool enable);
bool xe_gt_sriov_pf_policy_get_sched_if_idle(struct xe_gt *gt);
int xe_gt_sriov_pf_policy_set_reset_engine(struct xe_gt *gt, bool enable);
bool xe_gt_sriov_pf_policy_get_reset_engine(struct xe_gt *gt);
int xe_gt_sriov_pf_policy_set_sample_period(struct xe_gt *gt, u32 value);
u32 xe_gt_sriov_pf_policy_get_sample_period(struct xe_gt *gt);

void xe_gt_sriov_pf_policy_sanitize(struct xe_gt *gt);
int xe_gt_sriov_pf_policy_reprovision(struct xe_gt *gt, bool reset);
int xe_gt_sriov_pf_policy_print(struct xe_gt *gt, struct drm_printer *p);

#endif
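
Usage is straightforward; a hedged sketch of a hypothetical caller (both entry points serialize on the PF master mutex internally, so no extra locking is needed at this level):

/* Hypothetical caller, not part of this series: the getter reads back
 * the cached value, which is only updated once the GuC accepted the
 * corresponding KLV.
 */
static int example_enable_strict_scheduling(struct xe_gt *gt)
{
	int err = xe_gt_sriov_pf_policy_set_sched_if_idle(gt, true);

	if (!err)
		xe_gt_assert(gt, xe_gt_sriov_pf_policy_get_sched_if_idle(gt));

	return err;
}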

drivers/gpu/drm/xe/xe_gt_sriov_pf_policy_types.h (new file, 31 lines)
@@ -0,0 +1,31 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2023-2024 Intel Corporation
 */

#ifndef _XE_GT_SRIOV_PF_POLICY_TYPES_H_
#define _XE_GT_SRIOV_PF_POLICY_TYPES_H_

#include <linux/types.h>

/**
 * struct xe_gt_sriov_guc_policies - GuC SR-IOV policies.
 * @sched_if_idle: controls strict scheduling policy.
 * @reset_engine: controls engines reset on VF switch policy.
 * @sample_period: adverse events sampling period (in milliseconds).
 */
struct xe_gt_sriov_guc_policies {
	bool sched_if_idle;
	bool reset_engine;
	u32 sample_period;
};

/**
 * struct xe_gt_sriov_pf_policy - PF policy data.
 * @guc: GuC scheduling policies.
 */
struct xe_gt_sriov_pf_policy {
	struct xe_gt_sriov_guc_policies guc;
};

#endif
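
Worth noting: zero-initialization of this struct already encodes the sanitized "everything off" state that pf_sanitize_guc_policies() restores; spelled out explicitly (illustrative only, not part of this series):

/* Illustrative only: the sanitized defaults made explicit. */
static const struct xe_gt_sriov_guc_policies example_defaults = {
	.sched_if_idle = false,	/* no strict scheduling */
	.reset_engine = false,	/* no engine reset on VF switch */
	.sample_period = 0,	/* adverse event sampling disabled */
};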

drivers/gpu/drm/xe/xe_gt_sriov_pf_types.h (new file, 34 lines)
@@ -0,0 +1,34 @@
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2023-2024 Intel Corporation
 */

#ifndef _XE_GT_SRIOV_PF_TYPES_H_
#define _XE_GT_SRIOV_PF_TYPES_H_

#include <linux/types.h>

#include "xe_gt_sriov_pf_config_types.h"
#include "xe_gt_sriov_pf_policy_types.h"

/**
 * struct xe_gt_sriov_metadata - GT level per-VF metadata.
 */
struct xe_gt_sriov_metadata {
	/** @config: per-VF provisioning data. */
	struct xe_gt_sriov_config config;
};

/**
 * struct xe_gt_sriov_pf - GT level PF virtualization data.
 * @policy: policy data.
 * @spare: PF-only provisioning configuration.
 * @vfs: metadata for all VFs.
 */
struct xe_gt_sriov_pf {
	struct xe_gt_sriov_pf_policy policy;
	struct xe_gt_sriov_spare_config spare;
	struct xe_gt_sriov_metadata *vfs;
};

#endif

drivers/gpu/drm/xe/xe_gt_sysfs.c
@@ -29,7 +29,7 @@ static void gt_sysfs_fini(struct drm_device *drm, void *arg)
 	kobject_put(gt->sysfs);
 }
 
-void xe_gt_sysfs_init(struct xe_gt *gt)
+int xe_gt_sysfs_init(struct xe_gt *gt)
 {
 	struct xe_tile *tile = gt_to_tile(gt);
 	struct xe_device *xe = gt_to_xe(gt);
@@ -38,24 +38,18 @@ void xe_gt_sysfs_init(struct xe_gt *gt)
 
 	kg = kzalloc(sizeof(*kg), GFP_KERNEL);
 	if (!kg)
-		return;
+		return -ENOMEM;
 
 	kobject_init(&kg->base, &xe_gt_sysfs_kobj_type);
 	kg->gt = gt;
 
 	err = kobject_add(&kg->base, tile->sysfs, "gt%d", gt->info.id);
 	if (err) {
-		drm_warn(&xe->drm, "failed to add GT sysfs directory, err: %d\n", err);
 		kobject_put(&kg->base);
-		return;
+		return err;
 	}
 
 	gt->sysfs = &kg->base;
 
-	err = drmm_add_action_or_reset(&xe->drm, gt_sysfs_fini, gt);
-	if (err) {
-		drm_warn(&xe->drm, "%s: drmm_add_action_or_reset failed, err: %d\n",
-			 __func__, err);
-		return;
-	}
+	return drmm_add_action_or_reset(&xe->drm, gt_sysfs_fini, gt);
 }
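
With the void-to-int conversion, a caller can now fail GT initialization cleanly instead of continuing with a half-created sysfs node; a hedged fragment of the intended calling pattern (the surrounding caller is assumed, not shown in this diff):

/* Hedged sketch of a caller honoring the new return value instead of
 * the old warn-and-continue behavior; err propagates up the init path.
 */
err = xe_gt_sysfs_init(gt);
if (err)
	return err;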

drivers/gpu/drm/xe/xe_gt_sysfs.h
@@ -8,7 +8,7 @@
 
 #include "xe_gt_sysfs_types.h"
 
-void xe_gt_sysfs_init(struct xe_gt *gt);
+int xe_gt_sysfs_init(struct xe_gt *gt);
 
 static inline struct xe_gt *
 kobj_to_gt(struct kobject *kobj)

drivers/gpu/drm/xe/xe_gt_throttle_sysfs.c
@@ -11,6 +11,7 @@
 #include "xe_gt_sysfs.h"
 #include "xe_gt_throttle_sysfs.h"
 #include "xe_mmio.h"
+#include "xe_pm.h"
 
 /**
  * DOC: Xe GT Throttle
@@ -38,10 +39,12 @@ static u32 read_perf_limit_reasons(struct xe_gt *gt)
 {
 	u32 reg;
 
+	xe_pm_runtime_get(gt_to_xe(gt));
 	if (xe_gt_is_media_type(gt))
 		reg = xe_mmio_read32(gt, MTL_MEDIA_PERF_LIMIT_REASONS);
 	else
 		reg = xe_mmio_read32(gt, GT0_PERF_LIMIT_REASONS);
+	xe_pm_runtime_put(gt_to_xe(gt));
 
 	return reg;
 }
@@ -233,19 +236,14 @@ static void gt_throttle_sysfs_fini(struct drm_device *drm, void *arg)
 	sysfs_remove_group(gt->freq, &throttle_group_attrs);
 }
 
-void xe_gt_throttle_sysfs_init(struct xe_gt *gt)
+int xe_gt_throttle_sysfs_init(struct xe_gt *gt)
 {
 	struct xe_device *xe = gt_to_xe(gt);
 	int err;
 
 	err = sysfs_create_group(gt->freq, &throttle_group_attrs);
-	if (err) {
-		drm_warn(&xe->drm, "failed to register throttle sysfs, err: %d\n", err);
-		return;
-	}
-
-	err = drmm_add_action_or_reset(&xe->drm, gt_throttle_sysfs_fini, gt);
 	if (err)
-		drm_warn(&xe->drm, "%s: drmm_add_action_or_reset failed, err: %d\n",
-			 __func__, err);
+		return err;
+
+	return drmm_add_action_or_reset(&xe->drm, gt_throttle_sysfs_fini, gt);
 }

drivers/gpu/drm/xe/xe_gt_throttle_sysfs.h
@@ -10,7 +10,7 @@
 
 struct xe_gt;
 
-void xe_gt_throttle_sysfs_init(struct xe_gt *gt);
+int xe_gt_throttle_sysfs_init(struct xe_gt *gt);
 
 #endif /* _XE_GT_THROTTLE_SYSFS_H_ */

drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
@@ -11,7 +11,9 @@
 #include "xe_gt_printk.h"
 #include "xe_guc.h"
 #include "xe_guc_ct.h"
+#include "xe_mmio.h"
 #include "xe_trace.h"
+#include "regs/xe_guc_regs.h"
 
 #define TLB_TIMEOUT	(HZ / 4)
 
@@ -209,7 +211,7 @@ static int send_tlb_invalidation(struct xe_guc *guc,
  * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success,
  * negative error code on error.
  */
-int xe_gt_tlb_invalidation_guc(struct xe_gt *gt)
+static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt)
 {
 	u32 action[] = {
 		XE_GUC_ACTION_TLB_INVALIDATION,
@@ -221,6 +223,45 @@ int xe_gt_tlb_invalidation_guc(struct xe_gt *gt)
 			ARRAY_SIZE(action));
 }
 
+/**
+ * xe_gt_tlb_invalidation_ggtt - Issue a TLB invalidation on this GT for the GGTT
+ * @gt: graphics tile
+ *
+ * Issue a TLB invalidation for the GGTT. Completion of TLB invalidation is
+ * synchronous.
+ *
+ * Return: 0 on success, negative error code on error
+ */
+int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
+{
+	struct xe_device *xe = gt_to_xe(gt);
+
+	if (xe_guc_ct_enabled(&gt->uc.guc.ct) &&
+	    gt->uc.guc.submission_state.enabled) {
+		int seqno;
+
+		seqno = xe_gt_tlb_invalidation_guc(gt);
+		if (seqno <= 0)
+			return seqno;
+
+		xe_gt_tlb_invalidation_wait(gt, seqno);
+	} else if (xe_device_uc_enabled(xe)) {
+		xe_gt_WARN_ON(gt, xe_force_wake_get(gt_to_fw(gt), XE_FW_GT));
+		if (xe->info.platform == XE_PVC || GRAPHICS_VER(xe) >= 20) {
+			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC1,
+					PVC_GUC_TLB_INV_DESC1_INVALIDATE);
+			xe_mmio_write32(gt, PVC_GUC_TLB_INV_DESC0,
+					PVC_GUC_TLB_INV_DESC0_VALID);
+		} else {
+			xe_mmio_write32(gt, GUC_TLB_INV_CR,
+					GUC_TLB_INV_CR_INVALIDATE);
+		}
+		xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+	}
+
+	return 0;
+}
+
 /**
  * xe_gt_tlb_invalidation_vma - Issue a TLB invalidation on this GT for a VMA
  * @gt: graphics tile
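
A hedged usage fragment for xe_gt_tlb_invalidation_ggtt() shown above (not part of this diff; the surrounding caller is assumed): a caller that has just rewritten GGTT entries would invalidate before the mapping is reused, relying on the synchronous completion described in the kernel-doc:

/* Illustrative fragment: stale translations are guaranteed gone once
 * this returns 0, whether the invalidation went via GuC or via the
 * MMIO fallback paths.
 */
err = xe_gt_tlb_invalidation_ggtt(gt);
if (err)
	drm_warn(&gt_to_xe(gt)->drm,
		 "GGTT TLB invalidation failed (%d)\n", err);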
Some files were not shown because too many files have changed in this diff.