KVM: arm64: Rename the device variable to s2_force_noncacheable

To perform cache maintenance on a region of memory, KVM/arm64 relies on
that region having a cacheable alias in the kernel's address space which
can be used with CMO instructions.

The 'device' variable is somewhat of a misnomer, as it actually
indicates whether or not the stage-2 alias is allowed to have cacheable
memory attributes. The resulting stage-2 memory attributes are further
modified by VM_ALLOW_ANY_UNCACHED, selecting between Normal-NC or
Device-nGnRE depending on what the endpoint supports.

Rename the variable to s2_force_noncacheable so that its purpose is a bit
more obvious.
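
After the rename, the stage-2 attribute selection in user_mem_abort()
reads roughly as follows (a simplified sketch of the hunk further below;
the cacheable path and surrounding context are omitted):

    if (s2_force_noncacheable) {
        if (vfio_allow_any_uc)
            prot |= KVM_PGTABLE_PROT_NORMAL_NC;
        else
            prot |= KVM_PGTABLE_PROT_DEVICE;
    }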

CC: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Tested-by: Donald Dutile <ddutile@redhat.com>
Signed-off-by: Ankit Agrawal <ankita@nvidia.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20250705071717.5062-2-ankita@nvidia.com
[ Oliver: addressed typos, wound up rewriting changelog ]
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
@@ -1478,7 +1478,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
int ret = 0;
bool write_fault, writable, force_pte = false;
bool exec_fault, mte_allowed;
-bool device = false, vfio_allow_any_uc = false;
+bool s2_force_noncacheable = false, vfio_allow_any_uc = false;
unsigned long mmu_seq;
phys_addr_t ipa = fault_ipa;
struct kvm *kvm = vcpu->kvm;
@@ -1653,7 +1653,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* In both cases, we don't let transparent_hugepage_adjust()
* change things at the last minute.
*/
-device = true;
+s2_force_noncacheable = true;
} else if (logging_active && !write_fault) {
/*
* Only actually map the page as writable if this was a write
@@ -1662,7 +1662,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
writable = false;
}
-if (exec_fault && device)
+if (exec_fault && s2_force_noncacheable)
return -ENOEXEC;
/*
@@ -1695,7 +1695,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* If we are not forced to use page mapping, check if we are
* backed by a THP and thus use block mapping if possible.
*/
-if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
+if (vma_pagesize == PAGE_SIZE && !(force_pte || s2_force_noncacheable)) {
if (fault_is_perm && fault_granule > PAGE_SIZE)
vma_pagesize = fault_granule;
else
@@ -1709,7 +1709,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
}
-if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
+if (!fault_is_perm && !s2_force_noncacheable && kvm_has_mte(kvm)) {
/* Check the VMM hasn't introduced a new disallowed VMA */
if (mte_allowed) {
sanitise_mte_tags(kvm, pfn, vma_pagesize);
@@ -1725,7 +1725,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (exec_fault)
prot |= KVM_PGTABLE_PROT_X;
-if (device) {
+if (s2_force_noncacheable) {
if (vfio_allow_any_uc)
prot |= KVM_PGTABLE_PROT_NORMAL_NC;
else