author    Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>  2011-07-12 03:28:54 +0800
committer Avi Kivity <avi@redhat.com>  2011-07-24 11:50:34 +0300
commit    fce92dce79dbf5fff39c7ac2fb149729d79b7a39 (patch)
tree      455461b843f5f94356786ea0e21132740458588a /arch/x86
parent    c37079586f317d7e7f1a70d36f0e5177691c89c2 (diff)
KVM: MMU: filter out the mmio pfn from the fault pfn
If the page fault is caused by mmio, the gfn cannot be found in the memslots, and 'bad_pfn' is returned on the gfn_to_hva path, so we can use 'bad_pfn' to identify the mmio page fault. Also, to clarify the meaning of the mmio pfn, we return the fault page instead of the bad page when the gfn is not allowed to prefetch.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
Diffstat (limited to 'arch/x86')
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5334b4e9ecc7..96a7ed4e6837 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2085,8 +2085,8 @@ static pfn_t pte_prefetch_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn,
 
 	slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, no_dirty_log);
 	if (!slot) {
-		get_page(bad_page);
-		return page_to_pfn(bad_page);
+		get_page(fault_page);
+		return page_to_pfn(fault_page);
 	}
 
 	hva = gfn_to_hva_memslot(slot, gfn);
