head	1.2;
access;
symbols
	pkgsrc-2019Q2:1.1.0.4
	pkgsrc-2019Q2-base:1.1
	pkgsrc-2019Q1:1.1.0.2
	pkgsrc-2019Q1-base:1.1;
locks; strict;
comment	@# @;


1.2
date	2019.08.30.13.16.27;	author bouyer;	state dead;
branches;
next	1.1;
commitid	Bhbqj9CPVWgYm3BB;

1.1
date	2019.03.07.11.13.27;	author bouyer;	state Exp;
branches;
next	;
commitid	Gzute5jK7xPyjqeB;


desc
@@


1.2
log
@Upgrade Xen 4.11 packages to 4.11.2.
CHANGES since 4.11.1:
- include security patches up to and including XSA297
- various performance improvements, code cleanup and bug fixes
@
text
@$NetBSD: patch-XSA290-2,v 1.1 2019/03/07 11:13:27 bouyer Exp $

From: Jan Beulich
Subject: x86/mm: add explicit preemption checks to L3 (un)validation

When recursive page tables are used at the L3 level, unvalidation of a
single L4 table may incur unvalidation of two levels of L3 tables, i.e.
a maximum iteration count of 512^3 for unvalidating an L4 table. The
preemption check in free_l2_table() as well as the one in
_put_page_type() may never be reached, so explicit checking is needed in
free_l3_table().

When recursive page tables are used at the L4 level, the iteration count
at L4 alone is capped at 512^2. As soon as a present L3 entry is hit
which itself needs unvalidation (and hence requires another nested loop
with 512 iterations), the preemption checks added here kick in, so no
further preemption checking is needed at L4 (until we decide to permit
5-level paging for PV guests).

The validation side additions are done just for symmetry.

This is part of XSA-290.
Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper

--- xen/arch/x86/mm.c.orig
+++ xen/arch/x86/mm.c
@@@@ -1581,6 +1581,13 @@@@ static int alloc_l3_table(struct page_in
     for ( i = page->nr_validated_ptes; i < L3_PAGETABLE_ENTRIES;
           i++, partial = 0 )
     {
+        if ( i > page->nr_validated_ptes && hypercall_preempt_check() )
+        {
+            page->nr_validated_ptes = i;
+            rc = -ERESTART;
+            break;
+        }
+
         if ( is_pv_32bit_domain(d) && (i == 3) )
         {
             if ( !(l3e_get_flags(pl3e[i]) & _PAGE_PRESENT) ||
@@@@ -1882,15 +1889,25 @@@@ static int free_l3_table(struct page_inf
 
     pl3e = map_domain_page(_mfn(pfn));
 
-    do {
+    for ( ; ; )
+    {
         rc = put_page_from_l3e(pl3e[i], pfn, partial, 0);
         if ( rc < 0 )
             break;
+
         partial = 0;
-        if ( rc > 0 )
-            continue;
-        pl3e[i] = unadjust_guest_l3e(pl3e[i], d);
-    } while ( i-- );
+        if ( rc == 0 )
+            pl3e[i] = unadjust_guest_l3e(pl3e[i], d);
+
+        if ( !i-- )
+            break;
+
+        if ( hypercall_preempt_check() )
+        {
+            rc = -EINTR;
+            break;
+        }
+    }
 
     unmap_domain_page(pl3e);
 
@


1.1
log
@Update to 4.11.1nb1
PKGREVISION set to 1 on purpose, because this is not a stock 4.11.1
kernel (it includes security patches).
4.11.1 includes all security patches up to XSA282.
Apply official patches for XSA284, XSA285, XSA287, XSA288, XSA290,
XSA291, XSA292, XSA293 and XSA294.
Other changes since 4.11.0 are mostly bugfixes, no new features.
@
text
@d1 1
a1 1
$NetBSD: $
@