Abstract
In this article, we introduce an hp certified reduced basis (RB) method for parabolic partial differential equations. We invoke a Proper Orthogonal Decomposition (POD) (in time)/Greedy (in parameter) sampling procedure first in the initial partition of the parameter domain (h-refinement) and subsequently in the construction of RB approximation spaces restricted to each parameter subdomain (p-refinement). We show that proper balance between additional POD modes and additional parameter values in the initial subdivision process guarantees convergence of the approach. We present numerical results for two model problems: linear convection–diffusion and quadratically non-linear Boussinesq natural convection. The new procedure is significantly faster (more costly) in the RB Online (Offline) stage.
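As a rough illustration of the h-refinement half of the procedure (the Offline subdivision loop), the sketch below recursively bisects a one-dimensional parameter domain until a fixed small number of POD modes meets a coarse tolerance on each subdomain's train points; p-refinement (a full POD/Greedy per accepted subdomain) would then follow. All names here (`hp_pod_greedy`, `err`, `split`) and the toy error model are our own illustrative stand-ins, not the paper's: the true error surrogate is the certified a posteriori bound, and the true split is driven by anchor parameter values.

```python
def hp_pod_greedy(subdomain, train, err, tol_h, n_h, split):
    """h-refinement sketch: bisect a parameter (sub)domain until an
    n_h-mode surrogate meets tol_h over that subdomain's train points.
    Assumes every subdomain produced retains at least one train point."""
    worst = max(err(mu, subdomain, n_h) for mu in train)
    if worst <= tol_h:
        return [subdomain]  # accept; p-refinement (full POD/Greedy) runs here later
    left, right, train_l, train_r = split(subdomain, train)
    return (hp_pod_greedy(left, train_l, err, tol_h, n_h, split)
            + hp_pod_greedy(right, train_r, err, tol_h, n_h, split))

# Hypothetical stand-ins: a real err is the certified RB error bound,
# and a real split uses distances to anchor parameter values.
def err(mu, sub, n):
    lo, hi = sub
    return (hi - lo) * 2.0 ** (-n)  # toy: bound shrinks with subdomain size and modes

def split(sub, train):
    lo, hi = sub
    mid = 0.5 * (lo + hi)  # plain bisection for this 1D sketch
    return ((lo, mid), (mid, hi),
            [mu for mu in train if mu <= mid],
            [mu for mu in train if mu > mid])

train = [i / 24 for i in range(25)]  # deterministic regular train grid
leaves = hp_pod_greedy((0.0, 1.0), train, err, tol_h=0.05, n_h=2, split=split)
```

With these toy choices the recursion halves the domain until each piece has width 0.125, producing a partition of eight subdomains that covers the full parameter interval.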
Acknowledgements
This work was supported by the Norwegian University of Science and Technology, AFOSR Grant No. FA9550-07-1-0425, and OSD/AFOSR Grant No. FA9550-09-1-0613.
Notes
1 In the linear case , and it thus follows from Equation (9) and the definition of (we recall that is coercive) that Equation (10) simplifies to .
2 We refer to [Citation12,Citation15] for details on the Construction–Evaluation procedure for the computation of lower bounds for the stability constants – a Successive Constraint Method (SCM).
3 We note that .
4 We note that after completion of the hp-POD/Greedy, we can apply the SCM algorithm independently for each parameter subdomain; we thus expect a reduction in the SCM (Online) evaluation cost because the size of the parameter domain is effectively reduced.
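The Online saving described in this note amounts to a dispatch: locate the subdomain containing the query parameter, then evaluate the SCM lower bound built Offline on that (smaller) subdomain alone. The following minimal sketch assumes a 1D parameter and represents each per-subdomain SCM bound as an opaque callable; the function name and data layout are our illustrative choices, not the paper's.

```python
def online_lower_bound(mu, subdomains, scm_bounds):
    """Dispatch sketch: evaluate the per-subdomain SCM lower bound at mu.
    subdomains is a list of disjoint 1D intervals (lo, hi); scm_bounds[k]
    stands in for the SCM surrogate trained Offline on subdomains[k]."""
    for k, (lo, hi) in enumerate(subdomains):
        if lo <= mu <= hi:
            return scm_bounds[k](mu)
    raise ValueError("mu outside the parameter domain")

# toy data: two subdomains with constant (mock) stability lower bounds
subdomains = [(0.0, 0.5), (0.5, 1.0)]
scm_bounds = [lambda mu: 0.1, lambda mu: 0.2]
```

Because each SCM surrogate is built over a smaller parameter region, its constraint set is smaller, which is the source of the reduced Online evaluation cost noted above.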
5 We suppose here for simplicity that ; hence for all .
6 In fact, we should interpret M here as the number of subdomains generated by Algorithm 4.3 so far; the , , are not necessarily the final M subdomains. With this interpretation, we thus do not presume termination of the algorithm.
7 Equation (16) is satisfied with . We note that is continuous in its second argument because, by the Cauchy–Schwarz inequality, .
8 To ensure a good spread over of the rather few (25 or 64 for our two examples) initial train points, we use for a deterministic initial regular grid. (For the train sample enrichment, we use random points.) As some train points belong to a regular grid, the procedure may produce ‘aligned’ subdomain boundaries, as seen in .
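The train-sample construction in this note (a deterministic regular grid initially, random points for enrichment) can be sketched as follows; the function names and the two-parameter bounds are our illustrative assumptions, with 5 points per dimension giving the 25-point grid mentioned for the first example.

```python
import itertools
import random

def initial_train_sample(bounds, n_per_dim):
    """Deterministic regular grid of n_per_dim**d points over the
    parameter box given by bounds = [(lo_1, hi_1), ..., (lo_d, hi_d)]."""
    axes = [[lo + i * (hi - lo) / (n_per_dim - 1) for i in range(n_per_dim)]
            for lo, hi in bounds]
    return list(itertools.product(*axes))

def enrich_train_sample(bounds, n_new, seed=0):
    """Random (uniform) enrichment points for the train sample."""
    rng = random.Random(seed)
    return [tuple(rng.uniform(lo, hi) for lo, hi in bounds)
            for _ in range(n_new)]

# hypothetical 2D parameter domain: 5x5 = 25 initial points, then enrichment
bounds = [(0.1, 1.0), (0.0, 2.0)]
grid = initial_train_sample(bounds, 5)
extra = enrich_train_sample(bounds, 100)
```

The regular initial grid explains the ‘aligned’ subdomain boundaries noted above: subdivision anchors drawn from grid points tend to share coordinates.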
9 We note that .