Mathematical and Computer Modelling of Dynamical Systems
Methods, Tools and Applications in Engineering and Related Sciences
Volume 22, 2016 - Issue 4: Model Order Reduction

A-posteriori error analysis for lithium-ion concentrations in batteries utilizing the reduced-basis method

Pages 362-379 | Received 16 Oct 2015, Accepted 02 Jun 2016, Published online: 24 Jun 2016

ABSTRACT

In this paper, the authors consider a parametrized non-linear parabolic differential equation, which is motivated by lithium-ion battery models. A standard finite volume discretization leads to a high-dimensional discrete non-linear problem so that simulation of the parametrized problem for various different parameters is very costly. Therefore, the reduced-basis method is applied, so that the number of degrees of freedom is reduced significantly and a fast numerical simulation of the model is possible. To control the error, an a-posteriori error estimator is derived. Numerical experiments show the efficiency of the approach.

1. Introduction

Interest in lithium-ion batteries has increased in recent years. Several companies worldwide are developing such batteries for consumer electronics and, in particular, for electric-vehicle applications. To achieve the performance and lifetime demands in this area, accurate mathematical models of the battery are required. In this work, we consider a parametrized partial differential equation (μPDE) that occurs in lithium-ion battery models (see [Citation1,Citation2]) as an equation for the concentration of lithium ions. This equation describes the mass transport in the (positive) electrode of a battery. We refer to [Citation3,Citation4], for instance, for a different system of μPDEs describing lithium-ion batteries.

The discretization of the non-linear μPDE using (cell-centred) finite volume (FV) techniques (see, e.g. [Citation5]) leads to very large systems that are expensive to solve. The goal is to develop a reduced-order model for the parametrized PDE that is cheap to evaluate. This is motivated by applications like parameter estimation, optimal control and design, where repeated evaluations of the non-linear equation are required. In this paper, the spatial approximation is realized by applying the reduced-basis (RB) method [Citation6,Citation7], where the FV solution has to be computed for certain different values of the parameters. The RB solution of the μPDE for any admissible parameter $\mu \in \mathcal D$ is computed (during the online stage) as a Galerkin projection onto the RB space, which is defined by a typically small set of solutions of the μPDE computed during the offline stage for a small, carefully chosen subset of parameter values $\mathcal D_{\mathrm{train}} \subset \mathcal D$. A combination of the usual greedy strategy and the method of proper orthogonal decomposition (POD), the POD-greedy algorithm (cf. [Citation8]), is used as the classical sampling strategy to select the parameter values that define the set of basis functions. To require as few FV solutions as possible, a rigorous and quickly evaluable a-posteriori error estimate is needed in the greedy method; see [Citation9,Citation10] for parabolic problems. We derive an a-posteriori error estimate for the μPDE under consideration, where we have to deal carefully with the non-linear term, which is not monotone and does not possess an affine decomposition. This theoretical result is utilized in our numerical experiments to construct an RB basis. After obtaining an efficient reduced-order model, we want to utilize it in a parameter estimation problem. For the RB discretization of the coupled lithium-ion battery model, we refer to [Citation11,Citation12]. Further, POD reduced-order modelling for simulation and parameter estimation of battery models is utilized in [Citation13] and [Citation14,Citation15], for instance.

The paper is organized in the following manner: In Section 2, we introduce the μPDE describing the mass transport in the (positive) electrode of a battery. The FV scheme is briefly explained in Section 3. The RB and POD method are explained in Section 4. Further, the POD-greedy algorithm is recalled. Section 5 is devoted to the a-posteriori error analysis. A parameter estimation problem is introduced in Section 6 and numerical tests are presented in Section 7.

2. Problem formulation

For $L > 0$ let $\Omega = (0,L)$ be the spatial interval, $T > 0$ the final time and $Q = \Omega \times (0,T)$. By $H = L^2(\Omega)$, we denote the Lebesgue space of (equivalence classes of) functions which are (Lebesgue) measurable and square integrable. Furthermore, $V = H^1(\Omega) \subset H$ stands for the Sobolev space

\[
V = \Big\{ \varphi \in H \;\Big|\; \int_\Omega |\varphi'(x)|^2 \,\mathrm dx < \infty \Big\}.
\]

For more details on Lebesgue and Sobolev spaces, we refer to [Citation16], for instance. The space $L^2(0,T;V)$ stands for the space of (equivalence classes of) measurable abstract functions $\varphi:[0,T]\to V$, which are square integrable, i.e.

\[
\int_0^T \|\varphi(t)\|_V^2 \,\mathrm dt < \infty.
\]

When t is fixed, the expression φ(t) stands for the function φ(t,⋅) considered as a function in Ω only. Recall that

\[
W(0,T) = \big\{ \varphi \in L^2(0,T;V) \;\big|\; \varphi_t \in L^2(0,T;V') \big\}
\]

is a Hilbert space supplied with its corresponding inner product, where $\varphi_t$ is the weak partial derivative of $\varphi$ with respect to $t$ and $V'$ denotes the dual space of $V$; see, e.g. [Citation16].

The set of admissible parameters is given as

\[
\mathcal D = \big\{ \mu = (\mu_1,\mu_2) \in \mathbb R^2 \;\big|\; \mu_i^a \le \mu_i \le \mu_i^b \text{ for } i = 1,2 \big\}
\]

with $0 < \mu_i^a \le \mu_i^b$ for $i = 1,2$. For an arbitrary $\mu = (\mu_1,\mu_2) \in \mathcal D$, we consider the semilinear parabolic problem

\[
c_t(t,x) - \frac{\partial}{\partial x}\big(\mu_1\, c_x(t,x)\big) = 0 \quad\text{for all } (t,x) \in Q \tag{1a}
\]

together with the Neumann and Robin boundary conditions

\[
c_x(t,0) = 0, \qquad \mu_1\, c_x(t,L) = -\mu_2 \sqrt{c(t,L)} \quad\text{for all } t \in [0,T] \tag{1b}
\]

and the initial condition

\[
c(0,x) = c_0(x) \quad\text{for all } x \in \Omega. \tag{1c}
\]

We assume that $c_0$ belongs to $C(\bar\Omega)$ and satisfies

\[
0 < c_0^a \le c_0(x) \le c_0^b \quad\text{for all } x \in \bar\Omega \tag{2}
\]

with constants 0<c0ac0b<. Moreover, the concentration should be at least non-negative so that we can evaluate the square root term in (1b).

Remark 2.1:

(1) Problem (1) describes the mass transport in the (positive) electrode of a lithium-ion battery and $c$ stands for the concentration of lithium ions. The non-linear boundary condition (1b) contains the $c$-dependent prefactor of the Butler–Volmer equation describing the exchange of the lithium ions at the interface between the electrode and the electrolyte [Citation17]. The parameter $\mu_1$ represents the diffusion coefficient and the parameter $\mu_2$ the remaining parts of the Butler–Volmer equation. Hence, $\mu_1$ limits the distribution of lithium ions in the electrode and $\mu_2$ limits the lithium ions which pass the boundary, i.e. which ‘leave’ or ‘enter’ the electrode.

(2) Since the results of a one-dimensional battery model are comparable to the ones obtained by the associated three-dimensional model [Citation12], the spatial domain is chosen to be an interval in this paper.◊

For a given parameter $\mu \in \mathcal D$, a function $c = c(\mu) \in L^2(0,T;V)$ is called a weak solution to (1) provided $c$ satisfies $c > 0$ in $Q$ and

\[
\int_0^T\!\!\int_\Omega -c(t)\,\varphi_t(t) + \mu_1\, c_x(t)\,\varphi_x(t)\,\mathrm dx\,\mathrm dt + \mu_2 \int_0^T \sqrt{c(t,L)}\,\varphi(t,L)\,\mathrm dt = \int_\Omega c_0\,\varphi(0)\,\mathrm dx \tag{3}
\]

holds for all $\varphi \in H^1(0,T;H) \cap L^2(0,T;V)$ with $\varphi(T) = 0$ in $\Omega$, where $\varphi_x$ is the weak derivative of $\varphi$ with respect to $x$.
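To see how the boundary condition enters the weak formulation, one can multiply (1a) by a test function $\varphi$ with $\varphi(T) = 0$, integrate over $Q$ and integrate by parts in $t$ and in $x$ (a short, standard calculation):

\[
0 = \int_0^T\!\!\int_\Omega \big(c_t - (\mu_1 c_x)_x\big)\varphi\,\mathrm dx\,\mathrm dt
  = \int_0^T\!\!\int_\Omega \big(-c\,\varphi_t + \mu_1 c_x\varphi_x\big)\,\mathrm dx\,\mathrm dt - \int_\Omega c_0\,\varphi(0)\,\mathrm dx - \int_0^T \mu_1 c_x(t,L)\,\varphi(t,L)\,\mathrm dt.
\]

The boundary term at $x = 0$ vanishes by (1b) and, inserting $\mu_1 c_x(t,L) = -\mu_2\sqrt{c(t,L)}$ from (1b), the last term becomes $+\mu_2\int_0^T \sqrt{c(t,L)}\,\varphi(t,L)\,\mathrm dt$, which yields (3).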

Assumption 2.2

For any $\mu \in \mathcal D$, there exists a unique weak solution $c = c(\mu) \in W(0,T) \cap C(\bar Q)$ to (1) satisfying

\[
c_{\min} \le c(t,x) \le c_{\max} \quad\text{for all } (t,x) \in \bar Q \tag{4}
\]

with $\mu$-independent constants $0 < c_{\min} \le c_{\max}$.

Remark 2.3:

Let $\mu \in \mathcal D$ and let (2) hold. Then, it follows from [Citation18, Proposition 3.3] that there exists a time $T_0 > 0$ such that (1) admits a unique solution in $W(0,T_0) \cap C(\bar Q_0)$ with $\bar Q_0 = [0,T_0] \times \bar\Omega$. This solution satisfies the bounds (4) in $\bar Q_0$. However, it is not a-priori clear that $T_0 \ge T$ holds, so that we can only ensure the unique existence of a positive weak solution locally in time.◊

3. FV discretization

We discretize (3) by the classical FV method; see, e.g. [Citation5]. The integration over time is realized by the backward Euler method. For given $\mathcal N \in \mathbb N$, let $h = L/\mathcal N$ be the equidistant spatial grid size. Define the centre points $x_i = h/2 + (i-1)h$ for $i = 1,\dots,\mathcal N$. We divide $\Omega$ into $\mathcal N$ subintervals $\Omega_i = [x_i - h/2,\, x_i + h/2]$. Moreover, for given $K \in \mathbb N$, let $k = T/(K-1)$ be the equidistant time grid size and $t_j = (j-1)k$, $j = 1,\dots,K$, the temporal grid points. By $c_{j,i}^h(\mu)$, $i = 1,\dots,\mathcal N$ and $j = 1,\dots,K$, we denote the FV approximation of the concentration $c(\mu)$ at $(t_j,x_i) \in \bar Q$ for a given parameter $\mu \in \mathcal D$. We define the vector

\[
c_j^h(\mu) = \big(c_{j,1}^h(\mu),\dots,c_{j,\mathcal N}^h(\mu)\big)^\top \in \mathbb R^{\mathcal N} \quad\text{for } j = 1,\dots,K \text{ and } \mu \in \mathcal D.
\]

Proceeding in the standard way, we derive the following FV scheme for a given parameter $\mu \in \mathcal D$:

\[
0 \overset{!}{=} F_j^h\big(c_j^h(\mu);\mu\big) := L_{\mathrm{im}}^h(\mu_1)\,c_j^h(\mu) - L_{\mathrm{ex}}^h\,c_{j-1}^h(\mu) + \mu_2\,g^h\big(c_j^h(\mu)\big) \tag{5}
\]

for $j = 2,\dots,K$, where we have used the matrices for the implicit and explicit parts

\[
L_{\mathrm{im}}^h(\mu_1) = h I^h + \mu_1 S^h \in \mathbb R^{\mathcal N\times\mathcal N}, \qquad L_{\mathrm{ex}}^h = h I^h \in \mathbb R^{\mathcal N\times\mathcal N},
\]

respectively, with the identity matrix $I^h$, the stiffness matrix $S^h$ and the non-linear boundary term $g^h$ given by

\[
S^h = \frac{k}{h}\begin{pmatrix}
1 & -1 & & & \\
-1 & 2 & -1 & & \\
& \ddots & \ddots & \ddots & \\
& & -1 & 2 & -1 \\
& & & -1 & 1
\end{pmatrix} \in \mathbb R^{\mathcal N\times\mathcal N},
\qquad
g^h\big(c_j^h(\mu)\big) = \begin{pmatrix} 0\\ \vdots\\ 0\\ k\sqrt{c_{j,\mathcal N}^h(\mu)} \end{pmatrix} \in \mathbb R^{\mathcal N}.
\]

Moreover, we put $c_{1,i}^h(\mu) = c_0(x_i)$ for $1 \le i \le \mathcal N$. In the following assumption, we suppose that the FV scheme admits a unique solution with positive values.

Assumption 3.1

For any $\mu \in \mathcal D$ and any $\mathcal N, K \in \mathbb N$, there exists a unique solution sequence $\{c_j^h(\mu)\}_{j=1}^K \subset \mathbb R^{\mathcal N}$ satisfying $c_{1,i}^h(\mu) = c_0(x_i)$ for $1 \le i \le \mathcal N$ and (5). Moreover, we have

\[
c_{\min} \le c_{j,i}^h(\mu) \le c_{\max} \quad\text{for } 1 \le j \le K \text{ and } 1 \le i \le \mathcal N \tag{6}
\]

with $\mu$-independent constants $c_{\min}$ and $c_{\max}$ introduced in Assumption 2.2.

Remark 3.2:

To solve (5) numerically, we apply a globalized Newton method [Citation19], starting with the numerical solution of the previous time step.◊
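As an illustration, a minimal Python/NumPy sketch of the FV time stepping (5) could look as follows. It uses a plain (undamped) Newton iteration instead of the globalized method of [Citation19], assumes the square-root boundary term of (1b), and all function names and default values are illustrative only.

```python
import numpy as np

def fv_solve(mu1, mu2, c0, L=1.0, T=1.0, n=100, K=101, tol=1e-10):
    """Backward-Euler finite volume scheme (5) for the battery model (1)."""
    h, k = L / n, T / (K - 1)
    # stiffness matrix S^h and the implicit/explicit operators of Section 3
    S = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    S[0, 0] = S[-1, -1] = 1.0
    S *= k / h
    L_im = h * np.eye(n) + mu1 * S
    L_ex = h * np.eye(n)

    x = h / 2 + h * np.arange(n)                      # cell centres x_i
    C = np.zeros((K, n))
    C[0] = c0(x)                                      # c_{1,i} = c_0(x_i)
    for j in range(1, K):
        c = C[j - 1].copy()                           # Newton start: previous time step
        for _ in range(50):
            g = np.zeros(n)
            g[-1] = k * np.sqrt(c[-1])                # non-linear boundary term g^h
            F = L_im @ c - L_ex @ C[j - 1] + mu2 * g  # residual of (5)
            if np.linalg.norm(F, np.inf) < tol:
                break
            J = L_im.copy()
            J[-1, -1] += mu2 * k / (2.0 * np.sqrt(c[-1]))  # derivative of the sqrt term
            c -= np.linalg.solve(J, F)
        C[j] = c
    return x, C.T                                     # snapshots, shape (n, K)

# usage: constant initial concentration, one parameter pair
x, snapshots = fv_solve(mu1=1.0, mu2=0.01, c0=lambda x: np.full_like(x, 1.0))
```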

4. The RB method

In this section, we introduce the RB method for (5). By applying this model-order reduction technique, we can decrease the computational complexity of the problem by significantly diminishing the number of unknowns compared with the FV model. We follow the approach presented in [Citation8]. The main idea of the RB method is that a small number of appropriate FV solutions of (5), corresponding to a small set of parameter choices, can define a sufficiently rich space in which the solution of (5) (for any new parameter value) can be accurately computed. Clearly, applying the RB method is only worthwhile if one is interested in many function evaluations. Here, we are interested in a parameter estimation based on (5); see Sections 6 and 7.2.

4.1. RB scheme for the battery model

Let us suppose that we have computed an orthonormal basis of RB functions $\{\xi_i\}_{i=1}^N \subset \mathbb R^{\mathcal N}$ with $N \ll \mathcal N$. The algorithm used for the basis computation is the POD-greedy algorithm described in Section 4.2. We set

\[
\Xi = [\xi_1\,|\,\dots\,|\,\xi_N] \in \mathbb R^{\mathcal N\times N}.
\]

Using the Galerkin projection, we approximate the solution of (5) by a function belonging to the space spanned by the RB functions, which we represent by an $\mathcal N$-dimensional vector $c_j^N(\mu)$, $j \in \{1,\dots,K\}$. Thus, we assume that the RB approximation is given by the vectors

\[
c_j^N(\mu) = \big(c_{j,1}^N(\mu),\dots,c_{j,\mathcal N}^N(\mu)\big)^\top = \sum_{i=1}^N \underline c_{j,i}^N(\mu)\,\xi_i = \Xi\,\underline c_j^N(\mu) \in \mathbb R^{\mathcal N} \quad\text{for } j = 1,\dots,K \text{ and } \mu \in \mathcal D,
\]

where we set $\underline c_j^N(\mu) = \big(\underline c_{j,1}^N(\mu),\dots,\underline c_{j,N}^N(\mu)\big)^\top \in \mathbb R^N$. Let us introduce a weighted inner product in $\mathbb R^{\mathcal N}$ by

\[
\langle x, y\rangle_W = x^\top W y \quad\text{for } x, y \in \mathbb R^{\mathcal N}
\]

with a symmetric and positive definite matrix $W \in \mathbb R^{\mathcal N\times\mathcal N}$ (i.e. $W = W^\top$ and $W \succ 0$). We replace $c_j^h(\mu)$ in (5) by $c_j^N(\mu)$ and multiply by $\xi_i^\top W$ from the left for $1 \le i \le N$. Then, for any $\mu \in \mathcal D$, we derive the equations

\[
0 \overset{!}{=} \big\langle \xi_i,\, F_j^h\big(c_j^N(\mu);\mu\big)\big\rangle_W = \xi_i^\top W\, F_j^h\big(c_j^N(\mu);\mu\big) \quad\text{for } i = 1,\dots,N \text{ and } j = 2,\dots,K,
\]

which can be expressed as the system

\[
0 \overset{!}{=} F_j^N\big(c_j^N(\mu);\mu\big) := \Xi^\top W\, F_j^h\big(c_j^N(\mu);\mu\big) \in \mathbb R^N \quad\text{for } j = 2,\dots,K \tag{7}
\]

of $N$ non-linear equations for the $N$ unknown coefficients $\{\underline c_{j,i}^N(\mu)\}_{i=1}^N$. More precisely, the low-dimensional RB scheme for Equation (3) reads

\[
F_j^N\big(c_j^N(\mu);\mu\big) = L_{\mathrm{im}}^N(\mu_1)\,\underline c_j^N(\mu) - L_{\mathrm{ex}}^N\,\underline c_{j-1}^N(\mu) + \mu_2\, g^N\big(\Xi\,\underline c_j^N(\mu)\big) \in \mathbb R^N, \tag{8}
\]

where we have set the matrices

\[
L_{\mathrm{im}}^N(\mu_1) = \Xi^\top W\, L_{\mathrm{im}}^h(\mu_1)\,\Xi \in \mathbb R^{N\times N}, \qquad L_{\mathrm{ex}}^N = \Xi^\top W\, L_{\mathrm{ex}}^h\,\Xi \in \mathbb R^{N\times N}
\]

and the vector

\[
g^N\big(\Xi\,\underline c_j^N(\mu)\big) = \Xi^\top W\, g^h\big(\Xi\,\underline c_j^N(\mu)\big) \in \mathbb R^N.
\]
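A corresponding sketch for the reduced system (7)/(8) is given below; it reuses the plain Newton iteration of the FV sketch in Section 3 and assumes a user-supplied basis matrix Xi and weight matrix W (again, all names are illustrative).

```python
import numpy as np

def rb_solve(Xi, W, L_im_h, L_ex_h, mu2, c0_vec, K, k, tol=1e-10):
    """Reduced-basis time stepping (7)-(8); Xi has shape (N_fv, N_rb)."""
    L_im_N = Xi.T @ W @ L_im_h @ Xi               # reduced implicit operator
    L_ex_N = Xi.T @ W @ L_ex_h @ Xi               # reduced explicit operator
    w_last = (Xi.T @ W)[:, -1]                    # reduced image of the boundary unit vector
    # W-orthogonal projection of the initial condition onto the RB space
    a = np.linalg.solve(Xi.T @ W @ Xi, Xi.T @ W @ c0_vec)
    coeffs = [a]
    for _ in range(1, K):
        a_old = coeffs[-1]
        a = a_old.copy()
        for _ in range(50):
            c_last = Xi[-1, :] @ a                # last entry of the lifted RB solution
            F = L_im_N @ a - L_ex_N @ a_old + mu2 * k * np.sqrt(c_last) * w_last
            if np.linalg.norm(F, np.inf) < tol:
                break
            J = L_im_N + mu2 * k / (2.0 * np.sqrt(c_last)) * np.outer(w_last, Xi[-1, :])
            a -= np.linalg.solve(J, F)
        coeffs.append(a)
    return Xi @ np.array(coeffs).T                # lifted solutions c_j^N, one column per time step
```

If $\Xi$ is $W$-orthonormal, $\Xi^\top W\Xi$ is the identity and the projection of the initial condition simplifies to $\Xi^\top W c_0$.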

Analogous to Assumption 3.1, we suppose the following hypothesis.

Assumption 4.1

For any $\mu \in \mathcal D$ and any $N, K \in \mathbb N$, there exists a unique solution sequence $\{c_j^N(\mu)\}_{j=1}^K \subset \mathbb R^{\mathcal N}$ satisfying (7) and

\[
c_{\min} \le c_{j,i}^N(\mu) \le c_{\max} \quad\text{for } 1 \le j \le K \text{ and } 1 \le i \le \mathcal N \tag{9}
\]

with $\mu$-independent constants $c_{\min}$ and $c_{\max}$ introduced in Assumption 2.2.

4.2. Basis computation

4.2.1. POD

The POD method is one of the main components of the POD-greedy algorithm required for the selection of the RB basis functions. Given a solution of (5) represented by a snapshot matrix $C^h(\mu) \in \mathbb R^{\mathcal N\times K}$ in time, the POD method extracts its essential characteristics, called POD basis functions. In order to define $\ell \le d = \operatorname{rank} C^h(\mu)$ POD basis vectors and the associated POD modes, the following optimization problem is solved [Citation20–Citation22]:

\[
\min J(\psi_1,\dots,\psi_\ell) = \sum_{j=1}^K \alpha_j \Big\| c_j^h(\mu) - \sum_{i=1}^\ell \big\langle c_j^h(\mu),\psi_i\big\rangle_W\,\psi_i \Big\|_W^2 \quad\text{subject to } \{\psi_i\}_{i=1}^\ell \subset \mathbb R^{\mathcal N} \text{ and } \langle\psi_i,\psi_j\rangle_W = \delta_{ij} \text{ for } 1 \le i,j \le \ell, \tag{10}
\]

where $\|\cdot\|_W = \langle\cdot\,,\cdot\rangle_W^{1/2}$ denotes the weighted norm in $\mathbb R^{\mathcal N}$, $\delta_{ij}$ denotes the Kronecker symbol, and $\alpha_1 = \alpha_K = \Delta t/2$, $\alpha_j = \Delta t$ for $2 \le j \le K-1$ are the trapezoidal weights associated with our temporal grid; see [Citation22, Section 4]. The solution of the optimization problem (10) is characterized by the first $\ell$ eigenvectors $\{\psi_i\}_{i=1}^\ell$ solving the symmetric eigenvalue problem

\[
\sum_{j=1}^K \alpha_j \big\langle c_j^h(\mu),\psi_i\big\rangle_W\, c_j^h(\mu) = \lambda_i \psi_i \quad\text{for } i = 1,\dots,\mathcal N \tag{11}
\]

with $\lambda_1 \ge \dots \ge \lambda_d > \lambda_{d+1} = \dots = \lambda_{\mathcal N} = 0$. We introduce the POD space $V^\ell = \operatorname{span}\{\psi_1,\dots,\psi_\ell\} \subset \mathbb R^{\mathcal N}$ and the orthogonal projection $P^\ell: \mathbb R^{\mathcal N} \to V^\ell$ by

\[
P^\ell x = \sum_{i=1}^\ell \langle x,\psi_i\rangle_W\,\psi_i = \Psi\Psi^\top W x \quad\text{for } x \in \mathbb R^{\mathcal N}
\]

with $\Psi = [\psi_1\,|\,\dots\,|\,\psi_\ell] \in \mathbb R^{\mathcal N\times\ell}$. In Algorithm 4.1, the POD method is summarized.

Algorithm 4.1 (POD algorithm)
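The algorithm listing is not reproduced here. A possible realization (a sketch based on the method-of-snapshots form of the eigenvalue problem (11); names and interfaces are illustrative and not necessarily those used for the experiments in Section 7) is:

```python
import numpy as np

def pod(C, W, alphas, ell):
    """W-weighted POD of the snapshot matrix C (shape N_fv x K), cf. (10)-(11).

    Returns Psi (N_fv x ell) with W-orthonormal columns and the leading eigenvalues."""
    D_half = np.diag(np.sqrt(alphas))
    # method of snapshots: a small K x K eigenvalue problem instead of (11) in R^{N_fv}
    K_mat = D_half @ (C.T @ W @ C) @ D_half
    lam, V = np.linalg.eigh(K_mat)                # eigenvalues in ascending order
    lam, V = lam[::-1], V[:, ::-1]                # sort descending
    Psi = np.column_stack([(C @ (D_half @ V[:, i])) / np.sqrt(lam[i]) for i in range(ell)])
    return Psi, lam[:ell]

# usage with the trapezoidal weights of (10): alphas = [dt/2, dt, ..., dt, dt/2]
```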

4.2.2. The POD-greedy algorithm

The first step of the POD-greedy algorithm is to consider a large enough discrete training set

\[
\mathcal D_{\mathrm{train}} = \{\mu^1,\dots,\mu^p\} \subset \mathcal D \quad\text{with } p \in \mathbb N.
\]

For an arbitrary initial parameter choice $\mu^1 \in \mathcal D_{\mathrm{train}}$, we compute the FV solution $\{c_j^h(\mu^1)\}_{j=1}^K \subset \mathbb R^{\mathcal N}$ of (5). By using the POD method, we compute a POD basis that defines the first RB basis of the method. Once we have this RB basis, we can compute the solution of (8) for all parameters $\mu \in \mathcal D_{\mathrm{train}}$ and compare it with the corresponding FV solutions of (5) through the error estimator introduced in Section 5.

If the largest estimated error is smaller than a predefined tolerance $0 < \varepsilon_{\mathrm{gr}} \ll 1$, we stop the algorithm. If it is above $\varepsilon_{\mathrm{gr}}$, we need to enrich the RB basis further. We compute the FV solution of (5) for the parameter at which the estimated error is largest. We apply the POD algorithm to the projection error, i.e. the difference between this FV solution and its projection onto the current RB basis, cf. Algorithm 4.2. We add this information to the already computed basis. We then repeat the error estimation between the FV and the RB solutions over the whole parameter set $\mathcal D_{\mathrm{train}}$, this time with the updated RB basis. We proceed with this basis enrichment until the estimated error is below $\varepsilon_{\mathrm{gr}}$. We note from this description that the quality of the error estimator affects the number of basis functions selected and consequently the accuracy of the RB solution as well as the required computational time. Section 5 is entirely devoted to the error estimation analysis, which represents the main contribution of this paper.

The POD-greedy method is presented in Algorithm 4.2 [Citation8].

Algorithm 4.2 (POD-greedy algorithm)
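Again, the listing itself is not reproduced. A schematic sketch of the weak POD-greedy loop, assuming routines with the interfaces documented below (for instance, pod_fn could wrap the POD sketch of Section 4.2.1 with fixed W and weights, and error_estimate could evaluate the estimator of Section 5), reads:

```python
import numpy as np

def pod_greedy(D_train, fv_solve, pod_fn, error_estimate, W,
               eps_gr=1e-4, n_add=1, max_iter=50):
    """Weak POD-greedy basis generation (Algorithm 4.2), schematic sketch.

    fv_solve(mu)           -> FV snapshot matrix of shape (N_fv, K)
    pod_fn(C, n_add)       -> first n_add W-orthonormal POD modes of C
    error_estimate(Xi, mu) -> estimated RB errors Delta_j^N(mu), j = 2,...,K
    """
    mu0 = D_train[0]                              # arbitrary initial parameter
    snapshots = {mu0: fv_solve(mu0)}
    Xi = pod_fn(snapshots[mu0], n_add)            # first reduced basis
    for _ in range(max_iter):
        errs = {mu: np.max(error_estimate(Xi, mu)) for mu in D_train}
        mu_star = max(errs, key=errs.get)
        if errs[mu_star] <= eps_gr:               # all training errors below the tolerance
            break
        if mu_star not in snapshots:
            snapshots[mu_star] = fv_solve(mu_star)
        C = snapshots[mu_star]
        proj_err = C - Xi @ (Xi.T @ W @ C)        # projection error w.r.t. the current basis
        Xi = np.column_stack([Xi, pod_fn(proj_err, n_add)])
        # in exact arithmetic the new modes are W-orthonormal to the old ones;
        # rounding errors are removed by the Gram-Schmidt step of Remark 4.2
    return Xi
```

For the strong variant, error_estimate is simply replaced by the true error between the FV and the RB solutions.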

Remark 4.2:

(1) In Algorithm 4.2, we may also use the error itself instead of the estimator $\Delta^N(\mu)$. In this case the method is called the strong POD-greedy algorithm, otherwise the weak POD-greedy algorithm.

(2) The basis vectors are chosen orthonormally for stability reasons, cf. [Citation23, Section 5.7]. Analytically, the basis vectors obtained by the POD-greedy algorithm are orthonormal, cf. [Citation24]. However, in numerical realizations rounding errors may occur. For that reason, the $\xi$'s are re-orthonormalized after each greedy step by the Gram–Schmidt procedure [Citation25], which is stopped if the new information contributed by a basis vector becomes too small; a possible realization is sketched below. Other stabilization techniques can be found in [Citation26].◊
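A possible realization of this re-orthonormalization with respect to the $W$-inner product, including a drop tolerance for vectors that contribute too little new information (an illustrative sketch), is:

```python
import numpy as np

def w_gram_schmidt(Xi, W, drop_tol=1e-10):
    """Orthonormalize the columns of Xi w.r.t. <x,y>_W = x^T W y, cf. Remark 4.2."""
    basis = []
    for v in Xi.T:
        v = np.array(v, dtype=float)
        for q in basis:
            v = v - (q @ W @ v) * q               # remove components along the existing basis
        norm = np.sqrt(v @ W @ v)
        if norm > drop_tol:                       # discard vectors with too little new information
            basis.append(v / norm)
    return np.column_stack(basis)
```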

4.3. Computational reduction and accuracy of the RB method

One of the big advantages of the RB method is that the computations required for the RB solution can be decomposed into a computationally expensive offline part and a cheap online part. In the offline part, the RB basis is determined. Furthermore, all parameter-independent parts of the RB model are computed. In the online part, the RB solutions for every newly requested parameter are computed.

The accuracy of the RB solutions compared with the FV solutions on $\mathcal D_{\mathrm{train}}$ is governed by the tolerance $\varepsilon_{\mathrm{gr}}$. If the mapping $\mathcal D \ni \mu \mapsto c_j^h(\mu) \in \mathbb R^{\mathcal N}$ is sufficiently regular for $1 \le j \le K$ and $\mathcal D_{\mathrm{train}}$ is chosen appropriately, the error $\|c^h(\mu) - c^N(\mu)\|$ is close to $\varepsilon_{\mathrm{gr}}$ for all $\mu \in \mathcal D$. Let us mention that there are also techniques where the parameter set is discretized adaptively [Citation27], but we do not follow this approach here.

5. Error analysis

For non-linear problems, some mathematical effort has to be put into developing the error estimator. An error estimator for linear problems is examined in [Citation8]. In [Citation9,Citation10], a-posteriori error estimates for Galerkin approximations applied to non-linear parabolic equations are considered.

Suppose that $\|\cdot\|$ is a (vector) norm on $\mathbb R^{\mathcal N}$. The POD-greedy algorithm relies essentially on the availability of a sharp error estimator $\Delta_j^N(\mu)$ for the error between the FV and the RB solution, i.e.

\[
\big\|e_j^N(\mu)\big\| = \big\|c_j^h(\mu) - c_j^N(\mu)\big\| \le \Delta_j^N(\mu) \qquad\text{and}\qquad \eta_j^{\mathrm{eff}}(\mu) = \frac{\Delta_j^N(\mu)}{\|e_j^N(\mu)\|} \ge 1
\]

for any $\mu \in \mathcal D_{\mathrm{train}}$ and $j \in \{1,\dots,K\}$, where $\eta_j^{\mathrm{eff}}(\mu)$ is called the efficiency (of the a-posteriori error estimator). If $\eta_j^{\mathrm{eff}}(\mu) \gg 1$ holds, the RB error is overestimated too much, which leads to a large number of basis functions. Then, the POD-greedy algorithm and the offline phase are computationally too expensive (and thus also the online phase, because more basis vectors than needed are used). In the worst case, the tolerance for the RB solution is not even reached. In this paper, we improve the error estimator presented in [Citation12, Theorem 6.5].

5.1. Non-negative matrices and inverses

Recall the following definition; see, e.g. [Citation28, p. 54].

Definition 5.1:

A matrix $A = (a_{ij}) \in \mathbb R^{\mathcal N\times\mathcal N}$ is called an M-matrix if $A$ is invertible, its inverse $A^{-1}$ possesses only non-negative coefficients and $a_{ij} \le 0$ for $1 \le i,j \le \mathcal N$ with $i \ne j$.

A sufficient condition for a matrix to be an M-matrix is given in the next lemma [Citation28, pp. 55–56]. For the definition of a strictly diagonally dominant matrix we refer to [Citation28, p. 48].

Lemma 5.2:

Let $A = (a_{ij}) \in \mathbb R^{\mathcal N\times\mathcal N}$ be strictly diagonally dominant and assume that $a_{ij} \le 0$ for $1 \le i,j \le \mathcal N$ with $i \ne j$. Then, $A$ is an M-matrix.

We have introduced the tridiagonal matrix

\[
L_{\mathrm{im}}^h(\mu_1) = \begin{pmatrix}
h + \mu_1\tfrac{k}{h} & -\mu_1\tfrac{k}{h} & & & \\
-\mu_1\tfrac{k}{h} & h + 2\mu_1\tfrac{k}{h} & -\mu_1\tfrac{k}{h} & & \\
& \ddots & \ddots & \ddots & \\
& & -\mu_1\tfrac{k}{h} & h + 2\mu_1\tfrac{k}{h} & -\mu_1\tfrac{k}{h} \\
& & & -\mu_1\tfrac{k}{h} & h + \mu_1\tfrac{k}{h}
\end{pmatrix} \in \mathbb R^{\mathcal N\times\mathcal N}
\]

in Section 3. Since $\mu_1 \ge \mu_1^a > 0$ holds, the diagonal elements are positive and the off-diagonal elements are non-positive. Furthermore, $L_{\mathrm{im}}^h(\mu_1)$ is strictly diagonally dominant. By Lemma 5.2, the matrix $L_{\mathrm{im}}^h(\mu_1)$ is invertible for any $\mu \in \mathcal D$. Moreover, it follows analogously that

\[
L_{\mathrm{im}}^h(\mu_1) + \eta D^h \quad\text{with } \eta \ge 0 \text{ and } D^h = \operatorname{diag}(0,\dots,0,1) \in \mathbb R^{\mathcal N\times\mathcal N} \tag{12}
\]

is an M-matrix and therefore regular as well for all $\mu \in \mathcal D$.
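As a small numerical illustration of this property, the sign of the inverse can be checked directly for a moderate problem size (an illustrative check, not needed for the analysis):

```python
import numpy as np

def L_im(mu1, h, k, n):
    """Implicit FV operator h*I + mu1*S^h from Section 3."""
    S = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    S[0, 0] = S[-1, -1] = 1.0
    return h * np.eye(n) + mu1 * (k / h) * S

A = L_im(mu1=0.5, h=0.1, k=0.01, n=6)
A[-1, -1] += 0.3                                  # add eta * D^h to the last diagonal entry
assert np.all(np.linalg.inv(A) >= -1e-14)         # the inverse is (numerically) non-negative
```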

We have introduced $\|\cdot\|$ as a (vector) norm on $\mathbb R^{\mathcal N}$. The (associated) matrix norm is defined as

\[
\|A\| = \sup_{\|x\|=1} \|Ax\| = \sup_{x\ne 0} \frac{\|Ax\|}{\|x\|} \quad\text{for } A \in \mathbb R^{\mathcal N\times\mathcal N}.
\]

It follows that

\[
\|Ax\| \le \|A\|\,\|x\| \quad\text{for all } A \in \mathbb R^{\mathcal N\times\mathcal N} \text{ and } x \in \mathbb R^{\mathcal N}. \tag{13}
\]

Let $A \in \mathbb R^{\mathcal N\times\mathcal N}$ be regular. Using (13) we infer that

\[
\|x\| = \|A^{-1}Ax\| \le \|A^{-1}\|\,\|Ax\| \quad\text{for all } x \in \mathbb R^{\mathcal N},
\]

which gives

\[
\|Ax\| \ge \frac{\|x\|}{\|A^{-1}\|} \quad\text{for all } x \in \mathbb R^{\mathcal N}. \tag{14}
\]

The next result is known as perturbation lemma; see, e.g. [Citation28, p. 45].

Lemma 5.3

Let $A, B \in \mathbb R^{\mathcal N\times\mathcal N}$ be given. We suppose that $A$ is invertible with $\|A^{-1}\| \le \beta$ for a positive constant $\beta$. If $\|A - B\| \le \gamma$ and $\beta\gamma < 1$, then $B$ is also invertible and satisfies

\[
\|B^{-1}\| \le \frac{\beta}{1-\beta\gamma}.
\]

5.2. A-posteriori error analysis

For every μD and j ∈ {1,…,K} the error between the jth FV solution cjh(μ) and the RB solution cjN(μ) is defined by

\[
e_j^N(\mu) = c_j^h(\mu) - c_j^N(\mu) \in \mathbb R^{\mathcal N} \quad\text{for } j = 1,\dots,K
\]

and the residuals are given by

\[
r_j^N(\mu) = F_j^h\big(c_j^N(\mu);\mu\big) \in \mathbb R^{\mathcal N} \quad\text{for } j = 2,\dots,K. \tag{17}
\]

Since we solve (5) with a globalized Newton method, we cannot guarantee that our FV solution $c_j^h(\mu)$ fulfils $F_j^h(c_j^h(\mu);\mu) = 0$. Instead, we have

\[
\big\|F_j^h\big(c_j^h(\mu);\mu\big)\big\| \le \varepsilon_{\mathrm{New}} \quad\text{for } j \in \{2,\dots,K\} \text{ and } \mu \in \mathcal D \tag{18}
\]

with a user-specified Newton tolerance $0 < \varepsilon_{\mathrm{New}} \ll 1$. Analogously, for the RB solution $c_j^N(\mu)$, Equation (7) is in general not satisfied exactly, but it fulfils the inequality

\[
\big\|F_j^N\big(c_j^N(\mu);\mu\big)\big\| \le \varepsilon_{\mathrm{New}} \quad\text{for } j \in \{2,\dots,K\} \text{ and } \mu \in \mathcal D, \tag{19}
\]

where we utilize the same Newton tolerance εNew as for the FV system. Let us define the matrix

\[
G^h(\mu) = L_{\mathrm{im}}^h(\mu_1) + \eta D^h \quad\text{with } D^h \text{ as in (12) and } \eta = \frac{\mu_2\,k}{\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}}, \tag{20}
\]

where $k$ is the step size. If Assumption 4.1 holds and $\mu = (\mu_1,\mu_2) \in \mathcal D$, then $\eta > 0$ follows. Therefore, $G^h(\mu)$ is an M-matrix and invertible.

Theorem 5.4

(A-posteriori error estimate) Let $\mu \in \mathcal D$ and let Assumptions 3.1 and 4.1 hold. Suppose that $\{c_j^h(\mu)\}_{j=1}^K$ and $\{c_j^N(\mu)\}_{j=1}^K$ denote the (inexact) FV and RB solutions satisfying (18) and (19), respectively, with the Newton tolerance $\varepsilon_{\mathrm{New}} > 0$. Let the M-matrix $G^h(\mu)$ be given by (20). If the step size satisfies

\[
k \le \frac{\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}}{2\,\mu_2^b\,\big\|G^h(\mu)^{-1}\big\|\,\|D^h\|} \quad\text{for } j \in \{2,\dots,K\}, \tag{21}
\]

then

\[
\big\|e_j^N(\mu)\big\| \le 2\,\big\|G^h(\mu)^{-1}\big\| \Big( \varepsilon_{\mathrm{New}} + \big\|r_j^N(\mu)\big\| + \big\|L_{\mathrm{ex}}^h\, e_{j-1}^N(\mu)\big\| \Big) \tag{22}
\]

for $j \in \{2,\dots,K\}$ and $\mu \in \mathcal D$, where the residual $r_j^N(\mu)$ has been introduced in (17).

Proof. Let $\mu \in \mathcal D$ and $j \in \{2,\dots,K\}$. Using the FV scheme (5), the definition of $g^h$ in Section 3 and the identity

\[
\sqrt{c_{j,\mathcal N}^h(\mu)} - \sqrt{c_{j,\mathcal N}^N(\mu)} = \frac{c_{j,\mathcal N}^h(\mu) - c_{j,\mathcal N}^N(\mu)}{\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}} = \frac{e_{j,\mathcal N}^N(\mu)}{\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}}
\]

we obtain

\[
\begin{aligned}
F_j^h\big(c_j^h(\mu);\mu\big) - F_j^h\big(c_j^N(\mu);\mu\big)
&= L_{\mathrm{im}}^h(\mu_1)\,e_j^N(\mu) - L_{\mathrm{ex}}^h\,e_{j-1}^N(\mu) + \mu_2\,k\Big(\sqrt{c_{j,\mathcal N}^h(\mu)} - \sqrt{c_{j,\mathcal N}^N(\mu)}\Big)\begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix} \\
&= L_{\mathrm{im}}^h(\mu_1)\,e_j^N(\mu) - L_{\mathrm{ex}}^h\,e_{j-1}^N(\mu) + \frac{\mu_2\,k\,e_{j,\mathcal N}^N(\mu)}{\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}}\begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix}.
\end{aligned} \tag{23}
\]

Recall that $\mu_2 \ge \mu_2^a > 0$ and $c_{j,\mathcal N}^h(\mu),\, c_{j,\mathcal N}^N(\mu) \ge c_{\min} > 0$ hold for all $j \in \{2,\dots,K\}$ and all $\mu \in \mathcal D$. Therefore,

\[
\eta_j^h(\mu_2) := \frac{\mu_2}{\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}}
\]

is positive and the tridiagonal matrix $L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h$ is an M-matrix (compare (12)). From (23) and (14), it follows that

\[
\big\|F_j^h\big(c_j^h(\mu);\mu\big) - F_j^h\big(c_j^N(\mu);\mu\big) + L_{\mathrm{ex}}^h\,e_{j-1}^N(\mu)\big\| = \big\|\big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)\,e_j^N(\mu)\big\| \ge \frac{\big\|e_j^N(\mu)\big\|}{\big\|\big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)^{-1}\big\|} \tag{24}
\]

for all $j \in \{2,\dots,K\}$ and all $\mu \in \mathcal D$. Thus, we infer from (24), (17), (18) and the triangle inequality that

\[
\begin{aligned}
\big\|e_j^N(\mu)\big\| &\le \big\|\big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)^{-1}\big\|\,\big\|F_j^h\big(c_j^h(\mu);\mu\big) - F_j^h\big(c_j^N(\mu);\mu\big) + L_{\mathrm{ex}}^h\,e_{j-1}^N(\mu)\big\| \\
&\le \big\|\big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)^{-1}\big\|\Big(\varepsilon_{\mathrm{New}} + \big\|r_j^N(\mu)\big\| + \big\|L_{\mathrm{ex}}^h\,e_{j-1}^N(\mu)\big\|\Big).
\end{aligned} \tag{25}
\]

To derive an a-posteriori error estimate, we have to avoid the FV term $c_{j,\mathcal N}^h(\mu)$ in (25). Due to Assumption 3.1, we have

\[
0 \le \frac{\sqrt{c_{j,\mathcal N}^h(\mu)} - \sqrt{c_{\min}}}{\big(\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}\big)\big(\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}\big)}
= \frac{1}{\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}} - \frac{1}{\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}}
\le \frac{1}{\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}}
\]

for all $j \in \{2,\dots,K\}$ and all $\mu \in \mathcal D$. Therefore, if $k$ satisfies (21), it follows that

\[
\begin{aligned}
\Big\|G^h(\mu) - \big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)\Big\|
&= \mu_2\,k\left(\frac{1}{\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}} - \frac{1}{\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)}}\right)\big\|D^h\big\| \\
&\le \frac{\mu_2\,k\,\|D^h\|}{\sqrt{c_{\min}} + \sqrt{c_{j,\mathcal N}^N(\mu)}} \le \frac{\mu_2}{\mu_2^b}\cdot\frac{1}{2\,\big\|G^h(\mu)^{-1}\big\|} \le \frac{1}{2\,\big\|G^h(\mu)^{-1}\big\|}
\end{aligned}
\]

for $j \in \{2,\dots,K\}$ and $\mu \in \mathcal D$. Hence, we conclude from

\[
\big\|G^h(\mu)^{-1}\big\|\,\Big\|G^h(\mu) - \big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)\Big\| \le \frac12
\]

and Lemma 5.3 that

\[
\Big\|\big(L_{\mathrm{im}}^h(\mu_1) + k\,\eta_j^h(\mu_2)\,D^h\big)^{-1}\Big\| \le 2\,\big\|G^h(\mu)^{-1}\big\|
\]

for all $j \in \{2,\dots,K\}$ and all $\mu \in \mathcal D$. Inserting this bound into (25) yields the claim (22).□

Remark 5.5

Estimate (22) depends on the constant $c_{\min}$, which is usually unknown. Due to Theorem 5.4, we have $c_j^N(\mu) \to c_j^h(\mu)$ as $N \to \mathcal N$ for any $j \in \{1,\dots,K\}$. Hence, we assume that

\[
c_{j,\mathcal N}^h(\mu) \ge \frac{c_{j,\mathcal N}^N(\mu)}{4} > 0 \quad\text{for } N \text{ sufficiently large}.
\]

Then, we replace $c_{\min}$ by $c_{j,\mathcal N}^N(\mu)/4$ and proceed as in the proof of Theorem 5.4. From

\[
\sqrt{c_{j,\mathcal N}^h(\mu)} + \sqrt{c_{j,\mathcal N}^N(\mu)} \ge \frac{3}{2}\sqrt{c_{j,\mathcal N}^N(\mu)}
\]

we derive the following estimate for sufficiently large $N$: the matrix

\[
\hat G^h(\mu) = L_{\mathrm{im}}^h(\mu_1) + \hat\eta\,D^h \in \mathbb R^{\mathcal N\times\mathcal N} \quad\text{with } D^h \text{ as in (12) and } \hat\eta = \frac{2\,\mu_2\,k}{3\sqrt{c_{j,\mathcal N}^N(\mu)}}
\]

is an M-matrix. If the step size satisfies

\[
k \le \frac{3\sqrt{c_{j,\mathcal N}^N(\mu)}}{4\,\mu_2^b\,\big\|\hat G^h(\mu)^{-1}\big\|\,\|D^h\|} \quad\text{for } j \in \{2,\dots,K\},
\]

then

\[
\big\|e_j^N(\mu)\big\| \le 2\,\big\|\hat G^h(\mu)^{-1}\big\|\Big(\varepsilon_{\mathrm{New}} + \big\|r_j^N(\mu)\big\| + \big\|L_{\mathrm{ex}}^h\,e_{j-1}^N(\mu)\big\|\Big) \tag{26}
\]

for all $j \in \{2,\dots,K\}$ and all $\mu \in \mathcal D$.◊

Remark 5.6

In Section 7.1, we estimate the error in the maximum norm. Hence, we have to compute $\|G^h(\mu)^{-1}\|_\infty$. For tridiagonal matrices like the one considered here, efficient algorithms exist; we use one by Hargreaves, cf. [Citation29, Algorithm 2.2]. In our numerical example, the computation of the norm $\|G^h(\mu)^{-1}\|_\infty$ for a $10^2 \times 10^2$ matrix takes less than 0.01 s, for a $10^3 \times 10^3$ matrix less than 0.1 s. In two or three space dimensions, the algorithm of Hargreaves cannot be applied. However, the considered matrices are still sparse and, for the same geometry, still structured. There exist algorithms that compute the norm of the inverse without forming the inverse itself; we have not tested them yet. In the worst case, one can bound the row-sum norm of the inverse by means of the spectral norm. However, such a worst-case estimate may lead to a large efficiency $\eta_j^{\mathrm{eff}}$, i.e. to a strong overestimation of the error.◊
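To indicate how the estimate is used in practice, the following sketch evaluates (22) recursively in the maximum norm, replacing the unknown term $\|L_{\mathrm{ex}}^h e_{j-1}^N(\mu)\|$ by $\|L_{\mathrm{ex}}^h\|\,\Delta_{j-1}^N(\mu)$ and forming $G^h(\mu)^{-1}$ explicitly, which is affordable in the one-dimensional setting (the algorithm of [Citation29] avoids this). All names and interfaces are illustrative.

```python
import numpy as np

def error_bounds(res_norms, G_list, L_ex, eps_new, e1_norm=0.0):
    """Recursive evaluation of the a-posteriori estimate (22) in the infinity norm.

    res_norms[j-2] : ||r_j^N(mu)||_inf for j = 2,...,K
    G_list[j-2]    : the M-matrix G^h(mu) of (20) for time step j
    e1_norm        : ||e_1^N(mu)||_inf, computable from the projected initial condition
    """
    L_ex_norm = np.linalg.norm(L_ex, np.inf)
    bounds = [e1_norm]
    for r, G in zip(res_norms, G_list):
        G_inv_norm = np.linalg.norm(np.linalg.inv(G), np.inf)   # ||G^h(mu)^{-1}||_inf
        bounds.append(2.0 * G_inv_norm * (eps_new + r + L_ex_norm * bounds[-1]))
    return bounds[1:]                             # Delta_j^N(mu) for j = 2,...,K
```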

6. Parameter estimation problem

We suppose that Assumption 2.2 holds. The non-linear model (1) contains a parameter $\mu \in \mathcal D$, which has to be identified in order to calibrate the model. However, measurements for the concentration $c(\mu)$ are not directly available. Instead, the state of charge

\[
\mathrm{SoC}(t;\mu) = \frac{1}{c_{\max}} \int_\Omega c(t,x;\mu)\,\mathrm dx \quad\text{for } t \in [0,T] \tag{27}
\]

can be measured, where $c(\cdot\,,\cdot\,;\mu) \in W(0,T) \cap L^\infty(Q)$ is the unique weak solution to (1) for given $\mu \in \mathcal D$. Suppose that $\mathrm{SoC}_d \in L^\infty(0,T)$ is a given desired state of charge profile. Then, we consider the least squares objective

\[
\hat J : \mathcal D \to \mathbb R, \qquad \hat J(\mu) = \frac12 \int_0^T \big|\mathrm{SoC}(t;\mu) - \mathrm{SoC}_d(t)\big|^2\,\mathrm dt \quad\text{for } \mu \in \mathcal D,
\]

where $\mathrm{SoC}(t;\mu)$ is given by (27). Now, the parameter estimation problem can be expressed as

\[
\min \hat J(\mu) \quad\text{subject to } \mu \in \mathcal D. \tag{28}
\]

Problem (28) is a PDE-constrained, non-convex optimization problem so that many local optimal solutions may occur. Moreover, it is not a-priori clear that (28) admits an optimal solution at all. We refer the reader to [Citation30,Citation31] for more details on this subject.

Here, we follow the approach ‘first discretize, then optimize’, cf. e.g. [Citation30]. Suppose that Assumption 3.1 holds. We utilize the already defined temporal grid $t_j = (j-1)k$, $1 \le j \le K$, and replace the temporal integral by the trapezoidal rule:

\[
\hat J(\mu) \approx \frac{k}{4}\big|\mathrm{SoC}(t_1;\mu) - \mathrm{SoC}_d(t_1)\big|^2 + \frac{k}{2}\sum_{j=2}^{K-1}\big|\mathrm{SoC}(t_j;\mu) - \mathrm{SoC}_d(t_j)\big|^2 + \frac{k}{4}\big|\mathrm{SoC}(t_K;\mu) - \mathrm{SoC}_d(t_K)\big|^2 \quad\text{for } \mu \in \mathcal D.
\]

Next, we replace the state of charge by its FV approximation and evaluate the spatial integral by the midpoint rule:

\[
\mathrm{SoC}(t_j;\mu) \approx \mathrm{SoC}_j^h(\mu) = \frac{h}{c_{\max}} \sum_{i=1}^{\mathcal N} c_{j,i}^h(\mu) \quad\text{for } j \in \{1,\dots,K\},
\]

where $c_{j,i}^h(\mu)$ denotes the unique FV approximation of the concentration at $(t_j,x_i) \in \bar Q$ for $\mu \in \mathcal D$. Finally, we define the FV reduced objective as

\[
\hat J_k^h(\mu) = \frac{k}{4}\big|\mathrm{SoC}_1^h(\mu) - \mathrm{SoC}_d(t_1)\big|^2 + \frac{k}{2}\sum_{j=2}^{K-1}\big|\mathrm{SoC}_j^h(\mu) - \mathrm{SoC}_d(t_j)\big|^2 + \frac{k}{4}\big|\mathrm{SoC}_K^h(\mu) - \mathrm{SoC}_d(t_K)\big|^2.
\]

Instead of (28), we consider the FV-based parameter estimation problem

\[
\min \hat J_k^h(\mu) \quad\text{subject to } \mu \in \mathcal D. \tag{29}
\]

Problem (29) depends on the dimension $\mathcal N$, which may be large. Therefore, we are interested in utilizing the RB method introduced in Section 4 in order to solve (29) approximately, but fast. Let us assume that Assumption 4.1 is valid. We replace the FV approximation $\mathrm{SoC}_j^h(\mu)$ of the state of charge by the following RB discretization:

\[
\mathrm{SoC}_j^N(\mu) = \frac{h}{c_{\max}} \sum_{i=1}^{\mathcal N} c_{j,i}^N(\mu) \quad\text{for } j \in \{1,\dots,K\},
\]

where $c_{j,i}^N(\mu)$ denotes the unique RB approximation of the concentration at $(t_j,x_i)$ for $\mu \in \mathcal D$. Then, the RB-based parameter identification problem is expressed as

\[
\min \hat J_k^N(\mu) \quad\text{subject to } \mu \in \mathcal D \tag{30}
\]

with

\[
\hat J_k^N(\mu) = \frac{k}{4}\big|\mathrm{SoC}_1^N(\mu) - \mathrm{SoC}_d(t_1)\big|^2 + \frac{k}{2}\sum_{j=2}^{K-1}\big|\mathrm{SoC}_j^N(\mu) - \mathrm{SoC}_d(t_j)\big|^2 + \frac{k}{4}\big|\mathrm{SoC}_K^N(\mu) - \mathrm{SoC}_d(t_K)\big|^2.
\]

In Section 7.2, we present numerical experiments for the parameter estimation problem.
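As an illustration of this ‘first discretize, then optimize’ setting, the RB-based problem (30) could be set up as sketched below; the paper itself uses Matlab's fmincon with SQP (see Section 7.2), whereas the sketch uses SciPy's SLSQP, and rb_solve, soc_desired, h, k and c_max are placeholders. The parameter bounds are those of the numerical experiments in Section 7.

```python
import numpy as np
from scipy.optimize import minimize

def soc(C, h, c_max):
    """Midpoint-rule state of charge: SoC_j = (h / c_max) * sum_i c_{j,i}."""
    return (h / c_max) * C.sum(axis=0)            # C has shape (N_fv, K), one column per time step

def objective(mu, rb_solve, soc_desired, h, k, c_max):
    """Trapezoidal-rule least-squares objective J_k^N(mu) of (30)."""
    s = soc(rb_solve(mu), h, c_max)
    w = np.full_like(s, k)
    w[0] = w[-1] = k / 2.0                        # trapezoidal weights
    return 0.5 * np.sum(w * (s - soc_desired) ** 2)

# illustrative call with the parameter bounds of D = [0.05, 5] x [0.001, 0.1]:
# result = minimize(objective, x0=np.array([1.0, 0.01]),
#                   args=(rb_solve, soc_desired, h, k, c_max),
#                   bounds=[(0.05, 5.0), (0.001, 0.1)], method="SLSQP",
#                   options={"maxiter": 1000, "ftol": 1e-10})
```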

7. Numerical experiments

In this section, we first set up an RB model which we use for the parameter estimation with respect to a desired SoC. All computations are done on a laptop: Linux Mint 17 Qiana, 64-bit; Intel(R) Core(TM) i5-4200U CPU 1.60 GHz; 8 GB RAM; Matlab 8.1.0.604 (R2013a).

7.1. Setting up the RB model

To check the suitability of the error estimators developed in Section 5, we perform four different greedy runs. In all runs, we consider the (estimated) error in the maximum norm and add one POD mode in every greedy step. The considered parameter set is $\mathcal D = [0.05, 5] \times [10^{-3}, 10^{-1}]$. The interval $[0.05, 5]$ as well as the interval $[10^{-3}, 10^{-1}]$ is discretized equidistantly with five nodes including the boundaries. Hence, our training set $\mathcal D_{\mathrm{train}}$ has cardinality 25. The remaining parameters are listed in Table 1.

Table 1. Parameters for the RB model.

The accuracy of the damped Newton method is $\varepsilon_{\mathrm{New}} = 10^{-10}$ for the FV as well as for the RB solution. The greedy algorithm stops if the error is smaller than $\varepsilon_{\mathrm{gr}} = 10^{-6}$ or if the estimated error is smaller than $\varepsilon_{\mathrm{gr}} = 10^{-4}$, because we expect for this problem that the error estimators overestimate the error by about two orders of magnitude, cf. [Citation12].

  • Strong greedy: for the estimation of the error between the RB and the FV solution, the error itself is used.

  • Weak greedy: the error is estimated by the estimator of Theorem 5.4 (error Est. 1).

  • Weak greedy: the error is estimated by the estimator of Remark 5.5, Equation (26) (error Est. 2).

  • Weak greedy: the error is estimated by the residual. We stop the greedy algorithm as soon as the residual becomes smaller than $0.1 \cdot 10^{-6}$.

In Figure 1, we plot the decay of the error and its estimators for the four different runs. We can hardly see a difference between the plots. As expected, the two estimators overestimate the error by about two orders of magnitude. The values of estimator 1 (Equation (22)) and estimator 2 (Equation (26)) seem to agree.

Figure 1. Decay of the error and its estimates for the four runs: run 1 (up left), run 2 (up right), run 3 (bottom left) and run 4 (bottom right).


The computational time for the FV solutions of the complete training set is $3.11 \times 10^{1}$ s. The different computational times, the number of basis vectors, the value of the estimated error and the error itself are listed in Table 2.

Table 2. Results for the four different runs: computational offline time; computational time for the computation of a FV solution; computational time for the computation of a RB solution, speed-up, number of basis vectors, value of the estimator after the last POD-greedy step and value of the error after the last POD-greedy step.

7.2. Parameter estimation

We use the RB model generated in the second run of the previous subsection for our optimization process and compare the results with those obtained by using the FV model. We generate the desired SoC by inserting $\mu_d$ into our FV model. We use the Matlab routine fmincon with sequential quadratic programming (SQP), cf. e.g. [Citation32]. We do not supply a user-defined gradient but use the Matlab-internal gradient approximation. For the remaining settings of fmincon, we set the maximum number of iterations MaxIter = 1000, the tolerance for the function value TolFun = $10^{-10}$, the termination tolerance for the step size TolX = $10^{-15}$ and the maximum number of function evaluations MaxFunEvals = 1000. The lower bound is given by $\mu^a = (0.05, 0.001)$ and the upper bound by $\mu^b = (5, 0.1)$. We perform two numerical tests. In the first one, we require comparatively small values for $\mu_1$ and $\mu_2$ while the initial values are comparatively big; in the second test, the roles are reversed. The results are listed in Tables 3 and 4.

Table 3. Parameter estimation – test 1: required parameter μd, initial parameter μinit, estimated parameter μcomp, stopping criteria, computational time for the optimization process, number of iterations, number of function evaluations, residual.

Table 4. Parameter estimation – test 2: required parameter μd, initial parameter μinit, estimated parameter μcomp, stopping criteria, computational time for the optimization process, number of iterations, number of function evaluations, residual.

The optimization results using the FV model and the RB model are the same. Hence, our RB model is suitable for the parameter estimation procedure. The speed-up factor in this example is around 8, which is not as impressive as in other examples because setting up the optimization problem costs much time in comparison with the computation of one RB solution.

The numerical results lead us to the assumption that the diffusion coefficient $\mu_1$ has a negligible impact on the SoC for the considered parameter range, whereas the parameter $\mu_2$ has a bigger impact. To confirm this assumption, we plot the SoC in dependence on $\mu_1$ and $\mu_2$, cf. Figure 2.

Figure 2. SoC in dependency on μ1 (left) and μ2 (right).


8. Conclusion

In this paper, we developed an efficient error estimator that allows us to make use of the RB method to approximate the solution of the equation describing the mass transport in a lithium-ion battery. Our numerical tests show that it is appropriate in the one-dimensional case. With the developed error estimator, the RB method drastically reduces the computational time needed to estimate our parameters for a required SoC. The results are as accurate and reliable as the ones obtained by using the FV model. For the analysed problem, our numerical results indicate that the diffusion coefficient $\mu_1$ has a negligible influence on the SoC for the considered parameter set.

Acknowledgement

The authors gratefully acknowledge partial financial support by the project Reduced Basis Methods for Model Reduction and Sensitivity Analysis of Complex Partial Differential Equations with Applications to Lithium-Ion Batteries funded by the company Adam Opel AG.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This work was supported by the Adam Opel AG.

References

  • A. Latz, J. Zausch, and O. Iliev, Modeling of species and charge transport in Li–ion batteries based on non-equilibrium thermodynamics, Technical report, Vol. 190, Fraunhofer ITWM, Kaiserslautern, 2010.
  • A. Latz and J. Zausch, Thermodynamic consistent transport theory of Li-ion batteries, Technical report, Vol. 195, Fraunhofer ITWM, Kaiserslautern, 2010.
  • T. Fuller, M. Doyle, and J. Newman, Modeling of galvanostatic charge and discharge of the lithium ion battery/polymer/insertion cell, J. Electrochem. Soc. 140 (1993), pp. 1526–1533.
  • J. Wu, J. Xu, and H. Zou, On the well-posedness of a mathematical model for Lithium-ion battery systems, Methods Appl. Anal. 13 (2006), pp. 275–298. doi:10.4310/MAA.2006.v13.n3.a4
  • T. Barth and M. Ohlberger, Finite volume methods: foundation and analysis, in Encyclopedia of Computational Mechanics, E. Stein, R. de Borst, and T.J.R. Hughes, eds., John Wiley & Sons, 2004, pp. 439–474. Available at http://dx.doi.org/10.1002/0470091355.ecm010.
  • A.T. Patera and G. Rozza, Reduced Basis Approximation and A Posteriori Error Estimation for Parametrized Partial Differential Equations, MIT Pappalardo Grad. Monogr. Mech. Eng, 2007. Available at http://augustine.mit.edu/methodology/bookParts/Patera_Rozza_bookPartI_BV1.pdf.
  • J.S. Hesthaven, G. Rozza, and B. Stamm, Certified Reduced Basis Methods for Parametrized Partial Differential Equations, Springer, Cham, 2016.
  • B. Haasdonk and M. Ohlberger, Reduced basis method for finite volume approximations of parametrized linear evolution equations, ESAIM: Math. Model. Numer. Anal. 42 (2) (2008), pp. 277–302. doi:10.1051/m2an:2008001
  • M.A. Grepl, Reduced-Basis Approximation and A Posteriori Error Estimation for Parabolic Partial Differential Equations, Ph.D. thesis, Massachusetts Institute of Technology, 2005.
  • M.A. Grepl, Y. Maday, N.C. Nguyen, and A.T. Patera, Efficient reduced-basis treatment of nonaffine nonlinear partial differential equations, ESAIM: Math. Model. Numer. Anal. 41 (3) (2007), pp. 575–605. doi:10.1051/m2an:2007031
  • S. Volkwein and A. Wesche, The reduced basis method applied to transport equations of a lithium-ion battery, Int. J. Comput. Math. Electrical Electron. Eng. 32 (2013), pp. 1760–1772. doi:10.1108/COMPEL-04-2013-0115
  • A. Wesche, Reduced Basis Methods for Model Reduction and Sensitivity Analysis of Complex Partial Differential Equations with Applications to Lithium-Ion Batteries, Ph.D. thesis., University of Konstanz, 2016.
  • L. Cai and R.E. White, Reduction of model order based on proper orthogonal decomposition for lithium-ion battery simulations, J. Electrochem. Soc. 156 (2009), pp. A154–A161. doi:10.1149/1.3049347
  • O. Lass and S. Volkwein, POD Galerkin schemes for nonlinear elliptic-parabolic systems, SIAM J. Scientific Comput. 35 (2013), pp. A1271–A1298. doi:10.1137/110848414
  • O. Lass and S. Volkwein, Parameter identification for nonlinear elliptic-parabolic systems with application in lithium-ion battery modeling, Comput. Optim. Appl. 62 (2015), pp. 217–239. doi:10.1007/s10589-015-9734-8
  • L.C. Evans, Partial Differential Equations, American Math. Society, Providence, Rhode Island, 2008.
  • P. Atkins and J. de Paula, Atkins’ Physical Chemistry, 10th ed., Oxford University Press, Oxford, 2014.
  • J.P. Raymond and H. Zidani, Hamiltonian Pontryagin’s principles for control problems governed by semilinear parabolic equations, Appl. Math. Optim. 39 (1999), pp. 143–177. doi:10.1007/s002459900102
  • P. Deuflhard, Newton Methods for Nonlinear Problems: affine Invariance and Adaptive Algorithms, 2nd ed., Springer, Heidelberg, 2004.
  • P. Holmes, J.L. Lumley, G. Berkooz, and C.W. Rowley, Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 2nd ed., Cambridge Monographs on Mechanics, Cambridge University Press, Cambridge, 2012.
  • K. Kunisch and S. Volkwein, Galerkin proper orthogonal decomposition methods for parabolic problems, Numerische Mathematik 90 (2001), pp. 117–148. doi:10.1007/s002110100282
  • S. Volkwein, Model Reduction Using Proper Orthogonal Decomposition, lecture notes, Universität Konstanz, 2012. Available at http://www.math.uni-konstanz.de/numerik/personen/volkwein/teaching/POD-Vorlesung.pdf.
  • G. Rozza, Shape Design by Optimal Flow Control and Reduced Basis Techniques, Ph.D. thesis, EPFL, Lausanne, 2005.
  • B. Haasdonk, Convergence Rates of the POD-Greedy method, ESAIM: Math. Model. Numer. Anal. 47 (2013), pp. 859–873. doi:10.1051/m2an/2012045
  • S. Lang, Linear Algebra, 3rd ed., Springer, New York, 2004.
  • L.N. Trefethen and D. Bau III, Numerical Linear Algebra, SIAM, Philadelphia, PA, 1997.
  • B. Haasdonk, M. Dihlmann, and M. Ohlberger, A training set and multiple bases generation approach for parameterized model reduction based on adaptive grids in parameter space, Math. Comput. Model. Dyn. Syst 17 (4) (2011), pp. 423–442. doi:10.1080/13873954.2011.547674
  • J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
  • G.I. Hargreaves, Computing the condition number of tridiagonal and diagonal-plus-semiseparable matrices in linear time, SIAM. J. Matrix Anal. & Appl. 27 (2005), pp. 801–820.
  • M. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich, Optimization with PDE Constraints, Mathematical Modelling: theory and Applications, Springer, Netherlands, 2009.
  • F. Tröltzsch, Optimal Control of Partial Differential Equations: theory, Methods and Applications, Vol. 116, Graduate Studies in Mathematics, American Mathematical Society, Providence, 2010.
  • J. Nocedal and S.J. Wright, Numerical Optimization, 2nd ed., Springer, New York, 2006.
