Optimization
A Journal of Mathematical Programming and Operations Research
Volume 72, 2023 - Issue 8

Lipschitz upper semicontinuity in linear optimization via local directional convexity

Pages 2091-2108 | Received 22 Jul 2021, Accepted 18 Feb 2022, Published online: 06 Apr 2022

Abstract

This work is focussed on computing the Lipschitz upper semicontinuity modulus of the argmin mapping for canonically perturbed linear programs. The immediate antecedent can be traced back to Camacho J et al. [2022. From calmness to Hoffman constants for linear semi-infinite inequality systems. Available from: https://arxiv.org/pdf/2107.10000v2.pdf], devoted to the feasible set mapping. The modulus under consideration is expressed in terms of finitely many calmness moduli, previously studied in the literature. Despite the parallelism in the results, the methodology followed in the current paper differs notably from Camacho J et al. [2022], since the graph of the argmin mapping is not convex; specifically, a new technique based on a certain type of local directional convexity is developed.

Mathematics Subject Classifications:

1. Introduction and motivation

The main goal of the present paper is to compute the upper semicontinuity modulus of the optimal set (argmin) mapping associated with the parameterized linear programming (LP, in brief) problem given by
(1) $\pi:\ \text{minimize } c'x\ \text{ subject to } a_t'x\le b_t,\ t\in T:=\{1,2,\ldots,m\},$
where $x\in\mathbb{R}^n$ is the decision variable, regarded as a column vector, the prime stands for transposition, $a_t\in\mathbb{R}^n$ is fixed for each $t\in T$, and the pair $(c,b)\in\mathbb{R}^n\times\mathbb{R}^m$, with $b=(b_t)_{t\in T}\in\mathbb{R}^m$, is the parameter to be perturbed around a nominal one $(\bar c,\bar b)\in\mathbb{R}^n\times\mathbb{R}^m$. In this way we are dealing with the so-called canonical perturbations (tilt perturbations of the objective function together with the right-hand side -- RHS -- of the constraints). We consider the feasible set and the optimal set mappings $\mathcal{F}:\mathbb{R}^m\rightrightarrows\mathbb{R}^n$ and $\mathcal{F}^{op}:\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^n$ defined as
(2) $\mathcal{F}(b):=\{x\in\mathbb{R}^n : a_t'x\le b_t \text{ for all } t\in T\},$
(3) $\mathcal{F}^{op}(\pi):=\operatorname{argmin}\{c'x : x\in\mathcal{F}(b)\}.$
From now on we identify each problem $\pi$ with the corresponding parameter $(c,b)$; accordingly, $\bar\pi\equiv(\bar c,\bar b)$ denotes the nominal problem. The space of variables, $\mathbb{R}^n$, is endowed with an arbitrary norm $\|\cdot\|$, whose dual norm is denoted by $\|\cdot\|_*$, i.e. $\|u\|_*=\max_{\|x\|\le 1}|u'x|$. The parameter space $\mathbb{R}^n\times\mathbb{R}^m$ is endowed with the norm $\|(c,b)\|:=\max\{\|c\|_*,\|b\|_\infty\}$ (since $c$ is identified with the linear functional $x\mapsto c'x$), where $\|b\|_\infty:=\max_{t\in T}|b_t|$.

The current paper is mainly oriented to the computation of the Lipschitz upper semicontinuity modulus of $\mathcal{F}^{op}$ at the nominal parameter $\bar\pi$, denoted by $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$ following [Citation1]; see Section 2 for the formal definitions. At this moment let us comment that $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$ provides a semi-local measure of the stability (in fact, a rate of deviation) of the optimal set around the nominal problem $\bar\pi$. The term 'semi-local' refers to the fact that only parameters $\pi$ around $\bar\pi$ are considered, while all elements of $\mathcal{F}^{op}(\pi)$ are taken into account. A point-based formula (only depending on the nominal data $(\bar c,\bar b)$) for $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$ is provided in Theorem 4.2 (see also Theorem 4.1) in terms of finitely many calmness moduli (of a local nature) of certain feasible set mappings coming from adding new constraints to the system in (1). These calmness moduli can be computed through the point-based formula given in [Citation2, Theorem 4] and recalled in [Citation3, Theorem 3]. The reader is referred to monographs on variational analysis such as [Citation4–7] for details about calmness and other Lipschitz-type properties and to [Citation8] for other stability criteria in linear optimization.

The theory of parametric linear optimization goes back to the early 1950s (see, e.g. [Citation9, Citation10]). Some years later a systematic development of stability theory in LP with canonical perturbations emerged. One direction of research was the analysis of semicontinuity and Lipschitz continuity properties based on approaches from variational analysis like Berge's theory or Hoffman's error bounds. In the current parametric context both $\mathcal{F}$ and $\mathcal{F}^{op}$ are polyhedral multifunctions; i.e. their graphs, $\mathrm{gph}\,\mathcal{F}$ and $\mathrm{gph}\,\mathcal{F}^{op}$, are finite unions of convex polyhedra. In fact $\mathrm{gph}\,\mathcal{F}$ is a convex polyhedral cone, while the union of polyhedra constituting $\mathrm{gph}\,\mathcal{F}^{op}$ comes from considering the different choices of subsets of active indices involved in the Karush–Kuhn–Tucker (KKT) optimality conditions. Hence, as a consequence of a classical result by Robinson [Citation11], both $\mathcal{F}$ and $\mathcal{F}^{op}$ are Lipschitz upper semicontinuous (see Section 2.2 for the formal definition) at any $\bar b$ and $\bar\pi$, respectively, provided that $\mathcal{F}(\bar b)$ and $\mathcal{F}^{op}(\bar\pi)$ are nonempty. The current paper borrows the terminology from [Citation5] or [Citation1], although the Lipschitz upper semicontinuity property, introduced in [Citation11] as upper Lipschitz continuity, has also been popularized under the name of outer Lipschitz continuity (see the reference book [Citation4]). In the context of RHS perturbations (where only $b$ is perturbed and $c$ remains fixed) a well-known result establishes that both $\mathcal{F}$ and $\mathcal{F}^{op}$ are Lipschitz continuous relative to their domains (see, e.g. [Citation12, p. 232], [Citation13, Chapter IX (Sec. 7)], [Citation4, Chapter 3C]). At this moment we point out that the case of 'fully perturbed' problems, when all data ($c$, $a_t$ and $b_t$, $t\in T$) are perturbed, entails notable differences regarding the Lipschitz upper semicontinuity, as emphasized in Remark 2.3.

The immediate antecedent to this work can be found in [Citation3, Theorems 4 and 6], where the Lipschitz upper semicontinuity modulus of $\mathcal{F}$ at $\bar b$, $\mathrm{Lipusc}\,\mathcal{F}(\bar b)$, is analysed.

Remark 1.1

There exists a striking resemblance between the formula of $\mathrm{Lipusc}\,\mathcal{F}(\bar b)$ obtained from [Citation3, Theorems 4 and 6] and the new one, established in Theorem 4.2, of $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$. Just to show the similar appearance, here we gather both formulae:
$\mathrm{Lipusc}\,\mathcal{F}(\bar b)=\max_{x\in E(\bar b)}\mathrm{clm}\,\mathcal{F}(\bar b,x),\qquad \mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)=\max_{x\in E^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,x),$
where $\mathrm{clm}\,\mathcal{F}(\bar b,x)$ denotes the calmness modulus (see again Section 2 for the definition) of $\mathcal{F}$ at $(\bar b,x)\in\mathrm{gph}\,\mathcal{F}$ and $\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,x)$ the corresponding calmness modulus of $\mathcal{F}^{op}$ at $(\bar\pi,x)\in\mathrm{gph}\,\mathcal{F}^{op}$, and $E(\bar b)$ and $E^{op}(\bar\pi)$ are two nonempty finite sets of extreme points of certain subsets of $\mathcal{F}(\bar b)$ and $\mathcal{F}^{op}(\bar\pi)$ introduced in (11) and (16), respectively. Despite these formal similarities between the two results, let us emphasize that they follow different methodologies, mainly due to the fact that $\mathrm{gph}\,\mathcal{F}$ is convex (hence the last part of Theorem 2.1 below applies), while $\mathrm{gph}\,\mathcal{F}^{op}$ is not, even when fixing $\bar c$ and allowing only for RHS perturbations. Indeed, as commented above, $\mathrm{gph}\,\mathcal{F}^{op}$ is a finite union of convex polyhedra.

To overcome the drawback coming from the lack of convexity of $\mathrm{gph}\,\mathcal{F}^{op}$, the paper appeals to a weaker form of this property. Specifically, Theorem 3.1 shows that a certain local directional convexity property of the graph of $\mathcal{F}^{op}$ holds when the parameter $c$ remains fixed ($c=\bar c$) and $b$ is perturbed in the form $\bar b+\mu d$ for a fixed direction $d\in\mathbb{R}^m$ and small $\mu\ge 0$. In order to illustrate this idea, let us consider the following example, where $\mathrm{conv}\,X$ stands for the convex hull of $X\subseteq\mathbb{R}^n$.

Example 1.1

Let us consider the problem in $\mathbb{R}^2$
(4) $\text{minimize } c_1x_1+c_2x_2\ \text{ subject to } -x_1\le b_1,\ -x_2\le b_2,\ -x_1-x_2\le b_3,\ \tfrac12x_1-\tfrac12x_2\le b_4,$
with $c=\bar c=(1,1)'$ and $b=(0,0,0,1)'+\mu(1,1,1,-1)'$, $\mu\in\mathbb{R}$; in other words, we are perturbing $\bar b=(0,0,0,1)'$ in the directions $\pm d$ with $d=(1,1,1,-1)'$.

It can be easily checked that (see Figure 1)
$\mathcal{F}^{op}(\bar c,\bar b+\mu d)=\begin{cases}\{(-\mu,-\mu)'\} & \text{if }\mu\le 0,\\ \mathrm{conv}\{(-\mu,0)',(0,-\mu)'\} & \text{if } 0\le\mu\le\frac23,\\ \mathrm{conv}\{(-\mu,0)',(1-\frac32\mu,-1+\frac12\mu)'\} & \text{if }\frac23\le\mu\le 2,\\ \{(-\mu,-2+\mu)'\} & \text{if }\mu\ge 2.\end{cases}$
In particular $((\bar c,\bar b-d),(1,1)')$ and $((\bar c,\bar b+d),(-1,0)')$ belong to $\mathrm{gph}\,\mathcal{F}^{op}$, while the middle point $((\bar c,\bar b),(0,\tfrac12)')$ does not.
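The optimal sets above can be cross-checked numerically. The sketch below is a plain verification, not part of the paper's argument; it assumes the sign reconstruction of (4) used here (constraints $-x_1\le b_1$, $-x_2\le b_2$, $-x_1-x_2\le b_3$, $\frac12x_1-\frac12x_2\le b_4$ and $d=(1,1,1,-1)'$) and relies on scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Constraint matrix A x <= b for problem (4), signs as reconstructed here:
# -x1 <= b1, -x2 <= b2, -x1 - x2 <= b3, (1/2)x1 - (1/2)x2 <= b4.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0], [0.5, -0.5]])
c = np.array([1.0, 1.0])                    # c = c_bar = (1, 1)'
b_bar = np.array([0.0, 0.0, 0.0, 1.0])
d = np.array([1.0, 1.0, 1.0, -1.0])

def argmin_value(mu):
    """One optimal point and the optimal value of pi = (c_bar, b_bar + mu*d)."""
    res = linprog(c, A_ub=A, b_ub=b_bar + mu * d, bounds=[(None, None)] * 2)
    return res.x, res.fun

# mu = -1 (case mu <= 0): unique optimal point (-mu, -mu)' = (1, 1)'.
x, _ = argmin_value(-1.0)
assert np.allclose(x, [1.0, 1.0], atol=1e-6)

# mu = 1 (case 2/3 <= mu <= 2): the segment conv{(-1, 0)', (-1/2, -1/2)'};
# every point of it is feasible and attains the optimal value.
_, v = argmin_value(1.0)
for lam in np.linspace(0.0, 1.0, 5):
    x_lam = (1 - lam) * np.array([-1.0, 0.0]) + lam * np.array([-0.5, -0.5])
    assert np.all(A @ x_lam <= b_bar + d + 1e-9)   # feasible
    assert abs(c @ x_lam - v) <= 1e-7              # optimal
```

A quick check like this also exhibits the failure of midpoint convexity across $\mu=-1$ and $\mu=1$ stated in the last sentence: the midpoint $(0,\frac12)'$ of $(1,1)'$ and $(-1,0)'$ is not optimal at $\bar b$.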

Figure 1. Local directional convexity in Example 1.1.


Now we summarize the structure of the paper. Section 2 contains some notation and preliminary results used later on. Section 3 formalizes the announced local directional convexity of gphFop. The main result of this section, Theorem 3.1, is applied in Section 4 to establish Lemma 4.1, which constitutes a key step in the process of computing LipuscFop(π¯), leading to Theorem 4.2. Finally, Section 5 includes some conclusions and perspectives. In particular, the so-called Hoffman stability modulus is recalled, which is known to coincide with the Lipschitz upper semicontinuity modulus under the convexity of the graph. This is the case of F, but no longer of Fop (see Example 5.1). The reader is referred to [Citation3, Citation14–18] for extra details about Hoffman constants from a global (instead of semi-local) approach.

2. Preliminaries

This section introduces some necessary notation and results which are used later on. Given $X\subseteq\mathbb{R}^p$, $p\in\mathbb{N}$, we use the standard notation $\mathrm{cone}\,X$ and $\mathrm{span}\,X$ for the conical convex hull and the linear hull of $X$, respectively, with the convention $\mathrm{cone}\,\emptyset=\mathrm{span}\,\emptyset=\{0_p\}$ (the zero vector of $\mathbb{R}^p$). Provided that $X$ is convex, $\mathrm{extr}\,X$ stands for the set of extreme points of $X$.

Consider a generic multifunction $\mathcal{M}:Y\rightrightarrows X$ between metric spaces $Y$ and $X$ (with both distances denoted by $d$). Recall that the graph and the domain of $\mathcal{M}$ are respectively given by $(y,x)\in\mathrm{gph}\,\mathcal{M}\Leftrightarrow x\in\mathcal{M}(y)$ and $y\in\mathrm{dom}\,\mathcal{M}\Leftrightarrow\mathcal{M}(y)\neq\emptyset$. Mapping $\mathcal{M}$ is said to be calm at $(\bar y,\bar x)\in\mathrm{gph}\,\mathcal{M}$ if there exist a constant $\kappa\ge 0$ and neighbourhoods $V$ of $\bar y$ and $U$ of $\bar x$ such that
(5) $d(x,\mathcal{M}(\bar y))\le\kappa\, d(y,\bar y)\quad\text{for all } y\in V \text{ and all } x\in\mathcal{M}(y)\cap U,$
where $d(x,\Omega):=\inf\{d(x,\omega):\omega\in\Omega\}$, provided that $x\in X$ and $\Omega\subseteq X$, with the usual convention $\inf\emptyset:=+\infty$ and $d(x,\emptyset)=+\infty$. Since we are concerned with nonnegative constants, we understand that $\sup\emptyset:=0$. It is well known (see, e.g. [Citation4, Theorem 3H.3 and Exercise 3H.4]) that the neighbourhood $V$ appearing in the definition of calmness is redundant; formally, the calmness of $\mathcal{M}$ at $(\bar y,\bar x)\in\mathrm{gph}\,\mathcal{M}$ is equivalent to the existence of a constant $\kappa\ge 0$ and a (possibly smaller) neighbourhood $U$ of $\bar x$ such that
(6) $d(x,\mathcal{M}(\bar y))\le\kappa\, d(\bar y,\mathcal{M}^{-1}(x))\quad\text{for all } x\in U.$
The latter property is known as the metric subregularity of $\mathcal{M}^{-1}$ at $(\bar x,\bar y)$ (recall that $y\in\mathcal{M}^{-1}(x)\Leftrightarrow x\in\mathcal{M}(y)$). The calmness modulus of $\mathcal{M}$ at $(\bar y,\bar x)\in\mathrm{gph}\,\mathcal{M}$, denoted by $\mathrm{clm}\,\mathcal{M}(\bar y,\bar x)$, is the infimum of constants $\kappa$ such that (5) (equivalently (6)) holds for some associated neighbourhoods; this modulus can be written as
(7) $\mathrm{clm}\,\mathcal{M}(\bar y,\bar x)=\limsup_{\substack{(y,x)\to(\bar y,\bar x)\\ x\in\mathcal{M}(y)}}\frac{d(x,\mathcal{M}(\bar y))}{d(y,\bar y)}=\limsup_{x\to\bar x}\frac{d(x,\mathcal{M}(\bar y))}{d(\bar y,\mathcal{M}^{-1}(x))},$
where $\frac00:=0$ and $\limsup$ is understood as the supremum (maximum, indeed) of all possible sequential upper limits (i.e. with $(y,x)$ being replaced with elements of sequences $\{(y_r,x_r)\}_{r\in\mathbb{N}}$ converging to $(\bar y,\bar x)$ as $r\to\infty$).

2.1. Lipschitz upper semicontinuity of multifunctions

The current work is focussed on a semi-local Lipschitz-type property: a multifunction $\mathcal{M}$ is said to be Lipschitz upper semicontinuous (see, e.g. [Citation1, Definition 2.1(iii)], [Citation3, Section 3]) at $\bar y\in\mathrm{dom}\,\mathcal{M}$ if there exist a constant $\kappa\ge0$ and a neighbourhood $V$ of $\bar y$ such that
(8) $d(x,\mathcal{M}(\bar y))\le\kappa\,d(y,\bar y)\quad\text{for all }y\in V\text{ and all }x\in\mathcal{M}(y).$
The associated Lipschitz upper semicontinuity modulus at $\bar y\in\mathrm{dom}\,\mathcal{M}$, denoted by $\mathrm{Lipusc}\,\mathcal{M}(\bar y)$, is defined as the infimum of constants $\kappa$ satisfying (8) for some associated $V$. Clearly $V$ is not redundant here. The following result provides a limiting expression for $\mathrm{Lipusc}\,\mathcal{M}(\bar y)$.

Proposition 2.1

[Citation3, Proposition 2(i)]

Let $\mathcal{M}:Y\rightrightarrows X$ be a multifunction between metric spaces and let $\bar y\in\mathrm{dom}\,\mathcal{M}$. Then
$\mathrm{Lipusc}\,\mathcal{M}(\bar y)=\limsup_{y\to\bar y}\Big(\sup_{x\in\mathcal{M}(y)}\frac{d(x,\mathcal{M}(\bar y))}{d(y,\bar y)}\Big).$

The following result establishes the relationship between calmness and Lipschitz upper semicontinuity for generic multifunctions. See Section 5 for additional comments including the so-called Hoffman stability, defined in (Equation22).

Theorem 2.1

[Citation3, Corollary 2 and Theorem 4]

Let $\mathcal{M}:Y\rightrightarrows X$ be a multifunction between metric spaces and let $\bar y\in\mathrm{dom}\,\mathcal{M}$. We have
(9) $\mathrm{Lipusc}\,\mathcal{M}(\bar y)\ge\sup_{x\in\mathcal{M}(\bar y)}\mathrm{clm}\,\mathcal{M}(\bar y,x).$
Moreover, equality holds in (9) if $Y$ is a normed space, $X$ is a reflexive Banach space, $\mathrm{gph}\,\mathcal{M}$ is a nonempty convex set, and $\mathcal{M}(\bar y)$ is closed.

Remark 2.1

  1. Observe that the previous theorem relates local and semi-local Lipschitz-type measures for multifunctions.

  2. The convexity assumption in the previous theorem is not superfluous for establishing equality in (Equation9), as it was shown in [Citation3, Example 2].

2.2. Lipschitz upper semicontinuity of the feasible set mapping

This subsection deals with the feasible set mapping $\mathcal{F}:\mathbb{R}^m\rightrightarrows\mathbb{R}^n$ introduced in (2), which has a closed convex graph; so, Theorem 2.1 allows us to write
(10) $\mathrm{Lipusc}\,\mathcal{F}(\bar b)=\sup_{x\in\mathcal{F}(\bar b)}\mathrm{clm}\,\mathcal{F}(\bar b,x).$
Going further, the next theorem provides a more computable expression for $\mathrm{Lipusc}\,\mathcal{F}(\bar b)$, since it involves only finitely many calmness moduli. It appeals to the following set of extreme points:
(11) $E(b):=\mathrm{extr}\big(\mathcal{F}(b)\cap\mathrm{span}\{a_t,\ t\in T\}\big),\quad b\in\mathrm{dom}\,\mathcal{F}.$
For details about this construction the reader is referred to [Citation19, p. 142]. In addition to the following theorem, the set $E(b)$ is also a key tool in [Citation20] to provide a point-based formula for the calmness modulus of the optimal value function in linear optimization. This set is known to be always nonempty and finite.

Theorem 2.2

[Citation3, Theorems 4 and 6]

Let $\bar b\in\mathrm{dom}\,\mathcal{F}$. Then
(12) $\mathrm{Lipusc}\,\mathcal{F}(\bar b)=\max_{x\in E(\bar b)}\mathrm{clm}\,\mathcal{F}(\bar b,x).$

In the sequel, for any $(b,x)\in\mathrm{gph}\,\mathcal{F}$, $T_b(x)$ represents the set of active indices at $x$, defined as
$T_b(x):=\{t\in T : a_t'x=b_t\}.$
In particular, it is appealed to in the definition of the following family of sets appearing in the next remark: given $(b,x)\in\mathrm{gph}\,\mathcal{F}$, $\mathcal{D}_b(x)$ is formed by all subsets $D\subseteq T_b(x)$ such that the system
$\{a_t'd=1,\ t\in D;\quad a_t'd<1,\ t\in T_b(x)\setminus D\}$
is consistent (in the variable $d\in\mathbb{R}^n$); in other words, $\{a_t,\ t\in D\}$ lies in some hyperplane which leaves the remaining coefficient vectors $a_t$, $t\in T_b(x)\setminus D$, and the origin, $0_n$, in one of its two associated open half-spaces.

Remark 2.2

It is worth mentioning that any calmness modulus $\mathrm{clm}\,\mathcal{F}(\bar b,x)$, at any $x\in E(\bar b)$ appearing in (12), can be computed through the point-based formula given in [Citation2, Theorem 4] (see also [Citation3, Theorem 3], as stated in the introduction), which is recalled here for completeness:
(13) $\mathrm{clm}\,\mathcal{F}(\bar b,\bar x)=\max_{D\in\mathcal{D}_{\bar b}(\bar x)}d_*\big(0_n,\mathrm{conv}\{a_t,\ t\in D\}\big)^{-1},\quad(\bar b,\bar x)\in\mathrm{gph}\,\mathcal{F},$
where $d_*$ represents the distance associated with the dual norm $\|\cdot\|_*$.
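Formula (13) reduces each calmness modulus to distances from the origin to convex hulls of active gradients. A minimal sketch of that inner computation, assuming the Euclidean norm (which is self-dual) and using a small quadratic program for the min-norm point of the hull, could look as follows; the helper `dist_origin_conv` is ours, not from the paper, and the enumeration over $D\in\mathcal{D}_{\bar b}(\bar x)$ must still be done separately:

```python
import numpy as np
from scipy.optimize import minimize

def dist_origin_conv(points):
    """Euclidean distance from the origin to conv{points}: minimize
    ||sum_i lam_i p_i||^2 over the unit simplex (a tiny QP via SLSQP)."""
    pts = np.asarray(points, dtype=float)
    k = len(pts)
    obj = lambda lam: float(np.dot(lam @ pts, lam @ pts))
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, None)] * k,
                   constraints=cons)
    return np.sqrt(res.fun)

# A single active gradient a_1 = (-1, 0): the corresponding term in (13)
# is d(0, conv{a_1})^(-1) = 1 / ||a_1|| = 1.
assert abs(dist_origin_conv([(-1.0, 0.0)]) - 1.0) < 1e-6

# D with a_1 = (-1, 0), a_2 = (0, -1): the nearest point of the segment
# to the origin is (-1/2, -1/2), so the term equals 1/d = sqrt(2).
d = dist_origin_conv([(-1.0, 0.0), (0.0, -1.0)])
assert abs(1.0 / d - np.sqrt(2.0)) < 1e-3
```

For other norms the distance $d_*$ would have to be computed with the corresponding dual norm instead of the Euclidean one.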

Remark 2.3

The fact of considering RHS perturbations is crucial in our analysis. Observe that in the case when the $a_t$'s are also perturbed the corresponding feasible set mapping is no longer Lipschitz upper semicontinuous in general. Just consider the example in $\mathbb{R}^2$ with only one constraint:
$\widetilde{\mathcal{F}}(a,b)=\{x\in\mathbb{R}^2 : a_1x_1+a_2x_2\le b\},\quad a=(a_1,a_2)'\in\mathbb{R}^2,\ b\in\mathbb{R}.$
Then, for each $r=1,2,\ldots$, we have
$\sup_{x\in\widetilde{\mathcal{F}}((1,1/r)',0)}\frac{d\big(x,\widetilde{\mathcal{F}}((1,0)',0)\big)}{r^{-1}}\ \ge\ \lim_{k\to\infty}\frac{d\big((k,-rk)',\widetilde{\mathcal{F}}((1,0)',0)\big)}{r^{-1}}=+\infty,$
which entails $\mathrm{Lipusc}\,\widetilde{\mathcal{F}}((1,0)',0)=+\infty$. Roughly speaking, the previous situation arises from the semi-local nature of this property (i.e. the fact that it involves the whole feasible sets associated with perturbed parameters). Indeed, other variational properties of a local character (dealing with parameters and points near the nominal ones) do not change so drastically. For instance, this is the case of the calmness property (see [Citation2, Theorem 5]).
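The blow-up in this remark can be made concrete with a few lines of arithmetic; the perturbed gradient $(1,1/r)'$ and the test points $(k,-rk)'$ below follow the reconstruction of the display above and are only illustrative:

```python
import numpy as np

# Feasible set F~(a, b) = {x : a1 x1 + a2 x2 <= b}; at the nominal data
# a = (1, 0), b = 0 we get F~((1,0),0) = {x : x1 <= 0}, whose distance
# function is d(x, F~((1,0),0)) = max(x1, 0).
def dist_to_nominal(x):
    return max(x[0], 0.0)

r = 10
a_r = np.array([1.0, 1.0 / r])               # perturbed gradient, size of perturbation 1/r
for k in [1, 10, 100, 1000]:
    x = np.array([k, -r * k], dtype=float)
    assert a_r @ x <= 1e-9                   # x is feasible for the perturbed set
    ratio = dist_to_nominal(x) / (1.0 / r)   # deviation over parameter change
    assert abs(ratio - k * r) < 1e-6         # ratio = k*r, unbounded in k
```

For every fixed $r$ the ratio grows without bound in $k$, so no Lipschitz-type constant can work on any neighbourhood of the nominal parameter.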

3. Local directional convexity of Fop

This is an instrumental section oriented to tackle our problem of computing $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$. As $\mathrm{gph}\,\mathcal{F}^{op}$ is not convex, we are not allowed to apply the last part of Theorem 2.1, and this fact entails notable differences with respect to the methodology followed in [Citation3]. Specifically, this section is devoted to establishing a certain directional-type convexity of the graph of $\mathcal{F}^{op}$ around $\bar b$ with fixed $\bar c$, where only local directional RHS perturbations of the constraints are considered; see Figure 1 for a geometrical motivation. Formally, associated with our nominal problem $\bar\pi=(\bar c,\bar b)$, any scalar $\epsilon>0$, and any direction $d\in\mathbb{R}^m$ with $\|d\|_\infty=1$, we consider the local directional optimal set mapping $\mathcal{F}^{op}_{\bar\pi,d,\epsilon}:[0,\epsilon]\rightrightarrows\mathbb{R}^n$ given by
(14) $\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu)=\mathcal{F}^{op}(\bar c,\bar b+\mu d),\quad\mu\in[0,\epsilon].$
The following lemmas constitute key tools for establishing Theorem 3.1. To start with, we introduce some extra notation: given $\pi=(c,b)\in\mathrm{dom}\,\mathcal{F}^{op}$, $\mathcal{M}_\pi$ denotes the family of so-called minimal KKT subsets of indices at $\pi$, defined as
(15) $\mathcal{M}_\pi:=\big\{D\subseteq T_b(x)\ \big|\ -c\in\mathrm{cone}\{a_t,\ t\in D\}\ \text{and}\ D\ \text{is minimal for the inclusion order}\big\},$
provided that $x$ is any optimal point of $\pi$. For convenience, $\mathcal{M}_\pi$ is sometimes alternatively denoted by $\mathcal{M}_{c,b}$ for $\pi=(c,b)\in\mathrm{dom}\,\mathcal{F}^{op}$. Let us observe that $\mathcal{M}_\pi$ is correctly defined since the expression in (15) does not depend on the choice of $x$, as commented in [Citation20, Remark 2]. Finally, we introduce the counterpart of $E(b)$ when dealing with optimization problems,
(16) $E^{op}(\pi):=\mathrm{extr}\big(\mathcal{F}^{op}(\pi)\cap\mathrm{span}\{a_t,\ t\in T\}\big),\quad\pi\in\mathrm{dom}\,\mathcal{F}^{op}.$
The reader is referred to [Citation20, Section 2.2] for additional details about this set of extreme points. Standard arguments of linear optimization yield $E^{op}(\pi)=\mathcal{F}^{op}(\pi)\cap E(b)$ for $\pi=(c,b)\in\mathrm{dom}\,\mathcal{F}^{op}$.

Lemma 3.1

[Citation20, Lemma 2]

Let $\{\pi_r\}_{r\in\mathbb{N}}\subseteq\mathrm{dom}\,\mathcal{F}^{op}$ converge to $\bar\pi$. Then
$\emptyset\neq\mathop{\mathrm{Lim\,sup}}_{r\to\infty}E^{op}(\pi_r)\subseteq E^{op}(\bar\pi),$
where $\mathop{\mathrm{Lim\,sup}}_{r\to\infty}$ stands for the Painlevé–Kuratowski upper/outer limit as $r\to\infty$ (see, e.g. [Citation6, Citation7]).

Lemma 3.2

Let $(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$. Then there exists $\epsilon>0$ such that for every $b\in\mathrm{dom}\,\mathcal{F}$ with $\|b-\bar b\|_\infty\le\epsilon$ we have $\mathcal{M}_{\bar c,b}\subseteq\mathcal{M}_{\bar c,\bar b}$.

Proof.

Reasoning by contradiction, suppose that there exist a sequence $\{b_r\}_{r\in\mathbb{N}}\subseteq\mathrm{dom}\,\mathcal{F}$ with $b_r\to\bar b$ and sets $D_r\in\mathcal{M}_{\bar c,b_r}\setminus\mathcal{M}_{\bar c,\bar b}$ for all $r\in\mathbb{N}$. Since $D_r\subseteq T$ (finite) for all $r$, we may assume (by taking a subsequence if necessary) that $\{D_r\}_{r\in\mathbb{N}}$ is constant, say $D_r=D$ for all $r$.

Applying Lemma 3.1, let $\bar x\in\mathop{\mathrm{Lim\,sup}}_{r\to\infty}E^{op}(\bar c,b_r)$ and, without loss of generality (for an appropriate subsequence, without relabelling), write $\bar x=\lim_{r\to\infty}x_r$ for some $x_r\in E^{op}(\bar c,b_r)$, $r=1,2,\ldots$ For each $r$, $D\in\mathcal{M}_{\bar c,b_r}$ entails $D\subseteq T_{b_r}(x_r)$, which yields $D\subseteq T_{\bar b}(\bar x)$ in the limit. Moreover, $-\bar c\in\mathrm{cone}\{a_t,\ t\in D\}$ and $D$ is minimal with respect to this property since it is so for any $r$ (recall $D_r=D$). Hence, we attain the contradiction $D\in\mathcal{M}_{\bar c,\bar b}$.

Theorem 3.1

Let $\bar\pi=(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$ and $\epsilon>0$ be as in Lemma 3.2. Then $\mathrm{gph}\,\mathcal{F}^{op}_{\bar\pi,d,\epsilon}$ is convex for all $d\in\mathbb{R}^m$ with $\|d\|_\infty=1$.

Proof.

Let $(\mu_0,x_0),(\mu_1,x_1)\in\mathrm{gph}\,\mathcal{F}^{op}_{\bar\pi,d,\epsilon}$ with $0\le\mu_0\le\mu_1\le\epsilon$. Let us see that $(\mu_\lambda,x_\lambda):=(1-\lambda)(\mu_0,x_0)+\lambda(\mu_1,x_1)\in\mathrm{gph}\,\mathcal{F}^{op}_{\bar\pi,d,\epsilon}$ for $\lambda\in{]0,1[}$. If $\mu_0=\mu_1$, then $x_0$ and $x_1$ belong to the same convex set $\mathcal{F}^{op}(\bar c,\bar b+\mu_0 d)$, and hence clearly
$x_\lambda\in\mathcal{F}^{op}(\bar c,\bar b+\mu_0 d)=\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_\lambda)\quad\text{for all }\lambda\in{]0,1[}.$

From now on, let us assume $\mu_0<\mu_1$. First observe that $x_\lambda\in\mathcal{F}(\bar b+\mu_\lambda d)$ for all $\lambda\in{]0,1[}$ because of the convexity of $\mathrm{gph}\,\mathcal{F}$. We distinguish two cases:

Case 1: $\mu_0=0$. Fix any $\lambda\in{]0,1[}$. Observe that in this case $\mu_\lambda=\lambda\mu_1$. Let us prove that $x_\lambda\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_\lambda)$. Take $D_1\in\mathcal{M}_{\bar c,\bar b+\mu_1 d}\subseteq\mathcal{M}_{\bar c,\bar b}$ (because of Lemma 3.2 and the choice of $\epsilon$). In particular, $D_1\subseteq T_{\bar b+\mu_1 d}(x_1)\cap T_{\bar b}(x_0)$. Therefore $D_1\subseteq T_{\bar b+\lambda\mu_1 d}(x_\lambda)$ since, for any $t\in D_1$,
$a_t'x_\lambda=(1-\lambda)a_t'x_0+\lambda a_t'x_1=(1-\lambda)\bar b_t+\lambda(\bar b_t+\mu_1 d_t)=\bar b_t+\lambda\mu_1 d_t.$
Hence, as we are not perturbing $\bar c$, the KKT optimality conditions ensure $x_\lambda\in\mathcal{F}^{op}(\bar c,\bar b+\lambda\mu_1 d)=\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_\lambda)$.

Case 2: $\mu_0>0$. Fix again any $\lambda\in{]0,1[}$ and let us see that $x_\lambda\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_\lambda)$.

Start by choosing an arbitrary $\bar x\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(0)=\mathcal{F}^{op}(\bar c,\bar b)$ and define
$\tilde x_1:=\Big(1-\frac{\mu_0}{\mu_1}\Big)\bar x+\frac{\mu_0}{\mu_1}x_1.$
Reasoning as in the previous case, with $\bar x$ and $\frac{\mu_0}{\mu_1}$ playing the role of $x_0$ and $\lambda$, respectively, we deduce $\tilde x_1\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}\big(\frac{\mu_0}{\mu_1}\mu_1\big)=\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_0)$. Appealing to the convexity of the previous optimal set, define
$\tilde x_\alpha:=(1-\alpha)x_0+\alpha\tilde x_1\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_0),\quad\alpha\in[0,1].$
A routine computation yields the existence of scalars $\alpha\in[0,1]$ and $\beta\ge 1$ such that $x_\lambda-\bar x=\beta(\tilde x_\alpha-\bar x)$. Indeed, they are unique and their explicit expressions are
$\alpha=\frac{\lambda\mu_1}{(1-\lambda)\mu_0+\lambda\mu_1},\qquad\beta=\frac{(1-\lambda)\mu_0+\lambda\mu_1}{\mu_0}.$
Since $\tilde x_\alpha\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\mu_0)$, there exists $D_\alpha\in\mathcal{M}_{\bar c,\bar b+\mu_0 d}\subseteq\mathcal{M}_{\bar c,\bar b}$; in particular, $D_\alpha\subseteq T_{\bar b+\mu_0 d}(\tilde x_\alpha)\cap T_{\bar b}(\bar x)$. Hence, for any $t\in D_\alpha$, taking into account the fact that $\beta\mu_0=\mu_\lambda$, one has
$a_t'x_\lambda=a_t'\bar x+\beta\, a_t'(\tilde x_\alpha-\bar x)=\bar b_t+\beta(\bar b_t+\mu_0 d_t-\bar b_t)=\bar b_t+\beta\mu_0 d_t=\bar b_t+\mu_\lambda d_t.$
In this way, $D_\alpha\subseteq T_{\bar b+\mu_\lambda d}(x_\lambda)$ and again the KKT optimality conditions yield $x_\lambda\in\mathcal{F}^{op}(\bar c,\bar b+\mu_\lambda d)$. In other words, $(\mu_\lambda,x_\lambda)\in\mathrm{gph}\,\mathcal{F}^{op}_{\bar\pi,d,\epsilon}$.

The following example illustrates the previous results.

Example 3.1

Example 1.1 revisited

Let us consider the parameterized problem (4) in $\mathbb{R}^2$ with $T=\{1,2,3,4\}$ and let us see that the statement of Lemma 3.2 is fulfilled by taking $0<\epsilon<\frac25$. First observe that
(17) $\|b-\bar b\|_\infty<\tfrac25\ \text{ and }\ x\in\mathcal{F}^{op}(\bar c,b)\ \Rightarrow\ T_b(x)\subseteq\{1,2,3\}.$
Reasoning by contradiction, assume that there exists $(b,x)\in\mathbb{R}^4\times\mathbb{R}^2$ such that $x\in\mathcal{F}^{op}(\bar c,b)$, $\|b-\bar b\|_\infty<\frac25$ and $4\in T_b(x)$. It is clear that $\{4\}\neq T_b(x)$ according to the KKT conditions ($-\bar c\notin\mathrm{cone}\{a_4\}$). Then, by distinguishing cases we attain a contradiction: assume that $1\in T_b(x)$; then $x_1=-b_1$ and $\frac12x_1-\frac12x_2=b_4$, which yields $x_2=-b_1-2b_4<\frac25-2(1-\frac25)=-\frac45$, which contradicts $x_2\ge-b_2>-\frac25$. Hence $1\notin T_b(x)$. Then, necessarily $3\in T_b(x)$ (since $-\bar c\notin\mathrm{cone}\{a_2,a_4\}$), but again $-x_1-x_2=b_3$ and $\frac12x_1-\frac12x_2=b_4$ yield the contradiction $-\frac25<x_2=-\frac12b_3-b_4<\frac15-\frac35=-\frac25$.

From (17) one easily derives
(18) $\mathcal{M}_{\bar c,b}\subseteq\mathcal{M}_{\bar c,\bar b}=\{\{1,2\},\{3\}\},\quad\text{whenever }\|b-\bar b\|_\infty<\tfrac25.$
Indeed, given any $b$ with $\|b-\bar b\|_\infty<\frac25$ one can check
$\mathcal{M}_{\bar c,b}=\begin{cases}\{\{1,2\},\{3\}\} & \text{if } b_1+b_2=b_3,\\ \{\{1,2\}\} & \text{if } b_1+b_2<b_3,\\ \{\{3\}\} & \text{if } b_1+b_2>b_3.\end{cases}$
Therefore, for any $d\in\mathbb{R}^4$ with $\|d\|_\infty=1$ and $\mu\in[0,\frac25[$, we have
(19) $\mathcal{F}^{op}(\bar c,\bar b+\mu d)=\begin{cases}\{-\mu(d_1,d_2)'\} & \text{if } d_1+d_2\le d_3,\\ \mu\,\mathrm{conv}\{(-d_1,d_1-d_3)',(d_2-d_3,-d_2)'\} & \text{if } d_1+d_2>d_3,\end{cases}$
which clearly entails the convexity of $\mathrm{gph}\,\mathcal{F}^{op}_{\bar\pi,d,\epsilon}$ for each $d\in\mathbb{R}^4$ with $\|d\|_\infty=1$ and each $0<\epsilon<\frac25$.

Finally, observe that (17) is no longer true for $\epsilon=\frac25$ since $T_{\bar b+\frac25(1,1,-1,-1)'}\big((\frac45,-\frac25)'\big)=\{2,3,4\}$. However, (18) still holds by replacing $\frac25$ with $\frac12$, since the only way to preclude $\mathcal{M}_{\bar c,b}\subseteq\mathcal{M}_{\bar c,\bar b}$ is having $\{1,4\}\in\mathcal{M}_{\bar c,b}$, which implies $\|b-\bar b\|_\infty\ge\frac12$; although the description of $\mathcal{F}^{op}(\bar c,\bar b+\mu d)$ would be different from that of (19) when $\mu>\frac25$.
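Both the description (19) and the local directional convexity it entails can be checked numerically. The sketch below assumes the same sign reconstruction of problem (4) as used here and relies on scipy.optimize.linprog; the chosen direction is arbitrary within the case $d_1+d_2>d_3$:

```python
import numpy as np
from scipy.optimize import linprog

# Problem (4), signs as reconstructed here: -x1 <= b1, -x2 <= b2,
# -x1 - x2 <= b3, (1/2)x1 - (1/2)x2 <= b4, with c_bar = (1, 1)'.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0], [0.5, -0.5]])
c = np.array([1.0, 1.0])
b_bar = np.array([0.0, 0.0, 0.0, 1.0])

def solve(b):
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
    return res.x, res.fun

# A direction with d1 + d2 > d3: (19) predicts a segment of optimal points.
d = np.array([1.0, 1.0, 0.5, 0.0])
for mu in (0.1, 0.3):                       # both below epsilon = 2/5
    _, v = solve(b_bar + mu * d)
    for endpoint in (mu * np.array([-d[0], d[0] - d[2]]),
                     mu * np.array([d[1] - d[2], -d[1]])):
        assert np.all(A @ endpoint <= b_bar + mu * d + 1e-9)   # feasible
        assert abs(c @ endpoint - v) <= 1e-9                   # optimal

# Local directional convexity: the midpoint of optimal points taken at
# mu = 0.1 and mu = 0.3 is again optimal at the midpoint parameter mu = 0.2.
x0, _ = solve(b_bar + 0.1 * d)
x1, _ = solve(b_bar + 0.3 * d)
x_mid, b_mid = (x0 + x1) / 2, b_bar + 0.2 * d
_, v_mid = solve(b_mid)
assert np.all(A @ x_mid <= b_mid + 1e-9)
assert abs(c @ x_mid - v_mid) <= 1e-9
```

The midpoint check is exactly the convexity of $\mathrm{gph}\,\mathcal{F}^{op}_{\bar\pi,d,\epsilon}$ asserted by Theorem 3.1, tested on one segment.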

4. Lipschitz upper semicontinuity of the optimal set mapping

This section tackles the final goal of the current paper: the computation of the Lipschitz upper semicontinuity modulus of the optimal set mapping $\mathcal{F}^{op}$ introduced in (3). First, let us see that perturbations of $c$ are redundant when looking for the aimed modulus. Formally, we consider $\mathcal{F}^{op}_{\bar c}:\mathbb{R}^m\rightrightarrows\mathbb{R}^n$ given by $\mathcal{F}^{op}_{\bar c}(b)=\mathcal{F}^{op}(\bar c,b)$, $b\in\mathbb{R}^m$.

Proposition 4.1

[Citation20, Lemma 4]

There exists $\epsilon>0$ such that $\mathcal{F}^{op}(\pi)\subseteq\mathcal{F}^{op}(\bar c,b)$ whenever $\pi\equiv(c,b),\ \bar\pi\equiv(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$ satisfy $\|\pi-\bar\pi\|<\epsilon$.

Corollary 4.1

Let $\bar\pi\equiv(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$. We have
(20) $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)=\mathrm{Lipusc}\,\mathcal{F}^{op}_{\bar c}(\bar b)=\limsup_{b\to\bar b}\Big(\sup_{x\in\mathcal{F}^{op}(\bar c,b)}\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(b,\bar b)}\Big).$

Proof.

Inequality $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)\ge\mathrm{Lipusc}\,\mathcal{F}^{op}_{\bar c}(\bar b)$ follows directly from the definitions. Appealing to Propositions 2.1 and 4.1 we have
$\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)=\limsup_{\pi\to\bar\pi}\Big(\sup_{x\in\mathcal{F}^{op}(\pi)}\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(\pi,\bar\pi)}\Big)\le\limsup_{\pi\to\bar\pi}\Big(\sup_{x\in\mathcal{F}^{op}(\bar c,b)}\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(\pi,\bar\pi)}\Big)\le\limsup_{b\to\bar b}\Big(\sup_{x\in\mathcal{F}^{op}(\bar c,b)}\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(b,\bar b)}\Big)=\mathrm{Lipusc}\,\mathcal{F}^{op}_{\bar c}(\bar b).$

From now on, appealing to the local directional convexity of the graph of Fop under RHS perturbations, we adapt some arguments used in [Citation3] to the current setting. The following technical lemma uses Theorem 3.1 to provide an upper bound on the variation rate of optimal solutions with respect to RHS perturbations.

Lemma 4.1

Let $\bar\pi=(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$. Let $\epsilon>0$ be as in Lemma 3.2, take $((\bar c,b),x)\in\mathrm{gph}\,\mathcal{F}^{op}$ with $\|b-\bar b\|_\infty\le\epsilon$ and let $p(x)$ be a projection (a best approximation) of $x$ in $\mathcal{F}^{op}(\bar\pi)$. Then
$\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(b,\bar b)}\le\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,p(x)).$

Proof.

The case $b=\bar b$ is trivial from $\frac00:=0$. Assume $b\neq\bar b$ and let $d:=\frac{b-\bar b}{\|b-\bar b\|_\infty}$. With the notation (14), $p(x)\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(0)$ and $x\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\|b-\bar b\|_\infty)$, which entails, for each $\lambda\in[0,1]$, by applying Theorem 3.1,
$x_\lambda:=(1-\lambda)p(x)+\lambda x\in\mathcal{F}^{op}_{\bar\pi,d,\epsilon}(\lambda\|b-\bar b\|_\infty)=\mathcal{F}^{op}(\bar c,(1-\lambda)\bar b+\lambda b).$
Moreover, from [Citation3, Lemma 1] we have that $p(x)$ is also a best approximation of $x_\lambda$ in $\mathcal{F}^{op}(\bar\pi)$; i.e. $\|x_\lambda-p(x)\|=d(x_\lambda,\mathcal{F}^{op}(\bar\pi))$, $\lambda\in[0,1]$. Consequently,
$\frac{d(x_\lambda,\mathcal{F}^{op}(\bar\pi))}{\|\lambda(b-\bar b)\|_\infty}=\frac{\lambda\|x-p(x)\|}{\lambda\|b-\bar b\|_\infty}=\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(b,\bar b)},\quad\text{whenever }\lambda\in{]0,1]}.$
Hence, letting $\lambda\downarrow 0$ we obtain the aimed inequality.

Proposition 4.2

Let $\bar\pi=(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$; then
$\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)=\sup_{z\in\mathcal{F}^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,z).$

Proof.

Inequality '$\ge$' comes from Theorem 2.1. Let us prove the nontrivial inequality '$\le$'. Take $\epsilon>0$ as in Lemma 3.2 (and Theorem 3.1). We can write (recall Corollary 4.1)
$\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar c,\bar b)=\limsup_{b\to\bar b}\Big(\sup_{x\in\mathcal{F}^{op}_{\bar c}(b)}\frac{d(x,\mathcal{F}^{op}(\bar\pi))}{d(b,\bar b)}\Big)\le\limsup_{\substack{b\to\bar b\\ \|b-\bar b\|_\infty\le\epsilon}}\Big(\sup_{x\in\mathcal{F}^{op}_{\bar c}(b)}\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,p(x))\Big),$
where we have applied Lemma 4.1 and, as in that lemma, for each $x\in\mathcal{F}^{op}_{\bar c}(b)$, with $\|b-\bar b\|_\infty\le\epsilon$, $p(x)\in\mathcal{F}^{op}(\bar\pi)$ is a projection of $x$ in $\mathcal{F}^{op}(\bar\pi)$.

Consequently, $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)\le\sup_{z\in\mathcal{F}^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,z)$.

Next we show that the supremum in the previous proposition may be reduced to a finite subset of points, indeed to points in $E^{op}(\bar c,\bar b)$. To do that we appeal to the following theorem.

Theorem 4.1

[Citation21, Corollary 4.1]

Let $(\bar\pi,\bar x)\in\mathrm{gph}\,\mathcal{F}^{op}$ with $\bar\pi=(\bar c,\bar b)$. Then
(21) $\mathrm{clm}\,\mathcal{F}^{op}((\bar c,\bar b),\bar x)=\max_{D\in\mathcal{M}_{\bar\pi}}\mathrm{clm}\,\mathcal{L}^D((\bar b,\bar b_D),\bar x),$
where, for each $D\in\mathcal{M}_{\bar\pi}$, $\mathcal{L}^D:\mathbb{R}^m\times\mathbb{R}^D\rightrightarrows\mathbb{R}^n$ is defined by
$\mathcal{L}^D(b,d):=\{x\in\mathbb{R}^n : a_t'x\le b_t,\ t=1,\ldots,m;\ a_t'x\ge d_t,\ t\in D\},$
and $\bar b_D:=(\bar b_t)_{t\in D}$.

Remark 4.1

Observe that each $\mathcal{L}^D$ is nothing but a feasible set mapping of the same type as $\mathcal{F}$, associated with an enlarged system. Consequently, Remark 2.2 also applies here for computing each $\mathrm{clm}\,\mathcal{L}^D((\bar b,\bar b_D),\bar x)$, and therefore $\mathrm{clm}\,\mathcal{F}^{op}((\bar c,\bar b),\bar x)$.

Finally, by gathering the previous results of this section, we can establish our main goal in the following theorem. We point out that this theorem provides an implementable procedure for computing the aimed $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$, since it is written in terms of finitely many calmness moduli of feasible set mappings (as the previous theorem says) and these calmness moduli can be computed via formula (13).

Theorem 4.2

Let $\bar\pi\in\mathrm{dom}\,\mathcal{F}^{op}$; then
$\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)=\max_{x\in E^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,x).$

Proof.

Starting from the equalities of Proposition 4.2 and Theorem 4.1, we can write
$\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)=\sup_{x\in\mathcal{F}^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{F}^{op}(\bar\pi,x)=\sup_{x\in\mathcal{F}^{op}(\bar\pi)}\max_{D\in\mathcal{M}_{\bar\pi}}\mathrm{clm}\,\mathcal{L}^D((\bar b,\bar b_D),x)=\max_{D\in\mathcal{M}_{\bar\pi}}\sup_{x\in\mathcal{L}^D(\bar b,\bar b_D)}\mathrm{clm}\,\mathcal{L}^D((\bar b,\bar b_D),x)=\max_{D\in\mathcal{M}_{\bar\pi}}\max_{x\in E^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{L}^D((\bar b,\bar b_D),x)=\max_{x\in E^{op}(\bar\pi)}\mathrm{clm}\,\mathcal{F}^{op}((\bar c,\bar b),x).$
In the third equality we have used the fact that $\mathcal{F}^{op}(\bar c,\bar b)=\mathcal{L}^D(\bar b,\bar b_D)$ for all $D\in\mathcal{M}_{\bar\pi}$ (see [Citation21, Proposition 4.1]), while the fourth comes from Theorems 2.1 and 2.2, taking Remark 4.1 into account, together with the fact that the role played by $E(\bar b)$ in Theorem 2.2 is now played by
$\mathrm{extr}\big(\mathcal{L}^D(\bar b,\bar b_D)\cap\mathrm{span}\{a_t,\ t\in T\}\big)=E^{op}(\bar\pi),\quad\text{for all }D\in\mathcal{M}_{\bar\pi}.$
The last equality comes again from Theorem 4.1.
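As for the implementable procedure mentioned before the theorem, its first ingredient, the enumeration of the minimal KKT subsets $\mathcal{M}_{\bar\pi}$, can be sketched by brute force on the small Example 1.1 instance. The cone-membership test via nonnegative least squares (checking $-\bar c\in\mathrm{cone}\{a_t,\ t\in D\}$ under the sign conventions reconstructed here) is our own device, not taken from the paper:

```python
import itertools
import numpy as np
from scipy.optimize import linprog, nnls

# Example 1.1 data, signs as reconstructed here (0-based constraint indices).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0], [0.5, -0.5]])
c_bar = np.array([1.0, 1.0])
b_bar = np.array([0.0, 0.0, 0.0, 1.0])

# Any optimal point will do, since (15) does not depend on its choice.
x_opt = linprog(c_bar, A_ub=A, b_ub=b_bar, bounds=[(None, None)] * 2).x
active = [t for t in range(4) if abs(A[t] @ x_opt - b_bar[t]) < 1e-9]

def is_kkt(D):
    """Is -c_bar in cone{a_t, t in D}?  Nonnegative least squares test."""
    lam, resid = nnls(A[list(D)].T, -c_bar)
    return resid < 1e-9

kkt = [D for r in range(1, len(active) + 1)
         for D in itertools.combinations(active, r) if is_kkt(D)]
minimal = [set(D) for D in kkt
           if not any(set(E) < set(D) for E in kkt)]
# At b_bar = (0,0,0,1): M = {{1,2}, {3}} in the paper's 1-based indexing.
assert {frozenset(D) for D in minimal} == {frozenset({0, 1}), frozenset({2})}
```

The remaining ingredients (the sets $E^{op}(\bar\pi)$ and $\mathcal{D}_{\bar b}(\bar x)$, and the distances in (13)) are finite enumerations of the same flavour.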

5. Conclusions and perspectives

At this moment we recall the parallelism between the formulae of $\mathrm{Lipusc}\,\mathcal{F}(\bar b)$ and $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$, for $\bar\pi=(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$, pointed out in Remark 1.1. The first one was established in [Citation3] by strongly appealing to the convexity of $\mathrm{gph}\,\mathcal{F}$, while a local directional convexity property of $\mathrm{gph}\,\mathcal{F}^{op}$ has been used here to develop the second one. Indeed, [Citation3] analyses another Lipschitz-type property, called there Hoffman stability, which is commented on in the next subsection.

5.1. On the Hoffman stability of the optimal set

We say that $\mathcal{M}:Y\rightrightarrows X$, with $Y$ and $X$ being metric spaces, is Hoffman stable at $\bar y$ if there exists $\kappa\ge0$ such that $d(x,\mathcal{M}(\bar y))\le\kappa\,d(y,\bar y)$ for all $y\in\mathrm{dom}\,\mathcal{M}$ and all $x\in\mathcal{M}(y)$, or, equivalently, if
(22) $d(x,\mathcal{M}(\bar y))\le\kappa\,d(\bar y,\mathcal{M}^{-1}(x))\quad\text{for all }x\in X.$
The associated Hoffman modulus at $\bar y\in\mathrm{dom}\,\mathcal{M}$, $\mathrm{Hof}\,\mathcal{M}(\bar y)$, is the infimum of constants $\kappa$ satisfying (22) and may be expressed as
$\mathrm{Hof}\,\mathcal{M}(\bar y)=\sup_{(y,x)\in\mathrm{gph}\,\mathcal{M}}\frac{d(x,\mathcal{M}(\bar y))}{d(y,\bar y)}=\sup_{x\in X}\frac{d(x,\mathcal{M}(\bar y))}{d(\bar y,\mathcal{M}^{-1}(x))}.$
Theorem 4 in [Citation3], again appealing to the convexity of $\mathrm{gph}\,\mathcal{F}$, establishes the equality
$\mathrm{Lipusc}\,\mathcal{F}(\bar b)=\mathrm{Hof}\,\mathcal{F}(\bar b),$
which is no longer true for our optimal set mapping $\mathcal{F}^{op}$, nor for $\mathcal{F}^{op}_{\bar c}$ (recall that $\mathrm{gph}\,\mathcal{F}^{op}$ and $\mathrm{gph}\,\mathcal{F}^{op}_{\bar c}$ are not convex), as the following example shows.

Example 5.1

Consider the parameterized problem of Example 1.1 with $\mathbb{R}^2$ endowed with the Euclidean norm, and consider the following nominal parameters: $\bar c=(1,1)'$ and $\bar b=(-1,-1,1,1)'$. The reader can easily check that
$\|(c,b)-(\bar c,\bar b)\|<\tfrac13\ \Rightarrow\ \mathcal{F}^{op}(c,b)=\{(-b_1,-b_2)'\}.$
Thus, an ad hoc routine calculation gives $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar c,\bar b)=\sqrt2$. Of course Theorem 4.2 can alternatively be used for this computation.

Now, let us perturb $\bar b$ by considering $b_\mu:=\bar b+\mu(-1,0,0,-1)'$ for $\mu\ge\frac23$. Then it is easy to check that $(1+\mu,-1+3\mu)'\in\mathcal{F}^{op}(\bar c,b_\mu)$ for all $\mu\ge\frac23$ and, accordingly,
$\mathrm{Hof}\,\mathcal{F}^{op}(\bar c,\bar b)\ge\mathrm{Hof}\,\mathcal{F}^{op}_{\bar c}(\bar b)\ge\lim_{\mu\to+\infty}\frac{\big\|(1+\mu,-1+3\mu)'-(1,1)'\big\|}{\|b_\mu-\bar b\|_\infty}=\lim_{\mu\to+\infty}\frac{\sqrt{\mu^2+(3\mu-2)^2}}{\mu}=\sqrt{10}.$
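The two moduli of this example can be contrasted numerically; the sketch below assumes the sign reconstruction $\bar b=(-1,-1,1,1)'$ and $b_\mu=\bar b+\mu(-1,0,0,-1)'$ used here and verifies the drifting optimal points behind the $\sqrt{10}$ lower bound with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Problem (4), signs as reconstructed here: -x1 <= b1, -x2 <= b2,
# -x1 - x2 <= b3, (1/2)x1 - (1/2)x2 <= b4, with c_bar = (1, 1)'.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0], [0.5, -0.5]])
c_bar = np.array([1.0, 1.0])
b_bar = np.array([-1.0, -1.0, 1.0, 1.0])     # nominal RHS

def opt(b):
    return linprog(c_bar, A_ub=A, b_ub=b, bounds=[(None, None)] * 2).x

# Nominal optimal set is the singleton {(1, 1)'}.
assert np.allclose(opt(b_bar), [1.0, 1.0], atol=1e-7)

# Along b_mu = b_bar + mu*(-1, 0, 0, -1)' the unique optimal point is
# (1 + mu, -1 + 3 mu)' once mu >= 2/3 ...
for mu in (1.0, 10.0, 100.0):
    x_mu = opt(b_bar + mu * np.array([-1.0, 0.0, 0.0, -1.0]))
    assert np.allclose(x_mu, [1.0 + mu, -1.0 + 3.0 * mu], atol=1e-6)

# ... so the deviation ratio sqrt(mu^2 + (3 mu - 2)^2) / mu tends to sqrt(10).
mu_big = 1000.0
ratio = np.linalg.norm([mu_big, 3.0 * mu_big - 2.0]) / mu_big
assert abs(ratio - np.sqrt(10.0)) < 0.01
```

This global ratio exceeds the semi-local bound $\sqrt2$, which is precisely the announced gap between the Hoffman and the Lipschitz upper semicontinuity moduli of $\mathcal{F}^{op}$.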

The computation of $\mathrm{Hof}\,\mathcal{F}^{op}(\bar c,\bar b)$ and $\mathrm{Hof}\,\mathcal{F}^{op}_{\bar c}(\bar b)$ remains an open problem for further research.

5.2. Some repercussions on the stability of the optimal value

Let us finish the paper with some comments about perspectives on the behaviour of the optimal value function $\vartheta:\mathbb{R}^n\times\mathbb{R}^m\to[-\infty,+\infty]$, given by
$\vartheta(\pi):=\inf\{c'x : x\in\mathcal{F}(b)\},\quad\pi=(c,b)\in\mathbb{R}^n\times\mathbb{R}^m$
(with the convention $\vartheta(\pi):=+\infty$ when $\mathcal{F}(b)=\emptyset$); in our context of finite LP problems $\vartheta(\pi)$ is finite if and only if $\pi\in\mathrm{dom}\,\mathcal{F}^{op}$. Moreover, a well-known result in the literature establishes the continuity of $\vartheta$ restricted to its domain (see, e.g. [Citation22, Theorem 4.5.2] for a proof based on Berge's theory); i.e. the continuity of $\vartheta_R:=\vartheta|_{\mathrm{dom}\,\mathcal{F}^{op}}$. Going further and regarding the subject of the current paper, the calmness modulus of $\vartheta_R$ provides a quantitative measure of the stability of the optimal value (indeed, a rate of variation with respect to perturbations of the data). Specifically, for $\bar\pi\in\mathrm{dom}\,\mathcal{F}^{op}$ this calmness modulus is given by
$\mathrm{clm}\,\vartheta_R(\bar\pi)=\limsup_{\substack{\pi\to\bar\pi\\ \pi\in\mathrm{dom}\,\mathcal{F}^{op}}}\frac{|\vartheta(\pi)-\vartheta(\bar\pi)|}{\|\pi-\bar\pi\|}.$
This calmness modulus is analysed in [Citation20], where point-based formulae for this quantity are provided in two stages: firstly, under RHS perturbations and, in a second stage, under canonical perturbations. That paper is focussed on a dual approach and the formulae obtained there involve the maximum and minimum norms of dual optimal solutions (vectors of KKT multipliers) associated with the minimal KKT subsets of indices, $\mathcal{M}_{\bar\pi}$.

At this moment we point out that, appealing to $\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi)$, we may follow a primal approach to the estimation of $\mathrm{clm}\,\vartheta_R(\bar\pi)$. The situation is particularly easy in the case of RHS perturbations, as commented in the next lines: given $(\bar c,b),(\bar c,\bar b)\in\mathrm{dom}\,\mathcal{F}^{op}$ we have
(23) $|\vartheta(\bar c,b)-\vartheta(\bar c,\bar b)|=|\bar c'x-\bar c'p(x)|\le\|\bar c\|_*\,d(x,\mathcal{F}^{op}(\bar\pi)),$
where $x\in\mathbb{R}^n$ is any optimal solution of $(\bar c,b)$ and $p(x)\in\mathbb{R}^n$ is any projection of $x$ on $\mathcal{F}^{op}(\bar\pi)$. From (23), we obtain
$\limsup_{\substack{b\to\bar b\\ (\bar c,b)\in\mathrm{dom}\,\mathcal{F}^{op}}}\frac{|\vartheta(\bar c,b)-\vartheta(\bar\pi)|}{\|(\bar c,b)-\bar\pi\|}\le\|\bar c\|_*\,\mathrm{Lipusc}\,\mathcal{F}^{op}(\bar\pi).$
Adding perturbations of $c$ and trying to reproduce inequalities of the form (23) in the context of canonical perturbations yields a different scenario, where the size of primal optimal solutions could play an important role, but this lies outside the scope of the present paper.

Acknowledgments

The authors wish to thank the anonymous referees for their valuable critical comments, which have definitely improved the original version of the paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research has been partially supported by Grant PGC2018-097960-B-C21 from MICINN, Spain, and ERDF, ‘A way to make Europe’, European Union, and grant PROMETEO/2021/063 from Generalitat Valenciana, Spain.

References

  • Uderzo A. On the quantitative solution stability of parameterized set-valued inclusions. Set-Valued Var Anal. 2021;29(2):425–451.
  • Cánovas MJ, López MA, Parra J, et al. Calmness of the feasible set mapping for linear inequality systems. Set-Valued Var Anal. 2014;22(2):375–389.
  • Camacho J, Cánovas MJ, Parra J. From calmness to Hoffman constants for linear semi-infinite inequality systems. 2022. Available from: https://arxiv.org/pdf/2107.10000v2.pdf
  • Dontchev AL, Rockafellar RT. Implicit functions and solution mappings: a view from variational analysis. New York: Springer; 2009.
  • Klatte D, Kummer B. Nonsmooth equations in optimization: regularity, calculus, methods and applications. Dordrecht: Kluwer Academic; 2002. (Nonconvex optimization and its applications; vol. 60).
  • Mordukhovich BS. Variational analysis and generalized differentiation, I: basic theory. Berlin: Springer; 2006.
  • Rockafellar RT, Wets RJ-B. Variational analysis. Berlin: Springer; 1998.
  • Goberna MA, López MA. Linear semi-infinite optimization. Chichester: Wiley; 1998.
  • Gass S, Saaty T. Parametric objective function (part 2)-generalization. J Oper Res Soc Am. 1955;3:395–401.
  • Saaty T, Gass S. Parametric objective function (part 1). J Oper Res Soc Am. 1954;2:316–319.
  • Robinson SM. Some continuity properties of polyhedral multifunctions. Math Progr Study. 1981;14:206–214.
  • Klatte D. Lipschitz continuity of infima and optimal solutions in parametric optimization: the polyhedral case. In: Guddat J, Jongen HTh, Kummer B, et al., editors. Parametric optimization and related topics. Berlin: Akademie; 1987. p. 229–248.
  • Dontchev A, Zolezzi T. Well-posed optimization problems. Berlin: Springer; 1993. (Lecture Notes in Mathematics; vol. 1543).
  • Azé D, Corvellec J-N. On the sensitivity analysis of Hoffman constants for systems of linear inequalities. SIAM J Optim. 2002;12(4):913–927.
  • Hoffman AJ. On approximate solutions of systems of linear inequalities. J Res Natl Bur Stand. 1952;49(4):263–265.
  • Klatte D, Thiere G. Error bounds for solutions of linear equations and inequalities. Z Oper Res. 1995;41:191–214.
  • Peña J, Vera JC, Zuluaga LF. New characterizations of Hoffman constants for systems of linear constraints. Math Program. 2021;187(1–2):79–109.
  • Zălinescu C. Sharp estimates for Hoffman's constant for systems of linear inequalities and equalities. SIAM J Optim. 2003;14(2):517–533.
  • Li W. Sharp Lipschitz constants for basic optimal solutions and basic feasible solutions of linear programs. SIAM J Control Optim. 1994;32(1):140–153.
  • Gisbert MJ, Cánovas MJ, Parra J, et al. Calmness of the optimal value in linear programming. SIAM J Optim. 2018;28(3):2201–2221.
  • Cánovas MJ, Henrion R, López MA, et al. Outer limit of subdifferentials and calmness moduli in linear and nonlinear programming. J Optim Theory Appl. 2016;169(3):925–952.
  • Bank B, Guddat J, Klatte D, et al. Non-linear parametric optimization. Berlin: Akademie; 1982 and Birkhäuser: Basel; 1983.