Optimization
A Journal of Mathematical Programming and Operations Research
Volume 71, 2022 - Issue 14
Research Article

Some properties of K-convex mappings in variable ordering settings

Pages 4125-4146 | Received 27 Jun 2020, Accepted 08 May 2021, Published online: 07 Jun 2021

Abstract

We consider a generalization of standard vector optimization which is called vector optimization with variable ordering structures. The problem class under consideration is characterized by a point-dependent proper cone-valued mapping; here, the concept of K-convexity of the incorporated mapping plays an important role. We present and discuss several properties of this class, such as the cone of separations and the minimal variable K-convexification. The latter refers to a general approach for generating a variable ordering mapping for which a given mapping is K-convex. Finally, this approach is applied to a particular case.

2010 Mathematics Subject Classifications:

1. Introduction

In this paper, we present some properties of a $K$-convex mapping in variable ordering settings, that is, a vector mapping $F: C \subseteq \mathbb{R}^n \to \mathbb{R}^m$ such that
(1) $\lambda F(x) + (1-\lambda)F(y) - F(\lambda x + (1-\lambda)y) \in K(\lambda x + (1-\lambda)y)$
for all $\lambda \in [0,1]$ and all $x, y \in C$, where $C$ is a convex set and $K: x \in \mathbb{R}^n \mapsto K(x) \subseteq \mathbb{R}^m$ is a proper cone-valued mapping. This class of mappings arises in vector optimization with a variable ordering structure, given as
(2) $K\text{-}\min\ F(x), \quad x \in C,$
which consists in finding a point $\bar{x} \in C$ such that $F(x) - F(\bar{x}) \notin -K(\bar{x}) \setminus \{0\}$ for all $x \in C$ (see [1]). We refer to Definition 2.3, where a proper cone-valued mapping, or synonymously variable ordering cone mapping, is defined. The $K$-convexity of a mapping has been extensively studied in standard vector optimization, in particular, in the context of convergence results for numerical solution methods [2–5]. The $K$-convexity of $F$ in (2) makes it possible to give a sufficient first-order optimality condition [1]. Furthermore, for the (generalized) concept of vector optimization with variable ordering structures, recent results on the convergence of solution methods can be found in [1,6]. We also mention [7], where a variable ordering setting for set-valued optimization was introduced and studied.
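
To make condition (1) concrete, the following minimal sketch checks the $K$-convexity inequality on sampled points for an illustrative mapping $F$ and an illustrative proper cone-valued mapping $K$ of our own choosing (neither is taken from the paper); membership in $K(x)$ is tested via explicit linear inequalities.

```python
import numpy as np

# Illustrative check of the K-convexity condition (1) on sampled points.
# F, K and the sampling are our own toy choices, not the paper's data.

def F(x):
    """Sample mapping F: R^2 -> R^2."""
    return np.array([x[0]**2 + x[1]**2, x[0] - x[1]])

def in_K(x, z, tol=1e-9):
    """Membership test for an illustrative variable ordering cone
    K(x) = {z in R^2 : z[0] >= 0, z[1] >= -|x[0]| * z[0]} (a proper cone for every x)."""
    return z[0] >= -tol and z[1] >= -abs(x[0]) * z[0] - tol

def is_K_convex_on_samples(points, lambdas):
    """Test condition (1) for all sampled pairs (x, y) and weights lambda."""
    for x in points:
        for y in points:
            for lam in lambdas:
                z = lam * F(x) + (1 - lam) * F(y) - F(lam * x + (1 - lam) * y)
                if not in_K(lam * x + (1 - lam) * y, z):
                    return False
    return True

rng = np.random.default_rng(0)
pts = [rng.uniform(-1, 1, size=2) for _ in range(20)]
lams = np.linspace(0, 1, 11)
print(is_K_convex_on_samples(pts, lams))
```

For this particular choice of $F$ and $K$ the check succeeds, since the first component of $F$ is convex and the second one is linear.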

Several applications of Problem (2) arise, e.g. in medical diagnosis and portfolio optimization [8,9]; for a summary of these applications, we refer to [10, Section 1.3.1]. We briefly sketch here an application in medical diagnosis. After obtaining information from images, the data are transformed into another representation and, based on that, the diagnosis is made. In order to determine the best transformation from the original data to the desired pattern, different measurement criteria can be used, which leads to the optimization of a vector of functions. It is well known that the solution of such a model, which uses a classical weighting technique, may yield inadequate results. However, if the set of weights is point-dependent, then better results are reported [10].

The goal of this paper is twofold. Firstly, we define the so-called cone of separations and relate its properties to $K$-convexity; in particular, under certain assumptions, the set of $K$-convex mappings reduces to the class of affine functions. Secondly, we introduce a particular cone-valued mapping that provides a theoretical approach for obtaining $K$-convex mappings. This particular mapping is called the minimal variable $K$-convexification. We study several of its properties, for example, the Lipschitz continuity of related cone generator mappings.

This paper is organized as follows. Section 2 contains basic notation and definitions as well as some preliminary results. Section 3 presents the concept of the cone of separations. Section 4 discusses the main results of this paper: a special cone-valued mapping, the minimal variable K-convexification, is defined and corresponding properties are shown. In Section 5, this approach is applied to a particular case, and Section 6 gives some conclusions.

2. Notations and preliminary results

Throughout this paper, we will use the following standard notation. The inner product in $\mathbb{R}^n$ is denoted by $\langle \cdot,\cdot \rangle$ and the Euclidean norm by $\|\cdot\|$. The ball and the sphere centred at $x \in \mathbb{R}^n$ with radius $r > 0$ are $B(x,r) = \{y \in \mathbb{R}^n : \|y - x\| \le r\}$ and $S(x,r) = \{y \in \mathbb{R}^n : \|y - x\| = r\}$, respectively. The set $C^k(\mathbb{R}^n,\mathbb{R}^m)$ represents the set of $k$-times continuously differentiable mappings $F: \mathbb{R}^n \to \mathbb{R}^m$ with derivative $DF(x)$ and Hessian $D^2F(x)$ at a point $x \in \mathbb{R}^n$. For a set $C \subseteq \mathbb{R}^n$, let $\operatorname{int} C$, $\operatorname{conv} C$, $\operatorname{bd} C$ and $\operatorname{cl} C$ denote the set of its interior points, its convex hull, its boundary and its closure, respectively.

The following definition recalls some well-known notions related to cones (see, e.g. [11]).

Definition 2.1

  • A non-empty set $K \subseteq \mathbb{R}^m$ is called a cone if $\alpha z \in K$ for all $z \in K$ and all real numbers $\alpha \ge 0$.

  • A cone $K$ is called solid if $\operatorname{int} K \neq \emptyset$.

  • A cone $K$ is called pointed if $K \cap [-K] = \{0\}$.

  • A cone $K$ is called proper if it is solid, pointed, convex and closed.

  • The dual cone $K^*$ of a cone $K$ is the set $K^* = \{l \in \mathbb{R}^m : \langle l, z \rangle \ge 0 \ \forall z \in K\}$.

  • Let $A \subseteq \mathbb{R}^m$ be a given set. The intersection of all convex cones containing the set $A$ is called the convex conic hull of $A$ and is denoted by $\operatorname{coneco} A$.

  • A set $A$ is called a generator of a cone $K$ if $\operatorname{coneco} A = K$.

It is well known that a proper cone in $\mathbb{R}^m$ defines a partial ordering in $\mathbb{R}^m$ (see, e.g. [11, p. 43]). The following lemma summarizes some known results on cones.

Lemma 2.1

  1. A convex closed cone $K \subseteq \mathbb{R}^m$ is pointed if and only if there exist $w \in S(0,1) \subseteq \mathbb{R}^m$ and a real number $\delta > 0$ such that $\langle w, z \rangle \ge \delta \|z\|$ for all $z \in K$.

  2. A cone $K \subseteq \mathbb{R}^m$ is proper if and only if $K^*$ is proper.

  3. Let $K_i \subseteq \mathbb{R}^m$, $i = 1,2$, be convex closed cones and assume that $K_1$ is pointed. Then, we have $K_1 \cap K_2 = \{0\}$ if and only if there exists $l \in \mathbb{R}^m$ such that $\langle l, z^1 \rangle \le 0 \le \langle l, z^2 \rangle$ for all $z^1 \in K_1$, $z^2 \in K_2$, and $\langle l, z \rangle < 0$ for all $z \in K_1 \setminus \{0\}$.

Proof.

For the proof of (i), (ii) and (iii), see [11, Subsection 2.6.1], [12, Subsection 2.7.2] and [13, Theorem 3.22], respectively.

In the following, we will sometimes consider a set-valued mapping $\Psi: C \subseteq \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ and its graph $\operatorname{gr} \Psi = \{(x,y) \in C \times \mathbb{R}^m : y \in \Psi(x)\}$. The next definition recalls two well-known concepts [14,15].

Definition 2.2

  • Let two nonempty sets $A, B \subseteq \mathbb{R}^m$ be given. The directed Hausdorff distance between $A$ and $B$ is given by $\Delta_H(A,B) = \sup_{a \in A} \inf_{b \in B} \|a - b\|$. Moreover, the Hausdorff distance between $A$ and $B$ is defined as $d_H(A,B) = \max\{\Delta_H(A,B), \Delta_H(B,A)\}$.

  • A set-valued mapping $\Psi: C \rightrightarrows \mathbb{R}^m$ is called Lipschitz continuous on $C$ if there exists a real number $\mu > 0$ such that $d_H(\Psi(x),\Psi(y)) \le \mu \|x - y\|$ for all $x, y \in C$.
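
The directed and symmetric Hausdorff distances of Definition 2.2 can be evaluated directly when the sets are represented by finite samples; the following sketch does this for two small illustrative point sets of our own choosing.

```python
import numpy as np

# Directed and symmetric Hausdorff distances between finite point sets,
# as a small illustration of Definition 2.2 (sets represented by samples).

def directed_hausdorff(A, B):
    """Delta_H(A, B) = max_{a in A} min_{b in B} ||a - b|| for finite arrays."""
    # pairwise distances: rows indexed by A, columns by B
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """d_H(A, B) = max(Delta_H(A, B), Delta_H(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [1.0, 0.5], [2.0, 0.5]])
print(directed_hausdorff(A, B), directed_hausdorff(B, A), hausdorff(A, B))
```

Note that the two directed distances differ in general, which is why the symmetric distance takes their maximum.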

As mentioned above, in this paper, we are interested in variable ordering cone mappings, which are a particular class of set-valued mappings and whose definition is recalled in the following, see [1,10].

Definition 2.3

Let a convex set $C \subseteq \mathbb{R}^n$, a set-valued mapping $K: C \subseteq \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ and a mapping $F: \mathbb{R}^n \to \mathbb{R}^m$ be given.

  • $K$ is said to be a cone-valued mapping if $K(x)$ is a cone for all $x \in C$.

  • $K$ is said to be a proper cone-valued mapping (or, synonymously, a variable ordering cone mapping) if $K(x)$ is a proper cone for all $x \in C$.

  • Assume that $K$ is a cone-valued mapping and that $K(x)$ is closed and convex for all $x \in C$. The mapping $F$ is called $K$-convex on $C$ if
(3) $\lambda F(x) + (1-\lambda)F(y) - F(\lambda x + (1-\lambda)y) \in K(\lambda x + (1-\lambda)y)$
for all $\lambda \in [0,1]$ and all $x, y \in C$.

Our interest in proper cone-valued mappings is motivated by the fact that they provide a variable ordering structure, see [1,10]. That is why we also call them variable ordering cone mappings. In the concluding lemma of this section, we characterize $K$-convexity for the differentiable case.

Lemma 2.2

[1,6]

Let the set $C$ and the mappings $K$ and $F$ be given as in the previous definition. Furthermore, assume that $\operatorname{int} C \neq \emptyset$, that $K$ is a cone-valued mapping such that $K(x)$ is closed and convex for all $x \in C$, and that $F$ is continuously differentiable on an open neighbourhood of $C$. Then, we have the following:

  1. If (4) $F(x) - F(y) - DF(y)(x - y) \in K(y)$ holds for all $x, y \in C$, then $F$ is $K$-convex on $C$.

  2. If $F$ is $K$-convex on $C$ and $\operatorname{gr} K$ is closed, then (4) holds for all $x, y \in C$.
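
The first-order condition (4) of Lemma 2.2 lends itself to a simple numerical check. The sketch below reuses the illustrative mapping and cone from the sketch after (1), now together with its Jacobian, and verifies (4) on sampled pairs; again, all data are our own toy choices, not the paper's.

```python
import numpy as np

# Illustrative check of the first-order condition (4) from Lemma 2.2 on samples.

def F(x):
    return np.array([x[0]**2 + x[1]**2, x[0] - x[1]])

def DF(x):
    """Jacobian of F at x."""
    return np.array([[2 * x[0], 2 * x[1]],
                     [1.0, -1.0]])

def in_K(x, z, tol=1e-9):
    """Same illustrative proper cone K(x) as in the sketch after (1)."""
    return z[0] >= -tol and z[1] >= -abs(x[0]) * z[0] - tol

def condition_4_holds(points):
    """Check F(x) - F(y) - DF(y)(x - y) in K(y) for all sampled pairs."""
    return all(in_K(y, F(x) - F(y) - DF(y) @ (x - y)) for x in points for y in points)

rng = np.random.default_rng(1)
pts = [rng.uniform(-1, 1, size=2) for _ in range(30)]
print(condition_4_holds(pts))
```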

3. Cone of separations on $\mathbb{R}^n$

Throughout this section, we assume the following.

  • $K: \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ is a proper cone-valued mapping and $\operatorname{gr} K$ is closed.

  • $F \in C^1(\mathbb{R}^n,\mathbb{R}^m)$ is a $K$-convex mapping (on $C = \mathbb{R}^n$).

  • $x^1, x^2 \in \mathbb{R}^n$ are arbitrarily chosen points.

According to Lemma 2.2, these assumptions imply that
(5) $F(x^1) - F(x^2) - DF(x^2)(x^1 - x^2) \in K(x^2)$.
As already mentioned, the motivation of this paper is closely related to the consideration of proper cone-valued mappings. These mappings have the property that $K(x)$, $x \in C$, are closed cones. Obviously, the assumed closedness of $\operatorname{gr} K$ already implies the closedness of $K(x)$, $x \in C$. In order to avoid confusion concerning the motivation of this paper, we will sometimes assume that $\operatorname{gr} K$ is closed and use the notation of a proper cone-valued mapping. The following definition is basic for this section.

Definition 3.1

The set $K_S(x^1,x^2) = \{l \in \mathbb{R}^m : \langle l, z^1 \rangle \ge 0 \ge \langle l, z^2 \rangle, \ \forall z^1 \in K(x^1),\ z^2 \in K(x^2)\}$ is called the cone of separations for $x^1, x^2 \in \mathbb{R}^n$.

Obviously, it holds that $K_S(x^1,x^2) = \{0\}$ whenever $\operatorname{int} K(x^1) \cap \operatorname{int} K(x^2) \neq \emptyset$. In Figure 1, a cone of separations is illustrated assuming $m = 2$ and $K(x^1) \cap K(x^2) = \{0\}$.

Figure 1. A cone of separations for $x^1, x^2 \in \mathbb{R}^n$ when $m = 2$.
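
For finitely generated cones, the defining inequalities of Definition 3.1 are linear in $z$, so membership of $l$ in the cone of separations only needs to be checked on the generators. The following sketch samples unit directions $l$ satisfying these inequalities for two illustrative cones in $\mathbb{R}^2$ of our own choosing; their conic hull approximates $K_S(x^1,x^2)$.

```python
import numpy as np

# Illustrative computation of the cone of separations K_S(x1, x2) in R^2 when
# K(x1) and K(x2) are finitely generated; the two cones below are toy data.

def in_dual_of(generators, l, sign, tol=1e-9):
    """True if sign * <l, g> >= 0 for every generator g of the cone."""
    return all(sign * np.dot(l, g) >= -tol for g in generators)

def separation_directions(gen1, gen2, num=3600):
    """Sample unit vectors l with <l, z1> >= 0 >= <l, z2> on the generators
    of K(x1) and K(x2); their conic hull approximates K_S(x1, x2)."""
    out = []
    for t in np.linspace(0.0, 2.0 * np.pi, num, endpoint=False):
        l = np.array([np.cos(t), np.sin(t)])
        if in_dual_of(gen1, l, +1.0) and in_dual_of(gen2, l, -1.0):
            out.append(l)
    return np.array(out)

K1 = [np.array([1.0, 0.2]), np.array([1.0, 1.0])]      # generators of K(x1)
K2 = [np.array([-1.0, -0.2]), np.array([-1.0, -1.0])]  # generators of K(x2); meets K(x1) only at 0
ls = separation_directions(K1, K2)
print(len(ls) > 0)  # a non-trivial cone of separations exists, cf. Lemma 3.1(iii)
```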

The goal of this section is to relate some properties of the mapping $F$ to those of the cone of separations. The next lemma presents some properties of $K_S(x^1,x^2)$.

Lemma 3.1

  1. If $l \in K_S(x^1,x^2)$, then the hyperplane $\{z \in \mathbb{R}^m : \langle l, z \rangle = 0\}$ separates the cones $K(x^1)$ and $K(x^2)$.

  2. $K_S(x^1,x^2)$ is a convex, closed and pointed cone.

  3. If $K(x^1) \cap K(x^2) = \{0\}$, then $K_S(x^1,x^2)$ is a proper cone.

  4. It holds that (6) $K_S(x^1,x^2) = K^*(x^1) \cap [-K^*(x^2)]$.

Proof.

(ii) It is easily seen that $K_S(x^1,x^2)$ is convex and closed. Suppose for a moment that $K_S(x^1,x^2)$ is not pointed, that is, there exists $l^0 \in \mathbb{R}^m \setminus \{0\}$ with $l^0 \in K_S(x^1,x^2) \cap [-K_S(x^1,x^2)]$. From the previous expression, it follows that $\langle l^0, z^1 \rangle = \langle l^0, z^2 \rangle = 0$ for all $z^1 \in K(x^1)$ and all $z^2 \in K(x^2)$. However, this is not possible since $K(x^1)$ and $K(x^2)$ are solid and, therefore, span the whole space $\mathbb{R}^m$.

(iii) By (ii), we only have to show that $K_S(x^1,x^2)$ is solid. By Lemma 2.1(iii) and since $K(x^1)$ and $K(x^2)$ are pointed, there exist $l^1, l^2 \in \mathbb{R}^m$ such that $\langle l^1, z^1 \rangle \ge 0 > \langle l^1, z^2 \rangle$ and $\langle l^2, z^1 \rangle > 0 \ge \langle l^2, z^2 \rangle$ for all $z^1 \in K(x^1) \cap S(0,1)$ and all $z^2 \in K(x^2) \cap S(0,1)$. Hence, $l^1 + l^2 \in \operatorname{int} K_S(x^1,x^2)$. The statements (i) and (iv) are obvious.

The first theorem in this section relates $K_S(x^1,x^2)$ to the Jacobian of $F$.

Theorem 3.1

If $l \in K_S(x^1,x^2)$, then $l^T[DF(x^1) - DF(x^2)] = 0$.

Proof.

Let $l \in K_S(x^1,x^2)$ and suppose that $l^T[DF(x^1) - DF(x^2)] \neq 0$. By definition, we have
(7) $\langle l, z^1 \rangle \ge 0 \ge \langle l, z^2 \rangle$
for all $z^1 \in K(x^1)$ and $z^2 \in K(x^2)$. Hence, by the $K$-convexity of $F$, (5) and (7), we get for all $x \in \mathbb{R}^n$ that $\langle l, F(x) - F(x^1) - DF(x^1)(x - x^1) \rangle \ge \langle l, F(x) - F(x^2) - DF(x^2)(x - x^2) \rangle$ and, therefore, $\langle l, F(x^2) - F(x^1) + DF(x^1)x^1 - DF(x^2)x^2 \rangle \ge \langle l, [DF(x^1) - DF(x^2)]x \rangle$. Now, let a real number $\alpha > 0$ be arbitrarily chosen. Substituting $x = \alpha [DF(x^1) - DF(x^2)]^T l$, we get $\langle l, F(x^2) - F(x^1) + DF(x^1)x^1 - DF(x^2)x^2 \rangle \ge \alpha \|l^T[DF(x^1) - DF(x^2)]\|^2$. Since $l^T[DF(x^1) - DF(x^2)] \neq 0$, letting $\alpha \to +\infty$ yields a contradiction since the left-hand side of the latter inequality is finite, while its right-hand side becomes unbounded. This completes the proof.

As a consequence, we obtain the following corollary.

Corollary 3.1

  1. For all $l \in K_S(x^1,x^2)$ and all $x \in \mathbb{R}^n$, we obtain $\langle l, F(x + x^1) - F(x + x^2) \rangle \ge \langle l, F(x^1) - F(x^2) \rangle$.

  2. If $K_S(x^1,x^2)$ is a solid cone, then $DF(x^1) = DF(x^2)$.

Proof.

(i) By (5), the $K$-convexity of $F$ implies $F(x + x^1) - F(x^1) - DF(x^1)x \in K(x^1)$ and $F(x + x^2) - F(x^2) - DF(x^2)x \in K(x^2)$ for all $x \in \mathbb{R}^n$. Combining this with (7), it follows that $\langle l, F(x + x^1) - F(x^1) - DF(x^1)x \rangle \ge \langle l, F(x + x^2) - F(x^2) - DF(x^2)x \rangle$ and, therefore,
(8) $\langle l, F(x + x^1) - F(x + x^2) \rangle \ge \langle l, F(x^1) - F(x^2) \rangle + \langle l, [DF(x^1) - DF(x^2)]x \rangle$.
By Theorem 3.1, we obtain the desired result.

(ii) Since $K_S(x^1,x^2)$ is a solid cone, there exist $z \in \mathbb{R}^m$ and a real number $\delta > 0$ such that $B(z,\delta) \subseteq K_S(x^1,x^2)$. Then, there exist linearly independent vectors $v^i \in K_S(x^1,x^2)$, $i = 1,\ldots,m$, and by Theorem 3.1, we have $(v^i)^T[DF(x^1) - DF(x^2)] = 0$, $i = 1,\ldots,m$. The linear independence of $v^i$, $i = 1,\ldots,m$, implies $DF(x^1) = DF(x^2)$.

We conclude this section by presenting its main result.

Theorem 3.2

Let $D \subseteq \mathbb{R}^n$ be an open and connected set. If there exists $\bar{y} \in \mathbb{R}^n$ such that $K(x) \cap K(\bar{y}) = \{0\}$ for all $x \in D$, then $F|_D$ is an affine mapping.

Proof.

By Lemma 3.1(iii), $K_S(x,\bar{y})$ is a solid cone for all $x \in D$, and by Corollary 3.1(ii), we get the system $DF(x) = DF(\bar{y})$, $x \in D$, whose solution is $F(x) = DF(\bar{y})(x - x^0) + F(x^0)$, $x \in D$, for some $x^0 \in D$.

Note that the fact that $F$ has to be an affine mapping under the assumptions of Theorem 3.2 is a surprising result, given the flexibility of variable ordering structures.

4. Minimal variable K-convexification

In this section, we study a particular cone-valued mapping which provides a theoretical approach to obtaining $K$-convex mappings. For a more practical approach, we refer to [16], where Bishop-Phelps and simplicial cones are used. Throughout this section, we assume the following:

  • $C \subseteq \mathbb{R}^n$ is a convex set with $\operatorname{int} C \neq \emptyset$.

  • $F: \mathbb{R}^n \to \mathbb{R}^m$ is a continuously differentiable mapping on an open neighbourhood of $C$.

Definition 4.1

  • The set $K_{\mathrm{conv}}(F)$ is defined as the family of all cone-valued mappings $K: C \rightrightarrows \mathbb{R}^m$ with the following properties:

    1. $K(y)$ is closed and convex for all $y \in C$.

    2. $F(x) - F(y) - DF(y)(x - y) \in K(y)$ for all $x, y \in C$.

  • The mapping $K_F: C \rightrightarrows \mathbb{R}^m$ is defined by
(9) $K_F(y) = \bigcap_{K \in K_{\mathrm{conv}}(F)} K(y), \quad y \in C,$
and it is called the minimal variable $K$-convexification of $F$ on $C$.

It is easily seen that $K_F(y)$ is a convex and closed cone for all $y \in C$ and that, by Lemma 2.2(i), $F$ is $K_F$-convex on $C$. Note that $K_F(y)$ need not be pointed. According to the previous definition, the minimality of $K_F(y)$ is defined with respect to all $K \in K_{\mathrm{conv}}(F)$.

Lemma 2.2(i) implies that $F$ is $K$-convex on $C$ for all $K \in K_{\mathrm{conv}}(F)$. Moreover, Lemma 2.2(ii) yields that $K \in K_{\mathrm{conv}}(F)$ whenever the assumptions of Lemma 2.2 are fulfilled, that is, whenever $F$ is $K$-convex on $C$, $\operatorname{gr} K$ is closed and $K(y)$ is convex for every $y \in C$.

The next lemma presents a specific form for the minimal variable $K$-convexification of $F$ on $C$. For this, define the mapping $\hat{F}: C \times C \to \mathbb{R}^m$ by $\hat{F}(x,y) = F(x) - F(y) - DF(y)(x - y)$, and let $\hat{F}(C,y) = \{\hat{F}(x,y) : x \in C\}$.

Lemma 4.1

For all $y \in C$, we have $K_F(y) = \operatorname{cl}[\operatorname{coneco} \hat{F}(C,y)]$.

Proof.

By (ii) in Definition 4.1, we have for all $K \in K_{\mathrm{conv}}(F)$ and all $x, y \in C$ that $\hat{F}(x,y) \in K(y)$ and, therefore, $\hat{F}(C,y) \subseteq K_F(y)$ for all $y \in C$. Since $K_F(y)$ is a convex and closed cone, we get $\operatorname{cl}[\operatorname{coneco} \hat{F}(C,y)] \subseteq K_F(y)$ for all $y \in C$.

On the other hand, $\operatorname{cl}[\operatorname{coneco} \hat{F}(C,y)]$ is a closed and convex cone for all $y \in C$ and $\hat{F}(x,y) \in \operatorname{cl}[\operatorname{coneco} \hat{F}(C,y)]$ for all $x, y \in C$. Therefore, $K_F(y) \subseteq \operatorname{cl}[\operatorname{coneco} \hat{F}(C,y)]$ for all $y \in C$, which completes the proof.

Following [1], one is furthermore interested in conditions which are related to the existence of a proper cone $K \subseteq \mathbb{R}^m$ such that
(10) $K_F(y) \subseteq K$
for all $y \in C$. The following lemma presents a necessary condition for this property.

Lemma 4.2

Assume that there exists a proper cone $K \subseteq \mathbb{R}^m$ such that (10) holds for all $y \in C$. Then, there exists $w \in S(0,1) \subseteq \mathbb{R}^m$ such that the function $f_w: C \to \mathbb{R}$ given as $f_w(x) = \langle w, F(x) \rangle$ is convex on $C$.

Proof.

Since $K_F(y) \subseteq K$ for all $y \in C$, by (ii) in Definition 4.1, we have
(11) $\hat{F}(x,y) \in K$
for all $x, y \in C$. Applying Lemma 2.1(i), there exist $w \in S(0,1)$ and $\delta > 0$ such that
(12) $\langle w, z \rangle \ge \delta \|z\|$
for all $z \in K$. By (11) and (12), we get $\langle w, \hat{F}(x,y) \rangle \ge \delta \|\hat{F}(x,y)\| \ge 0$ for all $x, y \in C$. Obviously, the latter means that $f_w$ is convex.

Our next goal is to present a sufficient condition for the existence of a proper cone $K \subseteq \mathbb{R}^m$ such that (10) holds for all $y \in C$. For this, we assume in the remainder of this section that the set $C$ is compact and that the mapping $F$ is twice continuously differentiable on an open neighbourhood of $C$. Furthermore, for $w \in S(0,1) \subseteq \mathbb{R}^m$, define the function $\hat{f}_w(x,y) = \langle w, \hat{F}(x,y) \rangle$, where $x, y \in C$.
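
The assumption $D^2 f_w(y) \succ 0$ used in the next lemma and in Theorem 4.1 can be tested numerically by assembling $D^2 f_w(y) = w_1 D^2 F_1(y) + w_2 D^2 F_2(y)$ and checking its eigenvalues over a grid. The sketch below does this for an illustrative mapping $F$ and the choice $w = (1,0)^T$, both our own.

```python
import numpy as np

# Grid check of positive definiteness of D^2 f_w(y) over C = [0,1]^2.
# The Hessians below belong to the toy mapping F(y) = (y1^2 + y2^2, exp(y1) - y2).

def hessians(y):
    H1 = np.array([[2.0, 0.0], [0.0, 2.0]])
    H2 = np.array([[np.exp(y[0]), 0.0], [0.0, 0.0]])
    return H1, H2

def d2fw_positive_definite_on_grid(w, grid=25):
    """True if D^2 f_w(y) is positive definite at every grid point of C."""
    ts = np.linspace(0.0, 1.0, grid)
    for y1 in ts:
        for y2 in ts:
            H1, H2 = hessians(np.array([y1, y2]))
            H = w[0] * H1 + w[1] * H2
            if np.min(np.linalg.eigvalsh(H)) <= 0.0:
                return False
    return True

print(d2fw_positive_definite_on_grid(np.array([1.0, 0.0])))  # w = (1,0): f_w(y) = y1^2 + y2^2
```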

Lemma 4.3

Assume that there exists a vector $w \in S(0,1) \subseteq \mathbb{R}^m$ such that $D^2 f_w(y)$ is positive definite (denoted by $D^2 f_w(y) \succ 0$) for all $y \in C$. Then, there exist real numbers $M > 0$ and $\delta > 0$ such that $\frac{\|\hat{F}(x,y)\|}{\hat{f}_w(x,y)} < M$ for $x, y \in C$ whenever $0 < \|x - y\| < \delta$.

Proof.

Suppose contrarily that there exist two sequences $\{x^p\}, \{y^p\} \subseteq C$ such that $\lim_{p \to \infty} x^p = \lim_{p \to \infty} y^p = \bar{x}$, $\lim_{p \to \infty} \frac{x^p - y^p}{\|x^p - y^p\|} = \bar{u}$, and that
(13) $\lim_{p \to \infty} \frac{\|\hat{F}(x^p,y^p)\|}{\hat{f}_w(x^p,y^p)} = \infty$.
Note that $\bar{x} \in C$, since $C$ is compact. Obviously, we have $\frac{\|\hat{F}(x^p,y^p)\|}{\hat{f}_w(x^p,y^p)} = \frac{\left\|\hat{F}\!\left(y^p + \|x^p - y^p\| \frac{x^p - y^p}{\|x^p - y^p\|},\, y^p\right)\right\|}{\hat{f}_w\!\left(y^p + \|x^p - y^p\| \frac{x^p - y^p}{\|x^p - y^p\|},\, y^p\right)}$. A componentwise ($F_i$, $i = 1,\ldots,m$, denote the components of $F$) second-order Taylor expansion separately for each of the two parts of the latter fraction yields
(14) $\lim_{p \to \infty} \frac{\|\hat{F}(x^p,y^p)\|}{\hat{f}_w(x^p,y^p)} = \left\| \left( \frac{\bar{u}^T D^2 F_1(\bar{x}) \bar{u}}{\bar{u}^T D^2 f_w(\bar{x}) \bar{u}}, \ldots, \frac{\bar{u}^T D^2 F_m(\bar{x}) \bar{u}}{\bar{u}^T D^2 f_w(\bar{x}) \bar{u}} \right)^T \right\|$.
Since $D^2 f_w(\bar{x}) \succ 0$, the right-hand side in (14) is finite, which contradicts (13).

The next theorem presents a sufficient condition for the existence of a proper cone $K \subseteq \mathbb{R}^m$ such that (10) holds for all $y \in C$. The cone $K$, as defined in (16), will later be used in Lemma 4.5 as well as in Proposition 4.2.

Theorem 4.1

Assume that there exists a vector $w \in S(0,1) \subseteq \mathbb{R}^m$ such that $D^2 f_w(y) \succ 0$ for all $y \in C$. Then, there exists a proper cone $K \subseteq \mathbb{R}^m$ such that $K_F(y) \subseteq K$ for all $y \in C$.

Proof.

By Lemma 4.3, there exist real numbers $M_1 > 0$ and $\delta > 0$ such that $\frac{\|\hat{F}(x,y)\|}{\hat{f}_w(x,y)} < M_1$ for $x, y \in C$ whenever $0 < \|x - y\| < \delta$. Furthermore, since $D^2 f_w(y) \succ 0$ for all $y \in C$, it follows that $\hat{f}_w(x,y) > 0$ for all $x, y \in C$ with $x \neq y$. Hence, the fraction $\frac{\|\hat{F}(x,y)\|}{\hat{f}_w(x,y)}$ is defined for all $x, y \in C$ with $x \neq y$, and the compactness of $C$ implies that there exists a real number $M > 1$ such that
(15) $\frac{\|\hat{F}(x,y)\|}{\hat{f}_w(x,y)} < M$
for all $x, y \in C$ with $x \neq y$. Now, define the set
(16) $K = \{z \in \mathbb{R}^m : M \langle w, z \rangle \ge \|z\|\}$.
Since $M > 1$, it is easily seen that $w \in \operatorname{int} K$ and, therefore, $K$ is solid. Moreover, $K$ is obviously a proper cone. By (15), we have $\|\hat{F}(x,y)\| < M \hat{f}_w(x,y) = M \langle w, \hat{F}(x,y) \rangle$ for all $x, y \in C$ with $x \neq y$ and, therefore, $\hat{F}(C,y) \subseteq K$ for all $y \in C$. Finally, by Lemma 4.1, we get $K_F(y) = \operatorname{cl}[\operatorname{coneco} \hat{F}(C,y)] \subseteq \operatorname{cl}[\operatorname{coneco} K] = K$ for all $y \in C$, which completes the proof.
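
The proof of Theorem 4.1 is constructive, which suggests the following numerical sketch: estimate the constant $M$ of (15) on a grid, build the cone $K$ of (16) and check that the sampled values of $\hat{F}$ lie in $K$. The mapping $F$, the set $C = [0,1]^2$ and the vector $w$ below are illustrative choices of our own.

```python
import numpy as np

# Numerical sketch of the construction in the proof of Theorem 4.1.

def F(x):
    return np.array([x[0]**2 + x[1]**2, np.exp(x[0]) - x[1]])

def DF(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [np.exp(x[0]), -1.0]])

def F_hat(x, y):
    """F_hat(x, y) = F(x) - F(y) - DF(y)(x - y)."""
    return F(x) - F(y) - DF(y) @ (x - y)

w = np.array([1.0, 0.0])  # f_w(x) = x1^2 + x2^2, so D^2 f_w = 2*I is positive definite
ts = np.linspace(0.0, 1.0, 20)
grid = [np.array([a, b]) for a in ts for b in ts]

values = [F_hat(x, y) for x in grid for y in grid]
ratios = [np.linalg.norm(v) / np.dot(w, v) for v in values if np.dot(w, v) > 1e-12]
M = 1.1 * max(ratios)  # any M strictly above the observed supremum in (15) will do

# membership in K = {z : M <w, z> >= ||z||}, cf. (16), for every sampled value of F_hat
inside = all(M * np.dot(w, v) >= np.linalg.norm(v) - 1e-9 for v in values)
print(M, inside)
```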

Note that if $m = 2$ and the assumptions of Theorem 4.1 hold, then we can obtain a formula for $K_F$ as follows. Let $w^0 \in S(0,1)$ with $\langle w, w^0 \rangle = 0$, where $w$ is given as in Theorem 4.1. Moreover, for $y \in C$, let
(17) $v^1(y) = \operatorname{argmax}_{v \in \hat{F}(C,y) \cap S(0,1)} \langle v, w^0 \rangle$,
(18) $v^2(y) = \operatorname{argmin}_{v \in \hat{F}(C,y) \cap S(0,1)} \langle v, w^0 \rangle$.
Then, we have
(19) $K_F(y) = \operatorname{coneco}\{v^1(y), v^2(y)\}$.
Obviously, finding $v^1(y)$ and $v^2(y)$ is not trivial and a global optimization method must be used (see, e.g. [17, Subsection 1.1.4]).

Example 4.1

Let $n = m = 2$, $C = [0,1]^2$ and $F(x_1,x_2) = \left(x_1^2 + x_2^2,\ x_1^3 - x_2^3\right)^T$. Take $w = (1,0)^T$ and fix $w^0 = (0,1)^T$. By using (17) and (18), we compute $v^1(y)$ and $v^2(y)$ for $y \in \{(0,0)^T, (0.5,0)^T, (1,1)^T\}$ and list the corresponding results in the following table:

This table and (19) allow us to illustrate $K_F(y)$ for $y \in \{(0,0)^T, (0.5,0)^T, (1,1)^T\}$ in Figure 2.
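
A brute-force version of (17)–(19) for $m = 2$ is sketched below: values of $\hat{F}(\cdot,y)$ are sampled on a grid over $C$ and normalized, and the directions with extreme inner product against $w^0$ serve as approximations of $v^1(y)$ and $v^2(y)$ (normalizing sampled values replaces the intersection with $S(0,1)$). The mapping $F$ here is an illustrative choice of our own, not the one from Example 4.1; in practice a global optimization method would replace the grid search.

```python
import numpy as np

# Grid-search approximation of the generators v1(y), v2(y) from (17)-(18) for m = 2.

def F(x):
    return np.array([x[0]**2 + x[1]**2, np.exp(x[0]) - x[1]])

def DF(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [np.exp(x[0]), -1.0]])

def F_hat(x, y):
    return F(x) - F(y) - DF(y) @ (x - y)

def generators_KF(y, grid=100):
    """Approximate extreme directions of coneco F_hat(C, y) with respect to w0 = (0, 1)."""
    w0 = np.array([0.0, 1.0])
    ts = np.linspace(0.0, 1.0, grid)
    dirs = []
    for a in ts:
        for b in ts:
            v = F_hat(np.array([a, b]), y)
            nv = np.linalg.norm(v)
            if nv > 1e-10:
                dirs.append(v / nv)
    dirs = np.array(dirs)
    scores = dirs @ w0
    return dirs[np.argmax(scores)], dirs[np.argmin(scores)]

v1, v2 = generators_KF(np.array([0.5, 0.0]))
print(v1, v2)  # K_F(y) is approximately coneco{v1, v2}, cf. (19)
```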

Next, we deal with the Lipschitz continuity of two generators of $K_F(y)$. For this, we recall the assumptions given just before Lemma 4.3, namely, that $C$ is a compact set and $F$ is twice continuously differentiable. We start with the following lemma.

Figure 2. A minimal variable $K$-convexification for $m = 2$. (a) $y = (0,0)^T$, (b) $y = (0.5,0)^T$ and (c) $y = (1,1)^T$.

Lemma 4.4

  1. Let $A, B \subseteq \mathbb{R}^m$ be nonempty compact sets. Then, there exist $\bar{a} \in A$ and $\bar{b} \in B$ such that $\Delta_H(A,B) = \|\bar{a} - \bar{b}\|$ as well as $\Delta_H(A,B) \le \|\bar{a} - b\|$ for all $b \in B$.

  2. There exists $\mu > 0$ such that $\|\hat{F}(x,y^1) - \hat{F}(x,y^2)\| \le \mu \|y^1 - y^2\|$ for all $x, y^1, y^2 \in C$.

  3. Let $a, b \in \mathbb{R}^m \setminus \{0\}$. Then, it holds that $\left\| \frac{a}{\|a\|} - \frac{b}{\|b\|} \right\| \le \frac{\|a - b\|}{\sqrt{\|a\| \, \|b\|}}$, and equality holds if and only if $\|a\| = \|b\|$.

Proof.

The compactness of $A$ and $B$ implies (i). Since $\hat{F}$ is continuously differentiable and $C$ is compact, it holds that $\hat{F}$ is Lipschitz continuous on $C \times C$. Hence, there exists $\mu > 0$ such that $\|\hat{F}(x,y^1) - \hat{F}(x,y^2)\| \le \mu \|(x,y^1) - (x,y^2)\| = \mu \|y^1 - y^2\|$ for all $x, y^1, y^2 \in C$. For proving (iii), consider $\left\| \frac{a}{\|a\|} - \frac{b}{\|b\|} \right\|^2 = 2 - \frac{2 \langle a, b \rangle}{\|a\| \, \|b\|} = \frac{\|a - b\|^2 - (\|a\| - \|b\|)^2}{\|a\| \, \|b\|} \le \frac{\|a - b\|^2}{\|a\| \, \|b\|}$.

Now, we prove the Lipschitz continuity of the two generators.

Proposition 4.1

Let the set-valued mapping $G: C \rightrightarrows \mathbb{R}^m$ be given as $G(y) = \operatorname{conv} \hat{F}(C,y)$. Then, $G$ is Lipschitz continuous on $C$.

Proof.

Obviously, we only have to show that there exists $\mu > 0$ such that $\Delta_H(G(y^1),G(y^2)) \le \mu \|y^1 - y^2\|$ for all $y^1, y^2 \in C$. Since $G(y)$ is a compact set for all $y \in C$, by Lemma 4.4(i), it follows that there exists $\bar{z}^1 \in G(y^1)$ such that
(20) $\Delta_H(G(y^1),G(y^2)) \le \|\bar{z}^1 - z^2\|$
for all $z^2 \in G(y^2)$. Moreover, there exist $x^i \in C$, $\lambda_i \ge 0$, $i = 1,\ldots,p$, with
(21) $\bar{z}^1 = \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^1), \quad \sum_{i=1}^p \lambda_i = 1$.
Choosing
(22) $z^2 = \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^2)$
and substituting (21) and (22) into (20), we get
(23) $\Delta_H(G(y^1),G(y^2)) \le \left\| \sum_{i=1}^p \lambda_i \left[ \hat{F}(x^i,y^1) - \hat{F}(x^i,y^2) \right] \right\|$.
Then, the statement follows by applying the triangle inequality and Lemma 4.4(ii).
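
Proposition 4.1 can be explored empirically by estimating the ratio $d_H(G(y^1),G(y^2)) / \|y^1 - y^2\|$ for sampled pairs; in the sketch below, finite samples of $\hat{F}(C,y)$ stand in for $\operatorname{conv} \hat{F}(C,y)$, so the printed number is only a rough indication of a Lipschitz constant. $F$ and $C = [0,1]^2$ are the same illustrative choices as in the earlier sketches.

```python
import numpy as np

# Rough empirical estimate of a Lipschitz constant for G(y) = conv F_hat(C, y).

def F(x):
    return np.array([x[0]**2 + x[1]**2, np.exp(x[0]) - x[1]])

def DF(x):
    return np.array([[2 * x[0], 2 * x[1]],
                     [np.exp(x[0]), -1.0]])

def F_hat_image(y, grid=30):
    """Finite sample of F_hat(C, y) over a grid of C."""
    ts = np.linspace(0.0, 1.0, grid)
    return np.array([F(np.array([a, b])) - F(y) - DF(y) @ (np.array([a, b]) - y)
                     for a in ts for b in ts])

def hausdorff(A, B):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

rng = np.random.default_rng(2)
ratios = []
for _ in range(20):
    y1, y2 = rng.uniform(0, 1, size=2), rng.uniform(0, 1, size=2)
    ratios.append(hausdorff(F_hat_image(y1), F_hat_image(y2)) / np.linalg.norm(y1 - y2))
print(max(ratios))  # rough empirical estimate related to the Lipschitz constant of G
```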

The generator used in the next result is in general not Lipschitz continuous on $C$. However, we present a sufficient condition for its Lipschitz continuity on a subset of $C$.

Lemma 4.5

Let $w \in S(0,1) \subseteq \mathbb{R}^m$, $M > 1$ and the cone $K$ be given as in (16). Assume that $v^i \in K$, $\lambda_i \ge 0$, $i = 1,\ldots,p$, and $\sum_{i=1}^p \lambda_i = 1$. Then, we have $\left\| \sum_{i=1}^p \lambda_i v^i \right\| \ge \frac{\min_i \|v^i\|}{M}$.

Proof.

Let $\Phi$ be an orthogonal matrix whose first row is $w^T$. Then, $\left\| \sum_{i=1}^p \lambda_i v^i \right\| = \left\| \Phi \sum_{i=1}^p \lambda_i v^i \right\| \ge \sum_{i=1}^p \lambda_i \langle w, v^i \rangle \ge \sum_{i=1}^p \lambda_i \frac{\|v^i\|}{M} \ge \frac{\min_i \|v^i\|}{M}$.

As mentioned just before Lemma 4.5, the next result yields a sufficient condition for a generator to be Lipschitz continuous (only) on a subset $A \subseteq C$. Here, we have to 'cut out' a subset from $\hat{F}(C,y)$. We refer also to Remark 4.1, where we discuss how, under certain conditions, this result could be used.

Proposition 4.2

Let $A \subseteq C$ be a compact convex set. Assume that for some $w \in S(0,1) \subseteq \mathbb{R}^m$ it holds that $D^2 f_w(y) \succ 0$ for all $y \in C$, and that there exists $\delta > 0$ such that
(24) $K_F(y) = \operatorname{coneco}\left[ \hat{F}(C,y) \setminus \operatorname{int} B(0,\delta) \right]$
for all $y \in A$. Then, the set-valued mapping $G: A \rightrightarrows \mathbb{R}^m$, given as $G(y) = K_F(y) \cap S(0,1)$, is Lipschitz continuous on $A$.

Proof.

We will prove that $G$ is locally Lipschitz continuous at any arbitrarily chosen $\bar{y} \in A$. Then, the Lipschitz continuity of $G$ on $A$ follows from the convexity and the compactness of $A$. By Theorem 4.1, there exists a proper cone $K$ such that $K_F(y) \subseteq K$, where $K$ is given as in (16) for a certain $M > 1$. Let $\mu > 0$ be given as in Lemma 4.4(ii). Choose $y^1, y^2 \in B\!\left(\bar{y}, \frac{\delta}{4\mu}\right) \cap A$. By Lemma 4.4(ii), we have
(25) $\|\hat{F}(x,y^2)\| \ge \frac{\delta}{2}$
whenever $\|\hat{F}(x,y^1)\| \ge \delta$ for $x \in C$. Next, we will show that
(26) $\Delta_H(G(y^1),G(y^2)) \le \frac{2 \mu M}{\delta} \|y^1 - y^2\|$.
Analogously to the proof of Proposition 4.1, by (24), we get
(27) $\Delta_H(G(y^1),G(y^2)) \le \left\| \frac{\sum_{i=1}^p \lambda_i \hat{F}(x^i,y^1)}{\left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^1) \right\|} - \frac{\sum_{i=1}^p \lambda_i \hat{F}(x^i,y^2)}{\left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^2) \right\|} \right\|$
with
(28) $\|\hat{F}(x^i,y^1)\| \ge \delta, \quad \|\hat{F}(x^i,y^2)\| \ge \frac{\delta}{2}, \quad i = 1,\ldots,p, \quad \sum_{i=1}^p \lambda_i = 1$.
Note that in (28), the two inequalities follow from (24) and (25), respectively. By Lemma 4.4(iii) and (27), we get
(29) $\Delta_H(G(y^1),G(y^2)) \le \frac{\left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^1) - \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^2) \right\|}{\sqrt{\left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^1) \right\| \cdot \left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^2) \right\|}}$.
Moreover, by Lemma 4.5 and (28), it follows that
(30) $\left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^1) \right\| \ge \frac{\delta}{M}, \quad \left\| \sum_{i=1}^p \lambda_i \hat{F}(x^i,y^2) \right\| \ge \frac{\delta}{2M}$.
Finally, applying the triangle inequality and Lemma 4.4(ii), and using (30) in (29), we obtain (26).

Remark 4.1

Proposition 4.2 can be exploited in the following way. Let $A \subseteq C$ be a set for which the assumptions of Proposition 4.2 are fulfilled. Choose a cone-valued mapping $K_A: C \rightrightarrows \mathbb{R}^m$ with $K_A(y) = K_F(y)$ if $y \in A$ and $K_A(y) = K$ if $y \in C \setminus A$, where $K \subseteq \mathbb{R}^m$ is a fixed cone satisfying $K_F(y) = K$ for $y \in \operatorname{bd} A$ and $K_F(y) \subseteq K$ for $y \in C \setminus A$. Define the set-valued mapping $G_A(y) = K_A(y) \cap S(0,1)$. Since $S(0,1)$ is bounded, the Hausdorff distance is a metric for the family of compact subsets of $S(0,1)$ (see, e.g. [15, Section 4C]). A moment of reflection shows that $G_A$ is continuous on $C$ whenever its codomain is endowed with this metric. Moreover, by Proposition 4.2, it follows that $G_A$ is Lipschitz continuous on $A$ and, by definition, that $G_A$ is Lipschitz continuous on $C \setminus A$. Consequently, $G_A$ is Lipschitz continuous on $C$. We will come back to this approach at the end of Section 5. Of course, such a choice of $K_A$ is not always possible. Necessary or sufficient conditions for its existence will be considered in future research.

5. A particular case

In this section, we study a particular case and apply to it the general approach described in the previous sections. Specifically, we consider an interval $[\alpha,\beta] \subseteq \mathbb{R}$ and a mapping $F \in C^2([\alpha,\beta],\mathbb{R}^2)$, $F(t) = (F_1(t),F_2(t))$. Moreover, we assume that $F_1''(t) > 0$, $t \in [\alpha,\beta]$, where $F_i'(t)$ and $F_i''(t)$, $i = 1,2$, denote the first and second derivative at $t \in \mathbb{R}$, respectively. For this case, we obtain a particular expression for $K_F(t)$. First, we define four auxiliary functions.

Lemma 5.1

Let $t_0 \in [\alpha,\beta]$ and define $\phi_-(t) = \hat{F}_1(t,t_0)$, $t \in [\alpha,t_0]$, and $\phi_+(t) = \hat{F}_1(t,t_0)$, $t \in [t_0,\beta]$. Then, $\phi_-$ is strictly decreasing and $\phi_+$ is strictly increasing.

Proof.

For $\phi_+$ we get $\phi_+'(t) = F_1'(t) - F_1'(t_0) = F_1''(\theta_t)(t - t_0)$ for some $\theta_t \in (t_0,\beta)$. Since $F_1''(t) > 0$, $t \in [t_0,\beta]$, it holds that $\phi_+$ is strictly increasing. Analogously, we obtain that $\phi_-$ is strictly decreasing.

Since $\hat{F}_1(t_0,t_0) = 0$, Lemma 5.1 yields for $t_0 \in [\alpha,\beta]$ that $\hat{F}_1(t,t_0) > 0$ for all $t \in [\alpha,\beta] \setminus \{t_0\}$. Moreover, by Lemma 5.1, both $\phi_-$ and $\phi_+$ are injective functions. Therefore, the functions $\psi_-: [0,\phi_-(\alpha)] \to [0,\hat{F}_2(\alpha,t_0)]$ and $\psi_+: [0,\phi_+(\beta)] \to [0,\hat{F}_2(\beta,t_0)]$, given by $\psi_-(x) = \hat{F}_2(\phi_-^{-1}(x),t_0)$ and $\psi_+(x) = \hat{F}_2(\phi_+^{-1}(x),t_0)$, are well defined. The following two results concern the relationship between the latter two functions and $\hat{F}(t,t_0)$.

Lemma 5.2

The right derivatives of $\psi_-$ and $\psi_+$ at $x = 0$ are $\psi_-'(0) = \psi_+'(0) = \frac{F_2''(t_0)}{F_1''(t_0)}$.

Proof.

We show it for $\psi_+$. Since $\psi_+(0) = 0$, we have
(31) $\psi_+'(0) = \lim_{\Delta x \to 0^+} \frac{\psi_+(\Delta x)}{\Delta x} = \lim_{t \to t_0^+} \frac{\hat{F}_2(t,t_0)}{\hat{F}_1(t,t_0)}$.
After applying L'Hôpital's rule twice, we get the desired result. The proof runs analogously for $\psi_-$.

Lemma 5.3

If $\frac{F_2''(t)}{F_1''(t)}$ is an increasing function on $[\alpha,\beta]$, then $\psi_-(x)$ is concave and $\psi_+(x)$ is convex.

Proof.

For showing the convexity of $\psi_+$, we prove that
(32) $\psi_+''(x) > 0$
for all $x \in (0,\phi_+(\beta))$. For $x \in (0,\phi_+(\beta))$ and $t = \phi_+^{-1}(x)$, after a straightforward calculation, we get that $\psi_+''(x) = \frac{F_2''(t) - F_1''(t) \frac{F_2'(t) - F_2'(t_0)}{F_1'(t) - F_1'(t_0)}}{\left[F_1'(t) - F_1'(t_0)\right]^2}$. Since $F_1''(t) > 0$, in order to obtain (32), we use that $\frac{F_2''(t)}{F_1''(t)} > \frac{F_2'(t) - F_2'(t_0)}{F_1'(t) - F_1'(t_0)}$. By Cauchy's mean value theorem, the latter inequality holds since $t > t_0$ and $\frac{F_2''(t)}{F_1''(t)}$ is an increasing function on $[\alpha,\beta]$. The concavity of $\psi_-$ is obtained analogously.

In the remainder of this section, we assume that $\frac{F_2''(t)}{F_1''(t)}$ is increasing on $[\alpha,\beta]$. The next result is a consequence of Lemma 5.3.

Corollary 5.1

Let $t_0 \in (\alpha,\beta)$. Then, we have $\frac{\hat{F}_2(\alpha,t_0)}{\hat{F}_1(\alpha,t_0)} \le \frac{\hat{F}_2(t,t_0)}{\hat{F}_1(t,t_0)} \le \frac{\hat{F}_2(\beta,t_0)}{\hat{F}_1(\beta,t_0)}$ for all $t \in [\alpha,\beta] \setminus \{t_0\}$.

Proof.

If $t > t_0$, then for $x = \phi_+(t)$, we have $x \in (0,\phi_+(\beta)]$. By the convexity of $\psi_+$, Lemma 5.2 implies $\frac{F_2''(t_0)}{F_1''(t_0)} \le \frac{\psi_+(x)}{x} \le \frac{\psi_+(\phi_+(\beta))}{\phi_+(\beta)}$. Hence, we obtain $\frac{F_2''(t_0)}{F_1''(t_0)} \le \frac{\hat{F}_2(t,t_0)}{\hat{F}_1(t,t_0)} \le \frac{\hat{F}_2(\beta,t_0)}{\hat{F}_1(\beta,t_0)}$ and, analogously, for $t < t_0$, that $\frac{\hat{F}_2(\alpha,t_0)}{\hat{F}_1(\alpha,t_0)} \le \frac{\hat{F}_2(t,t_0)}{\hat{F}_1(t,t_0)} \le \frac{F_2''(t_0)}{F_1''(t_0)}$. A combination of these two results delivers for $t > t_0$ that $\frac{\hat{F}_2(\alpha,t_0)}{\hat{F}_1(\alpha,t_0)} \le \frac{F_2''(t_0)}{F_1''(t_0)} \le \frac{\hat{F}_2(t,t_0)}{\hat{F}_1(t,t_0)} \le \frac{\hat{F}_2(\beta,t_0)}{\hat{F}_1(\beta,t_0)}$, and a corresponding result for $t < t_0$. This completes the proof.

In the next result, we see the advantage of considering $F \in C^2([\alpha,\beta],\mathbb{R}^2)$. We show that in this case, the minimal variable $K$-convexification of $F$ on $[\alpha,\beta]$ can easily and directly be calculated.

Theorem 5.1

Let the mappings $G_\alpha: [\alpha,\beta] \to \mathbb{R}^2$ and $G_\beta: [\alpha,\beta] \to \mathbb{R}^2$ be given as
$G_\alpha(t_0) = \begin{cases} F''(\alpha) & \text{if } t_0 = \alpha, \\ \hat{F}(\alpha,t_0) & \text{if } \alpha < t_0 \le \beta, \end{cases} \qquad G_\beta(t_0) = \begin{cases} \hat{F}(\beta,t_0) & \text{if } \alpha \le t_0 < \beta, \\ F''(\beta) & \text{if } t_0 = \beta, \end{cases}$
and define the set-valued mapping $G(t_0) = \{G_\alpha(t_0), G_\beta(t_0)\}$. Then, $K_F(t_0) = \operatorname{coneco} G(t_0)$.

Proof.

We distinguish two cases.

Case 1: Let $t_0 \in (\alpha,\beta)$. Then, we have $G(t_0) = \{\hat{F}(\alpha,t_0), \hat{F}(\beta,t_0)\} \subseteq \hat{F}([\alpha,\beta],t_0)$. Hence, it holds that
(33) $\operatorname{coneco} G(t_0) \subseteq \operatorname{coneco} \hat{F}([\alpha,\beta],t_0)$.
Next, we prove that
(34) $\hat{F}([\alpha,\beta],t_0) \subseteq \operatorname{coneco} G(t_0)$.
Note that $0 \in \operatorname{coneco} G(t_0)$. Now, choose $z \in \hat{F}([\alpha,\beta],t_0) \setminus \{0\}$. Therefore, there exists $t \in [\alpha,\beta] \setminus \{t_0\}$ such that $z = \hat{F}(t,t_0)$. From Corollary 5.1, it follows for $z = (z_1,z_2)$ that $z_1 = \hat{F}_1(t,t_0) > 0$ and that $\frac{\hat{F}_2(\alpha,t_0)}{\hat{F}_1(\alpha,t_0)} \le \frac{z_2}{z_1} \le \frac{\hat{F}_2(\beta,t_0)}{\hat{F}_1(\beta,t_0)}$. Thus, there exists $\lambda \in [0,1]$ such that $\frac{z_2}{z_1} = \lambda \frac{\hat{F}_2(\alpha,t_0)}{\hat{F}_1(\alpha,t_0)} + (1 - \lambda) \frac{\hat{F}_2(\beta,t_0)}{\hat{F}_1(\beta,t_0)}$. A moment of reflection shows that the latter equation yields
(35) $z = \lambda \frac{z_1}{\hat{F}_1(\alpha,t_0)} \hat{F}(\alpha,t_0) + (1 - \lambda) \frac{z_1}{\hat{F}_1(\beta,t_0)} \hat{F}(\beta,t_0)$.
Note that in (35) the coefficients of $\hat{F}(\alpha,t_0)$ and $\hat{F}(\beta,t_0)$ are nonnegative since $\lambda \in [0,1]$, $z_1 > 0$, $\hat{F}_1(\alpha,t_0) > 0$ and $\hat{F}_1(\beta,t_0) > 0$. Thus, (34) is shown. By Lemma 4.1, (33) and (34), we get $K_F(t_0) = \operatorname{coneco} G(t_0)$.

Case 2: Let $t_0 = \alpha$ (analogously for $t_0 = \beta$). Applying L'Hôpital's rule twice, we get $\lim_{t \to \alpha} \frac{\hat{F}_2(t,\alpha)}{\hat{F}_1(t,\alpha)} = \frac{F_2''(\alpha)}{F_1''(\alpha)}$, and by $G(\alpha) = \{F''(\alpha), \hat{F}(\beta,\alpha)\}$, we get $G(\alpha) \subseteq \operatorname{cl}[\operatorname{coneco} \hat{F}([\alpha,\beta],\alpha)]$. Moreover, letting $t_0 \to \alpha$ in Corollary 5.1, we obtain
(36) $\frac{F_2''(\alpha)}{F_1''(\alpha)} \le \frac{\hat{F}_2(t,\alpha)}{\hat{F}_1(t,\alpha)} \le \frac{\hat{F}_2(\beta,\alpha)}{\hat{F}_1(\beta,\alpha)}$.
Using (36), we deduce an expression which is analogous to (35). The remaining part of the proof runs as in Case 1.

Example 5.1

Let $\alpha = -1$, $\beta = 1$ and $F(t) = (t^2,t^3)^T$. By using Theorem 5.1, we have $K_F(t_0) = \operatorname{coneco} G(t_0)$ for $t_0 \in \{-1, 0.15, 1\}$, which is illustrated in Figure 3.

Figure 3. Application of Theorem 5.1 for $F(t) = (t^2,t^3)^T$ with $\alpha = -1$ and $\beta = 1$. (a) $t_0 = -1$, (b) $t_0 = 0.15$ and (c) $t_0 = 1$.

Note that the set-valued mapping $G(t_0)$ in Theorem 5.1 is not Lipschitz continuous. Indeed, since $\lim_{t_0 \to \alpha} G_\alpha(t_0) = \lim_{t_0 \to \alpha} \hat{F}(\alpha,t_0) = 0$ and $G_\alpha(\alpha) = F''(\alpha) \neq 0$, the function $G_\alpha(t_0)$ is not continuous. However, we can find a cone-valued mapping $K^\delta(t_0)$ close to $K_F(t_0)$ such that its generator mapping is Lipschitz continuous. As an example, fix $\delta > 0$ and define
$G_\alpha^\delta(t_0) = \begin{cases} \hat{F}(\alpha,\alpha+\delta) & \text{if } \alpha \le t_0 \le \alpha+\delta, \\ \hat{F}(\alpha,t_0) & \text{if } \alpha+\delta < t_0 \le \beta, \end{cases} \qquad G_\beta^\delta(t_0) = \begin{cases} \hat{F}(\beta,t_0) & \text{if } \alpha \le t_0 < \beta-\delta, \\ \hat{F}(\beta,\beta-\delta) & \text{if } \beta-\delta \le t_0 \le \beta, \end{cases}$
and set $K^\delta(t_0) = \operatorname{coneco}\{G_\alpha^\delta(t_0), G_\beta^\delta(t_0)\}$. Analogously to Proposition 4.2, it can be shown that $K^\delta(t_0) \cap S(0,1)$ is Lipschitz continuous on $[\alpha,\beta]$. Moreover, it is easy to check that $K_F(t_0) = K^\delta(t_0)$ for all $t_0 \in [\alpha+\delta,\beta-\delta]$ and that $K_F(t_0) \subseteq K^\delta(t_0)$ for all $t_0 \in [\alpha,\beta] \setminus [\alpha+\delta,\beta-\delta]$. This latter construction represents an application of the strategy described in Remark 4.1 to the particular case considered in the current section.
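
The generators of Theorem 5.1 and the $\delta$-modified generators $G_\alpha^\delta$, $G_\beta^\delta$ just described are straightforward to evaluate; the following sketch does so for the data of Example 5.1 ($F(t) = (t^2,t^3)^T$, $\alpha = -1$, $\beta = 1$), with an arbitrary illustrative value of $\delta$.

```python
import numpy as np

# Generators from Theorem 5.1 for Example 5.1, plus the delta-modified generators
# described above; delta = 0.1 is an arbitrary illustrative choice.

ALPHA, BETA = -1.0, 1.0

def F(t):
    return np.array([t**2, t**3])

def dF(t):
    return np.array([2.0 * t, 3.0 * t**2])

def ddF(t):
    return np.array([2.0, 6.0 * t])

def F_hat(t, t0):
    """F_hat(t, t0) = F(t) - F(t0) - F'(t0)(t - t0)."""
    return F(t) - F(t0) - dF(t0) * (t - t0)

def G(t0):
    """Generators {G_alpha(t0), G_beta(t0)} of K_F(t0) according to Theorem 5.1."""
    g_alpha = ddF(ALPHA) if t0 == ALPHA else F_hat(ALPHA, t0)
    g_beta = ddF(BETA) if t0 == BETA else F_hat(BETA, t0)
    return g_alpha, g_beta

def G_delta(t0, delta=0.1):
    """Delta-modified generators whose normalization is Lipschitz continuous."""
    g_alpha = F_hat(ALPHA, ALPHA + delta) if t0 <= ALPHA + delta else F_hat(ALPHA, t0)
    g_beta = F_hat(BETA, t0) if t0 < BETA - delta else F_hat(BETA, BETA - delta)
    return g_alpha, g_beta

for t0 in (-1.0, 0.15, 1.0):
    print(t0, G(t0), G_delta(t0))
```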

6. Final remarks

In this paper, we considered vector optimization problems with variable ordering structures. They are related to $K$-convex mappings, where $K$ refers to a proper cone-valued mapping. This model has several applications, e.g. in medical diagnosis and portfolio optimization. We defined the cone of separations and showed that, under certain assumptions, the corresponding $K$-convex mapping needs to be affine. Note that this is a somewhat counter-intuitive property, and its geometric meaning still needs to be discussed in the future. Moreover, we introduced the concept of the minimal variable $K$-convexification $K_F(x)$, presented sufficient conditions for the images of this cone-valued mapping to be contained in a proper cone and proved the Lipschitz continuity of some corresponding generator mappings. Note that we described a theoretical approach for generating a variable ordering mapping for which a given mapping is $K$-convex. We applied this approach to a particular case; the construction of further applications will be a topic for further research.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  1. Bello Cruz JY, Bouza Allende G. A steepest descent-like method for variable order vector optimization problems. J Optim Theory Appl. 2014;162:371–391.
  2. Graña Drummond LM, Iusem AN. A projected gradient method for vector optimization problems. Comput Optim Appl. 2004;28(1):5–29.
  3. Graña Drummond LM, Svaiter BF. A steepest descent method for vector optimization. J Comput Appl Math. 2005;175(2):395–414.
  4. Luc DT. Theory of vector optimization. New York (NY): Springer; 1989.
  5. Luc DT, Tan NX, Tinh PN. Convex vector functions and their subdifferential. Acta Math. 1998;23(1):107–127.
  6. Bello Cruz JY, Bouza Allende G. On inexact projected gradient methods for solving variable vector optimization problems. Optim Eng. 2020. Available from: https://doi.org/10.1007/s11081-020-09579-8
  7. Durea M, Strugariu R, Tammer C. On set-valued optimization problems with variable ordering structure. J Global Optim. 2015;61:745–767.
  8. Engau A. Variable preference modeling with ideal-symmetric convex cones. J Global Optim. 2008;42:295–311.
  9. Wacker M, Deinzer F. Automatic robust medical image registration using a new democratic vector optimization approach with multiple measures. In: Yang GZ, editor. Medical image computing and computer-assisted intervention – MICCAI. New York: Springer; 2009.
  10. Eichfelder G. Variable ordering structures in vector optimization. New York (NY): Springer; 2014.
  11. Boyd S, Vandenberghe L. Convex optimization. Cambridge: Cambridge University Press; 2009.
  12. Dattorro J. Convex optimization & Euclidean distance geometry. Palo Alto (CA): Meboo Publishing USA; 2013.
  13. Jahn J. Vector optimization: theory, applications and extensions. New York (NY): Springer; 2011.
  14. Mordukhovich BS. Variational analysis and generalized differentiation. New York (NY): Springer; 2006.
  15. Rockafellar RT, Wets RJ. Variational analysis. New York (NY): Springer; 1998.
  16. Bouza Allende G, Hernández Escobar D, Rückmann J-J. Generation of K-convex test problems in variable ordering settings. Rev Inv Oper. 2018;39(3):463–479.
  17. Pintér JD. Global optimization in action. New York (NY): Springer; 1996.