
部分信息下正倒向随机系统的最优控制和微分对策理论

Optimal Control and Differential Game of Partial Information Forward-Backward Stochastic Systems

【Author】 Xiao Hua

【Advisor】 Wu Zhen

【Author Information】 Shandong University, Financial Mathematics and Financial Engineering, 2011, PhD

【摘要】 倒向随机微分方程是一个带终端条件而不是初始条件的Ito型随机微分方程。倒向随机微分方程的线性形式由Bismut[7]引入,而非线性形式由Pardoux和Peng[39]、Duffie和Epstein[13]分别独立引入。倒向随机微分方程与一个正向随机微分方程相耦合,形成了一个正倒向随机微分方程。自从被引入以来,正倒向随机微分方程在许多不同领域,特别在随机控制、金融数学方面备受关注。比如:源自随机最优控制的经典的哈密顿系统就是一类正倒向随机微分方程;用于期权定价的Black-Scholes公式能够通过正倒向随机微分方程表示得到。有关正倒向随机微分方程的更多内容,参见Ma和Yong[34]、Yong和Zhou[70]的专著。由于正倒向随机微分方程是很好的动态系统,我们很自然地去考虑其系统下的随机最优控制和微分对策问题。本文致力于研究完全和部分信息下的正倒向随机微分方程的随机滤波、最优控制和微分对策。

Wang和Wu[54]首先研究了系统状态和观测方程由布朗运动所驱动的正倒向随机系统的滤波理论。他们提出了一种倒向分离的技术,而这种技术在解决部分可观的最优控制问题时比Wonham[59]的分离原理更方便。受Wang和Wu工作的启迪,我们研究了系统状态和观测方程由布朗运动和泊松过程联合驱动的正倒向随机系统的滤波方程,并将其应用到一类带随机跳的部分可观的最优控制问题。由于泊松过程随机跳跃的性质,我们得到了不同于Wang和Wu[54]的一些新的有趣的结果。Shi和Wu[47]研究了一类带随机跳的部分耦合的正倒向随机微分方程的最优控制问题,Wu[61]研究了不带跳的部分可观的正倒向随机微分方程的最优控制,他们都要求控制域是凸的。Wang和Wu[55]则研究了控制域非凸、正向方程扩散项系数不含控制变量的部分耦合的正倒向随机系统的部分可观的最优控制问题。基于前面的工作,Xiao[63]考虑了带有随机跳跃的部分耦合正倒向随机系统在控制域是凸的情况下的部分可观的最优控制问题,得到了最优控制需要满足的一个必要条件和充分条件,将Shi和Wu[47]推广到部分可观的情况,将Wu[61]推广到随机跳的情况,也将Liptser和Shiryayev[33]、Bensoussan[6]、Tang[50]、Wang和Wu[54,55]的结果部分推广到随机跳或者正倒向系统的情形。然而,前述工作都没有考虑状态和观测有相关噪声的情形。据我所知,目前仅有Tang[50]考虑了正向连续状态与观测过程具有相关噪声的情况,得到了一般的随机最大值原理。在这里,我们研究了具有相关噪声的、带有随机跳跃的正倒向随机系统的最优控制问题。在凸控制域的条件下,我们得到了一个最大值原理和一个验证定理。当前的工作能够包含Shi和Wu[47]、Wu[61]的结果,能够将Shi和Wu[48]、Wang和Wu[55]部分推广到随机跳跃的情形,将Tang和Hou[51]、Xiao[63]部分推广到相关噪声的情形,将Tang[50]部分推广到正倒向跳扩散系统,将Peng[41]部分推广到部分信息的情形。

到目前为止,仅有两篇文章考虑倒向随机微分方程的微分对策问题:一篇是Yu和Ji[72],运用完全平方技术研究了线性二次非零和微分对策,得到了一个显式的纳什均衡点;另一篇是Wang和Yu[56],研究了非线性倒向随机微分方程的微分对策问题,以最大值原理的形式给出了纳什均衡点的充分和必要条件。上述对策问题都局限于倒向系统的研究。据我所知,仅有Buckdahn和Li[8]、Yu[71]研究了正倒向系统的微分对策问题。在Buckdahn和Li[8]中,对策系统的值函数通过倒向方程在零时刻的解定义,进而证明了一个动态规划原理,并揭示对策的上下值函数是Hamilton-Jacobi-Bellman-Isaacs方程唯一的粘性解。最近,Yu[71]研究了正倒向系统线性二次非零和的对策问题。在本文中,我们研究更一般的情形,即非线性的部分耦合的正倒向随机微分方程的微分对策问题,联合正倒向随机微分方程理论和经典的凸变分技术,得到了非零和对策均衡点与零和对策鞍点的最大值原理和验证定理。为了更好地刻画市场中所谓的非正常交易现象(比如内部交易)以及寻找部分信息倒向重随机微分方程线性二次非零和微分对策均衡点的显式解,我们关心一类新的部分信息下起始点耦合的正倒向重随机微分方程的微分对策问题。这类问题具有更广泛的理论和实际意义。首先,正倒向重随机系统包含许多系统作为它的特例。例如:如果我们去掉倒向Ito积分项,或者正向方程,或者两者同时去掉,则正倒向重随机系统退化为正倒向随机系统、倒向重随机系统或倒向系统;其次,所有的结果能够退化成完全信息的情形;最后,如果当前的零和随机微分对策仅有一个参与者,则对策问题退化成一些相关的最优控制问题。更详细地说,我们的结果是如下一些研究工作的部分推广:部分信息倒向随机微分方程和正倒向随机微分方程的最优控制(见Huang、Wang和Xiong[20],Xiao和Wang[64]),完全信息的倒向重随机微分方程的最优控制(见Han、Peng和Wu[18]),完全信息和部分信息的倒向随机微分方程的微分对策(见Wu和Yu[56]、Yu和Ji[72]、Zhang[73]、Wu和Yu[57])。

本文共分四章,主要结果如下。

第一章:我们对第二到第四章研究的问题进行了简要的介绍。

第二章:我们研究了线性的带随机跳跃的正倒向随机微分方程的随机滤波。通过应用得到的滤波方程,我们求解了一个部分可观的线性二次的最优控制问题,得到了一个显式可观的最优控制。定理2.1 设条件(H2.1)和(H2.2)成立,方程(2.14)存在解,则状态(x,y,z1,z2,r1,r2)的滤波估计(πt(x),πt(y),πt(z1),πt(z2),πt(r1),πt(r2))由(2.14)、(2.22)和(2.23)表出,滤波估计πt(x)的条件均方误差由(2.21)表出。推论2.1 假定a6(·)=a10(·)≡0,即x(·)和N1(·)不会同时发生跳跃,则(2.14)化为相应的简化方程,(2.22)、(2.23)和(2.21)仍然成立,其中相应的a6(·)和a10(·)由零代替。推论2.2 如果c5(·)≡0,即观测过程Z(·)不会发生跳跃,则(2.24)仍是相应的滤波方程,(2.22)、(2.23)和(2.21)仍然成立。定理2.2 设条件(H2.1)-(H2.3)成立,则对任意的v(·)∈Uad,方程(2.30)的解xv(·)有相应的滤波估计,这里采用了记号γ(t)=E[(xv(t)-πt(xv))²|FtZ]。定理2.3 设条件(H2.1)-(H2.4)成立,则(2.44)式表示的u(·)是前述的部分可观的最优控制问题的真正的最优控制。定理2.4 设条件(H2.1)-(H2.4)成立,则最优控制u(·)和相应的泛函指标J(u(·))分别由(2.44)和(2.54)表示。

第三章:我们研究了带随机跳跃的部分可观的正倒向随机微分方程的最优控制问题,就状态和观测不相关和相关两种情况分别进行了讨论。我们以最大值原理形式确立了两种情况下最优控制的必要条件和充分条件,并举了两个例子来说明理论的应用。引理3.1 设条件(H3.1)成立,则相应的估计式成立。引理3.2 设条件(H3.1)成立,则相应的估计式成立。引理3.3 设条件(H3.1)成立,则相应的变分不等式成立。定理3.1 设条件(H3.1)成立,u(·)是我们随机最优控制问题的最优控制,(x(·),y(·),z(·),r(·,·))是相应的最优轨迹,(p(·),q(·),k(·),L(·,·))是方程(3.19)的解,则相应的最大值条件成立。定理3.2 设(H3.1)和(H3.2)成立,zv(·)是FtY适应的,u(·)∈Uad是一个容许控制,(x(·),y(·),z(·),r(·,·))是其相应的轨迹。另设β(·)和(p(·),q(·),k(·),L(·,·))分别满足(3.17)和(3.19),哈密顿函数H关于(x,y,z,r,v)是凸的,且相应的极小值条件成立,那么u(·)是一个最优控制。定理3.3 系统满足(3.30)和(3.33),容许控制集Uad如(3.32)定义,选取v(·)∈Uad使(3.29)所表示的成本泛函达到最小,这表示了一个部分可观的最优控制问题,那么如(3.35)中所示的候选最优控制u(·)是想要的唯一最优控制,其显式表达如(3.57)所示。定理3.4 设条件(H3.1)成立,容许控制集Uad如(3.1)定义,u(·)是一个最优控制,{p,(Q,K,K,R),(q,k,k,r)}是方程(3.68)在控制u(·)下相应的Ft适应的平方可积的解,则最大值原理对任意的v(·)∈Uad都成立。定理3.5 设条件(H3.1)和(H3.2)成立,pv(·)是FtY适应的,u(·)∈Uad是一个容许控制,其相应的状态轨迹为(x(·),y(·),z(·),z(·),r(·,·))。又设{p,(Q,K,K,R),(q,k,k,r)}是方程(3.68)的解,哈密顿函数H(t,u(t))关于(x,y,z,z,r,v)是凸的,且相应的极小值条件成立,则u(·)是一个最优控制。

第四章:我们首先研究了终端耦合的正倒向随机微分方程的微分对策问题,给出了最大值原理形式的必要性条件和充分条件。这个研究的动机之一是为了寻找非线性期望下线性二次零和微分对策鞍点的显式解。为了更好刻画所谓市场中非正常交易现象(比如内部交易)以及寻找部分信息倒向重随机微分方程线性二次非零和微分对策均衡点的显式解,我们接着研究了一类新的部分信息下起始点耦合的正倒向重随机微分方程的微分对策问题。对非零和对策的纳什均衡点和零和对策的鞍点,我们都给出了必要性条件和充分性条件。引理4.1 设条件(H4.1)成立,则对i=1,2,相应的估计式成立。引理4.2 设条件(H4.1)和(H4.2)成立,则对i=1,2,相应的变分不等式成立。定理4.1 设条件(H4.1)和(H4.2)成立,(u1(·),u2(·))是问题Ⅰ的一个均衡点,(x(·),y(·),z(·))和(pi(·),qi(·),ki(·))是(4.10)和(4.23)相应的解,则相应的条件对任意的(v1(·),v2(·))∈U1×U2,a.e.,a.s.成立。定理4.2 设条件(H4.1)、(H4.2)和(H4.3)成立,(u1(·),u2(·))∈U1×U2是一个容许控制,(x,y,z)和(pi,qi,ki)是方程(4.10)和(4.23)相应的解。假定相应的函数对任意的(t,a,b,c)∈[0,T]×Rn×Rm×Rm×d存在,对任意的t∈[0,T]关于(a,b,c)是凹的(Arrow条件),且相应的极小值条件成立,则(u1(·),u2(·))是问题Ⅰ的一个均衡点。定理4.3 设条件(H4.1)和(H4.2)成立,(u1(·),u2(·))∈U1×U2是问题Ⅱ的一个鞍点,(x,y,z)和(p,q,k)是方程(4.10)和(4.23)的解,这里的哈密顿函数H1和H2如(4.37)和(4.38)定义,则相应的条件对任意的(v1(·),v2(·))∈U1×U2,a.e.,a.s.成立。定理4.4 设条件(H4.1)、(H4.2)和(H4.3)成立,(u1(·),u2(·))∈U1×U2是一个容许控制,(x,y,z)和(p,q,k)是方程(4.10)和(4.41)的解。假定哈密顿函数H满足如下的条件最小最大值原理:(ⅰ)设φ和γ都是凹函数,相应的函数对任意的(t,a,b,c)∈[0,T]×Rn×Rm×Rm×d存在,且关于(a,b,c)是凹的,则对任意的v2(·)∈U2,相应的不等式成立。(ⅱ)设φ和γ都是凸函数,相应的函数对任意的(t,a,b,c)∈[0,T]×Rn×Rm×Rm×d存在,且关于(a,b,c)是凸的,则对任意的v1(·)∈U1,相应的不等式成立。(ⅲ)设(ⅰ)和(ⅱ)都成立,则(u1(·),u2(·))是一个鞍点。定理4.5 设条件(H4.4)成立,(u1(·),u2(·))是问题(NZSG)的一个均衡点,而且(y(·),z(·),Y(·),Z(·))和(pi(·),pi(·),qi(·),qi(·))是方程(4.62)和(4.63)相应于(u1(·),u2(·))的各自的解,则相应的条件对任意的(v1(·),v2(·))∈U1×U2,a.e.,a.s.成立。推论4.1 设条件(H4.4)成立,对任意的t∈[0,T],εt=Ft,(u1(·),u2(·))是问题(NZSG)的均衡点,而且(y(·),z(·),Y(·),Z(·))和(pi(·),pi(·),qi(·),qi(·))是方程(4.62)和(4.63)相应于(u1(·),u2(·))的各自的解,则相应的条件对任意的(v1(·),v2(·))∈U1×U2,a.e.,a.s.成立。定理4.6 设条件(H4.4)和(H4.5)成立,(y,z,Y,Z)和(pi,pi,qi,qi)是方程(4.62)和(4.63)相应于(u1(·),u2(·))的解。假定φi和γi分别关于Y和y(i=1,2)是凹的,对任意的(t,y,z,Y,Z)∈[0,T]×Rn×Rn×1×Rm×Rm×d,(y,z,Y,Z,v1)→H1(t,y,z,Y,Z,v1,u2(t),p1(t),p1(t),q1(t),q1(t))和(y,z,Y,Z,v2)→H2(t,y,z,Y,Z,u1(t),v2,p2(t),p2(t),q2(t),q2(t))是凹的,而且相应的极大值条件成立,则(u1(·),u2(·))是问题(NZSG)的均衡点。

【Abstract】 A backward stochastic differential equation (BSDE, in short) is an Ito-type stochastic differential equation (SDE, in short) in which the terminal rather than the initial condition is specified. BSDEs were introduced by Bismut [7] in the linear case and independently by Pardoux and Peng [39] and Duffie and Epstein [13] in the nonlinear case. A BSDE coupled with a forward SDE forms a forward-backward stochastic differential equation (FBSDE, in short). Since their introduction, FBSDEs have received considerable research attention in a number of different areas, especially in stochastic control and financial mathematics. For instance, the classical Hamiltonian system arising from necessary conditions for stochastic optimal control problems is one such kind of equation, and the celebrated Black-Scholes formula for option pricing can be recovered via an FBSDE. For more details, refer especially to the monographs by Ma and Yong [34] and Yong and Zhou [70]. Since FBSDEs are well-defined dynamic systems, it is natural to consider optimal control and differential game problems for them. This thesis is dedicated to studying stochastic filtering, optimal control and differential games of FBSDEs with complete or partial information. Wang and Wu [54] originally studied the filtering theory of forward-backward stochastic systems in which the state and observation equations are driven by standard Brownian motions. They proposed a backward separation technique, which is more convenient for solving the partially observable optimal control problem than the separation principle of Wonham [59]. Inspired by Wang and Wu [54], we study a more general case where the state and observation equations are driven by both Brownian motions and Poisson processes.
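For orientation, the coupled forward-backward system discussed throughout has, in generic notation, the following shape. This is a sketch only: the coefficients b, σ, f and Φ are placeholders, not the thesis's chapter-specific data.

```latex
\begin{cases}
\;\; dx_t = b(t, x_t, y_t, z_t)\,dt + \sigma(t, x_t, y_t, z_t)\,dW_t, \\
 -dy_t = f(t, x_t, y_t, z_t)\,dt - z_t\,dW_t, \\
\;\; x_0 = a, \qquad y_T = \Phi(x_T).
\end{cases}
```

The pair (y, z) is required to be adapted to the forward filtration; it is this adaptedness constraint, rather than any reversal of time, that makes the backward component genuinely different from an SDE run backwards.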
Due to the random jumps introduced by the Poisson processes, we obtain some new and interesting results that are distinct from those of Wang and Wu [54]. Shi and Wu [47] investigated a class of optimal control problems for forward-backward stochastic differential equations with random jumps (FBSDEPs, in short), and Wu [61] studied optimal control of partially observable FBSDEs, both under the assumptions that the control domain is convex and that the diffusion term may contain the control variable. Wang and Wu [55] studied a class of stochastic recursive optimal control problems in the case where the control domain is not necessarily convex and the diffusion term of the forward equation does not contain the control variable. Based on these works, we consider the optimal control of partially observable FBSDEPs and establish a necessary condition and a sufficient condition for an optimal control. The results extend those of Shi and Wu [47] and Wu [61] to the cases of partial observation and random jumps respectively, and partly generalize those of Liptser and Shiryayev [33], Bensoussan [6], Tang [50] and Wang and Wu [54,55] to the cases of forward-backward systems or random jumps. However, the works mentioned above do not deal with correlation between the states and observations. To the best of my knowledge, there is only one paper on this topic (see Tang [50]), in which only forward system dynamics driven by Brownian motion were considered and a general stochastic maximum principle was proved. Here, we study optimal control problems of FBSDEPs with correlated noisy observations. In the case of a convex control domain, a local maximum principle and a verification theorem are proved. The present results partially extend Shi and Wu [48], Tang and Hou [51], Tang [50], Wang and Wu [55], Xiao [63] and Meng [35], which treat the Brownian motion case only, Poisson point processes only, forward SDEs only, or uncorrelated noisy observations.
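For a convex control domain, a local maximum principle of the kind referred to here typically takes a variational-inequality form. The following is a generic sketch; the Hamiltonian H and the adjoint processes (p, q, k) stand in for the thesis's chapter-specific objects:

```latex
\big\langle H_v\big(t, x_t, y_t, z_t, u_t;\, p_t, q_t, k_t\big),\; v - u_t \big\rangle \;\ge\; 0,
\qquad \forall\, v \in U,\ \text{a.e. } t \in [0,T],\ \text{a.s.},
```

i.e., the optimal control u minimizes the Hamiltonian to first order over the convex set U; the matching sufficient (verification) theorems add convexity of H and of the terminal costs.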
A full information control problem can be considered a special case of a partial information control problem. From this point of view, the present results partially extend the relevant ones in Peng [41], Shi and Wu [47] and Xu [66]. Up to now, there are only two papers on differential games of BSDEs: one is Yu and Ji [72], where a linear-quadratic (LQ, in short) nonzero-sum game was studied by a standard completion-of-squares technique and the explicit form of a Nash equilibrium point was obtained; the other is Wang and Yu [56], where the game system is a nonlinear BSDE, and a necessary condition and a sufficient condition in the form of a maximum principle were established. The game problems mentioned above are restricted to backward stochastic systems. To the best of my knowledge, there are only two papers on differential games of forward-backward stochastic systems (see Buckdahn and Li [8] and Yu [71]). In Buckdahn and Li [8], the game system is described by a decoupled FBSDE, and the performance criterion is defined by the solution of the BSDE evaluated at time 0. Buckdahn and Li proved a dynamic programming principle for both the upper and the lower value functions of the game, and showed that these two functions are the unique viscosity solutions to the upper and the lower Hamilton-Jacobi-Bellman-Isaacs equations. Recently, Yu [71] studied a linear-quadratic nonzero-sum game problem for forward-backward stochastic systems, where an FBSDE method was employed to obtain an explicit Nash equilibrium point. In the present thesis, we study the problem in a more general situation. Combining FBSDE theory with classical convex variational techniques, we prove a necessary condition and a sufficient condition, in the form of a maximum principle, for a Nash equilibrium point of a nonzero-sum differential game of FBSDEs, as well as for a saddle point of a zero-sum differential game of FBSDEs.
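The two solution concepts compared here are the standard ones; writing J^i for player i's cost functional and U_i for the admissible control sets, they read:

```latex
\text{Nash equilibrium:}\quad
J^1(u_1, u_2) \le J^1(v_1, u_2), \qquad
J^2(u_1, u_2) \le J^2(u_1, v_2),
\qquad \forall\, (v_1, v_2) \in \mathcal{U}_1 \times \mathcal{U}_2;
\\[4pt]
\text{Saddle point (zero-sum, } J := J^1 = -J^2\text{):}\quad
J(u_1, v_2) \;\le\; J(u_1, u_2) \;\le\; J(v_1, u_2).
```

In the zero-sum case the saddle-point inequalities imply that the upper and lower values of the game coincide at (u_1, u_2), which is why a single Hamiltonian with a conditional mini-maximum principle suffices there.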
Meanwhile, an example of a nonzero-sum differential game is worked out to illustrate the theoretical applications; in terms of the maximum principle, the explicit form of an equilibrium point is obtained. Motivated by finding an equilibrium point of an LQ nonzero-sum differential game of partial information backward doubly stochastic differential equations (BDSDEs, in short) and by describing so-called informal trading phenomena such as "insider trading" in the market, we are concerned with a new type of differential game problem for partial information forward-backward doubly stochastic differential equations (FBDSDEs, in short). This problem possesses wide generality. Firstly, the FBDSDE game system covers many systems as particular cases. For example, if we drop the backward Ito integral term, the forward equation, or both, then the FBDSDE reduces to an FBSDE, a BDSDE or a BSDE. Secondly, all the results can be reduced to the case of full information. Finally, if the present zero-sum stochastic differential game has only one player, the game problem reduces to related optimal control problems. In detail, our results partially extend optimal control of partial information BSDEs and FBSDEs (see Huang, Wang and Xiong [20] and Xiao and Wang [64]) and of full information BDSDEs (see Han, Peng and Wu [18]), and differential games of full information and partial information BSDEs (see Wu and Yu [56], Yu and Ji [72], Zhang [73] and Wu and Yu [57]). The thesis consists of four chapters. We list the main results as follows. Chapter 1: We give a brief introduction to the problems investigated in Chapters 2 to 4. Chapter 2: We study stochastic filtering of linear FBSDEPs.
By applying the filtering equations established, we solve a partially observable LQ control problem, where an explicitly observable optimal control is determined by the optimal filtering estimate. Theorem 2.1 Let (H2.1) and (H2.2) hold, and assume that there exists a solution to (2.14). Then the filtering estimate (πt(x), πt(y), πt(z1), πt(z2), πt(r1), πt(r2)) of the state (x, y, z1, z2, r1, r2) is given by (2.14), (2.22) and (2.23), and the conditional mean square error of the filtering estimate πt(x) is given by (2.21). Corollary 2.1 If a6(·) = a10(·) ≡ 0, that is, x(·) and N1(·) have no common jump times, then (2.14) becomes the corresponding simplified equation, and (2.22), (2.23) and (2.21) still hold, with a6(·) and a10(·) in Σ(·), Λ(·) and (2.21) replaced by 0. Corollary 2.2 If c5(·) ≡ 0, that is, the observation process Z(·) has no jumps, then (2.24) is the corresponding filtering equation, and (2.22), (2.23) and (2.21) still hold. Theorem 2.2 Let (H2.1)-(H2.3) hold. For any v(·) ∈ Uad, the state variable xv(·), the solution to (2.30), has the stated filtering estimates, where we adopt the notation γ(t) = E[(xv(t) − πt(xv))²|FtZ]. Theorem 2.3 If (H2.1)-(H2.4) hold, then u(·) in (2.44) is indeed an optimal control for the aforesaid partially observable optimal control problem. Theorem 2.4 Let (H2.1)-(H2.4) hold. Then the optimal control u(·) and the corresponding performance criterion J(u(·)) are given by (2.44) and (2.54), respectively. Chapter 3: We first study optimal control of partially observable FBSDEPs when the states and observations are uncorrelated. Based on this, we further extend it to the case where the states and observations are correlated. For these two cases, we establish the corresponding necessary and sufficient conditions in the form of maximum principles. We also work out two examples to illustrate the theoretical applications. Lemma 3.1 Let assumption (H3.1) hold. Then the stated estimate holds. Lemma 3.2 Let assumption (H3.1) hold. Then the stated estimate holds. Lemma 3.3 Let assumption (H3.1) hold.
Then the following variational inequality holds. Theorem 3.1 Let (H3.1) hold. Let u(·) be an optimal control for our stochastic optimal control problem, (x(·), y(·), z(·), r(·,·)) be the corresponding optimal trajectory, and (p(·), q(·), k(·), L(·,·)) be the solution of (3.19). Then the stated maximum condition holds. Theorem 3.2 Let (H3.1) and (H3.2) hold. Let zv(·) be FtY-adapted, u(·) ∈ Uad be an admissible control, and (x(·), y(·), z(·), r(·,·)) be the corresponding trajectories. Let β(·) and (p(·), q(·), k(·), L(·,·)) satisfy (3.17) and (3.19), respectively. Moreover, suppose the Hamiltonian H is convex in (x, y, z, r, v) and the stated minimum condition holds. Then u(·) is an optimal control. Theorem 3.3 Minimizing the cost functional (3.29) over v(·) ∈ Uad defined in (3.32), subject to (3.30) and (3.33), formulates a partially observed optimal control problem. The candidate optimal control u(·) in (3.35) is the desired unique optimal control, and its explicit expression is given by (3.57). Theorem 3.4 Assume that hypothesis (H3.1) holds. Let u(·) be an optimal control and {p, (Q, K, K, R), (q, k, k, r)} be the corresponding Ft-adapted square-integrable solution of FBSDEP (3.68). Then the necessary maximum principle holds for any v(·) ∈ Uad defined by (3.1). Theorem 3.5 Let (H3.1) and (H3.2) hold, ρv(·) be FtY-adapted, and u(·) ∈ Uad be an admissible control with corresponding trajectories (x(·), y(·), z(·), z(·), r(·,·)). Further, suppose that {p, (Q, K, K, R), (q, k, k, r)} satisfies equation (3.68), the Hamiltonian H(t, u(t)) is convex in (x, y, z, z, r, v), and the stated minimum condition holds. Then u(·) is an optimal control. Chapter 4: We first study differential games of terminally coupled FBSDEs. One motivation for this study is the problem of finding a saddle point in an LQ zero-sum differential game under generalized expectation. We give necessary and sufficient optimality conditions for the foregoing games.
Motivated by finding an equilibrium point of an LQ nonzero-sum differential game of partial information BDSDEs and by describing so-called informal trading phenomena such as "insider trading" in the market, we further investigate differential games of partial information FBDSDEs. A necessary condition and a sufficient condition are given for a Nash equilibrium point of the nonzero-sum game, as well as for a saddle point of the zero-sum game. Lemma 4.1 Let assumption (H4.1) hold. Then, for i = 1, 2, the stated estimates hold. Lemma 4.2 Let assumptions (H4.1) and (H4.2) hold. Then the following variational inequality holds for i = 1, 2. Theorem 4.1 Let (H4.1) and (H4.2) hold. Let (u1(·), u2(·)) be an equilibrium point of Problem I with the corresponding solutions (x(·), y(·), z(·)) and (pi(·), qi(·), ki(·)) of (4.10) and (4.23). Then the stated conditions are true for any (v1(·), v2(·)) ∈ U1 × U2, a.e., a.s. Theorem 4.2 Let (H4.1), (H4.2) and (H4.3) hold. Let (u1(·), u2(·)) ∈ U1 × U2 with the corresponding solutions (x, y, z) and (pi, qi, ki) of equations (4.10) and (4.23). Suppose the stated derivatives exist for all (t, a, b, c) ∈ [0, T] × Rn × Rm × Rm×d and are concave in (a, b, c) for all t ∈ [0, T] (the Arrow condition), and moreover that the stated minimum conditions hold. Then (u1(·), u2(·)) is an equilibrium point of Problem I. Theorem 4.3 Let assumptions (H4.1) and (H4.2) hold. Let (u1(·), u2(·)) ∈ U1 × U2 be a saddle point of Problem II with corresponding solutions (x, y, z) and (p, q, k) of equations (4.10) and (4.23), where the Hamiltonian functions H1 and H2 are defined by (4.37) and (4.38), respectively. Then the stated conditions are true for any (v1(·), v2(·)) ∈ U1 × U2, a.e., a.s. Theorem 4.4 Let (H4.1), (H4.2) and (H4.3) hold. Let (u1(·), u2(·)) ∈ U1 × U2 with the corresponding solutions (x, y, z) and (p, q, k) of equations (4.10) and (4.41). Suppose that the Hamiltonian function H satisfies the following conditional mini-maximum principle: (i) Assume that both φ and γ are concave, the stated function exists for all (t, a, b, c) ∈ [0, T] × Rn × Rm × Rm×d, and it is concave in (a, b, c).
Then the stated inequalities hold. (ii) Assume that both φ and γ are convex, the stated function exists for all (t, a, b, c) ∈ [0, T] × Rn × Rm × Rm×d, and it is convex in (a, b, c). Then the stated inequalities hold. (iii) If both (i) and (ii) hold, then (u1(·), u2(·)) is a saddle point, and the stated value identity follows. Theorem 4.5 Let (H4.4) hold and (u1(·), u2(·)) be an equilibrium point of Problem (NZSG). Further, let (y(·), z(·), Y(·), Z(·)) and (pi(·), pi(·), qi(·), qi(·)) be the solutions of (4.62) and (4.63) corresponding to the control (u1(·), u2(·)), respectively. Then the stated conditions are true for any (v1(·), v2(·)) ∈ U1 × U2, a.e., a.s. Corollary 4.1 Suppose that εt = Ft for all t ∈ [0, T]. Let (H4.4) hold, and (u1(·), u2(·)) be an equilibrium point of Problem (NZSG). Moreover, let (y(·), z(·), Y(·), Z(·)) and (pi(·), pi(·), qi(·), qi(·)) be the solutions of (4.62) and (4.63) corresponding to the control (u1(·), u2(·)), respectively. Then the stated conditions are true for any (v1(·), v2(·)) ∈ U1 × U2, a.e., a.s. Theorem 4.6 Let (H4.4) and (H4.5) hold. Let (y, z, Y, Z) and (pi, pi, qi, qi) be the solutions of equations (4.62) and (4.63) corresponding to the admissible control (u1(·), u2(·)), respectively. Suppose that φi and γi are concave in Y and y (i = 1, 2), respectively, and that for all (t, y, z, Y, Z) ∈ [0, T] × Rn × Rn×1 × Rm × Rm×d, (y, z, Y, Z, v1) → H1(t, y, z, Y, Z, v1, u2(t), p1(t), p1(t), q1(t), q1(t)) and (y, z, Y, Z, v2) → H2(t, y, z, Y, Z, u1(t), v2, p2(t), p2(t), q2(t), q2(t)) are concave, and moreover that the stated maximum conditions hold. Then (u1(·), u2(·)) is an equilibrium point of Problem (NZSG).
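In the linear continuous case without jumps, the filtering estimate πt(x) of Chapter 2 reduces to the classical Kalman-Bucy filter. The following is a purely illustrative numerical sketch under that simplification: the scalar coefficients a, b, c are hypothetical, and the jump terms and backward components of the thesis are omitted.

```python
import numpy as np

def kalman_bucy(a, b, c, x0, T=5.0, n=5000, seed=0):
    """Euler scheme for the scalar Kalman-Bucy filter.

    State:       dx     = a*x dt + b dW
    Observation: dZ     = c*x dt + dV
    Filter:      dpi    = a*pi dt + gamma*c*(dZ - c*pi dt)
    Error var.:  dgamma = (2*a*gamma + b**2 - gamma**2 * c**2) dt  (Riccati ODE)
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x, pi, gamma = x0, x0, 0.0  # known initial state => zero initial error variance
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dV = rng.normal(0.0, np.sqrt(dt))
        dZ = c * x * dt + dV                                 # observation increment
        x += a * x * dt + b * dW                             # hidden state step
        pi += a * pi * dt + gamma * c * (dZ - c * pi * dt)   # filter step (innovation)
        gamma += (2 * a * gamma + b**2 - gamma**2 * c**2) * dt  # Riccati step
    return x, pi, gamma

x, pi, gamma = kalman_bucy(a=-1.0, b=0.5, c=1.0, x0=1.0)
```

For a stable state (a < 0) the Riccati equation drives gamma to its positive steady state (a + sqrt(a**2 + b**2 * c**2)) / c**2, so the conditional mean square error stabilizes instead of growing, mirroring the role of (2.21) in Theorem 2.1.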

  • 【Online Publisher】 Shandong University
  • 【Online Publication Year and Issue】 2012, Issue 07