
Theory of Functional FBSDE and Optimization Under G-expectation

【Author】 Yang Shuzhen (杨淑振)

【Supervisors】 Peng Shige (彭实戈); Ji Shaolin (嵇少林)

【Author Information】 Shandong University, Probability Theory and Mathematical Statistics, 2014, PhD

【Abstract】 Backward stochastic differential equations (BSDEs) and forward-backward stochastic differential equations (FBSDEs) have been widely applied in many fields, especially mathematical finance and stochastic control theory (see [22],[27],[71],[111],[112] and the references therein). A state-dependent fully coupled FBSDE is formulated as equation (1). At present there are three main methods for studying FBSDE (1): the method of contraction mapping (see [2] and [95]), the four-step scheme (see [68]) and the method of continuation (see [52],[88] and [113]). In [70], Ma et al. studied general non-Markovian FBSDEs; they unified the existing methods into a single scheme and overcame long-standing difficulties of FBSDEs. Quasilinear parabolic partial differential equations are related to Markovian FBSDEs (see [86],[92] and [95]), which generalizes the Feynman-Kac formula. Recently Dupire [27] introduced a new functional derivative, which was further developed by Cont and Fournié [16],[17],[18]. Building on Dupire's work, Peng and Wang [96] obtained a so-called functional Feynman-Kac formula for path-dependent partial differential equations (P-PDEs) associated with a class of BSDEs. Furthermore, under certain conditions, Peng [84] proved that the second-order path-dependent PDE has a unique viscosity solution. Ekren, Touzi, and Zhang ([36],[34],[35]) gave a new definition of viscosity solutions of fully nonlinear path-dependent PDEs and obtained the uniqueness of viscosity solutions.

In Section 1 of Chapter 1 we consider the functional FBSDE (2), where X_s := X(t)_{0≤t≤s}. Hu and Peng [52] first proposed the method of continuation, whose key technique is a monotonicity condition. However, the monotonicity conditions given in [52] and [97] cannot be applied to equation (2); the main difficulty is that equation (2) depends on the path of the solution X(t)_{0≤t≤T}. In this section we adopt the method of continuation and give a new type of Lipschitz and monotonicity conditions. These new conditions involve the path of X(t)_{0≤t≤T}, and we therefore call them integral-type Lipschitz and monotonicity conditions; see Assumptions 1.1 and 1.2 in the main text. In particular, two examples are given to show that these assumptions are not restrictive. Under the integral Lipschitz and monotonicity conditions, the method of continuation yields the existence and uniqueness of the solution of equation (2). In addition, we give a partial relationship between equation (2) and the classical solution of an associated path-dependent PDE: assuming that the solution u of the path-dependent equation is smooth and regular, the existence and uniqueness of the solution of equation (2) shows that this path-dependent PDE has at most one solution.

Bismut [3] first introduced linear BSDEs. The existence and uniqueness of solutions of nonlinear BSDEs was obtained by Pardoux and Peng [91]. Peng [86] and Pardoux and Peng [92] gave the correspondence between quasilinear parabolic PDEs and BSDEs, generalizing the classical Feynman-Kac formula. Peng [89] pointed out that finding the PDE corresponding to non-Markovian BSDEs was an open problem. In Section 2 of Chapter 1 we consider the correspondence between non-Markovian fully coupled FBSDEs and path-dependent PDEs. More precisely, the non-Markovian forward-backward system is equation (4). Within the framework of functional derivatives, we first formulate the path-dependent PDE. Under general assumptions we establish regularity estimates for the path-dependent equation, and we show that the solution of equation (4) is related to the solution of the associated path-dependent PDE.

In practical problems, the state equation of a system usually depends on the history. In Section 3 of Chapter 1 we consider a stochastic control problem driven by a functional stochastic differential equation with a given cost functional. For the initial datum γ_t ∈ Λ, the optimal control problem is to minimize J over controls u(·) ∈ U[t,T] (see Definition 1.21). Defining the value function V: Λ → R, we obtain a path-dependent HJB equation and prove that the value function is a viscosity solution of this equation; in addition, a verification theorem is given for the smooth case. Dynamic programming and the corresponding HJB equation are important tools for solving optimal control problems (see [88],[43],[110],[114] and [87]). In contrast to delay problems (see [23],[25],[64] and [65]) and the dynamic programming principle for functional stochastic systems (see [72]), a new functional Itô formula and the corresponding HJB equation are given here. Since 1983, Crandall and Lions [21] have developed the theory of viscosity solutions; the finite-dimensional problem is well understood, see [20]. In practical applications, however, the system must depend on its history, and the associated optimal control problem becomes infinite-dimensional. Mohammed [64],[65] studied functional stochastic differential equations, and Chang et al. [23] studied stochastic optimal control problems with memory, but a problem arises in their application of the Ekeland variational principle; see [57],[60]. Recently, under certain assumptions, Peng [84] proved the existence and uniqueness of viscosity solutions of fully nonlinear path-dependent PDEs. Ekren, Touzi, and Zhang ([36],[39],[40]) considered fully nonlinear path-dependent PDEs abstractly, adopting sophisticated super- and sub-sets in the definition of viscosity solutions; in particular, their definition involves nonlinear expectations. Using Dupire's functional Itô formula, Tang and Zhang [109] studied the optimal control problem with recursive utility.

In Section 4 of Chapter 1 we give a weak Fréchet derivative on the space of continuous paths. In contrast to the definition of Dupire's derivative, we let the Fréchet derivative perturb within a smaller space: following this idea, we choose the Sobolev space W^{1,2} as the perturbation space and give a new weak Fréchet derivative. In Section 5 of Chapter 1 we consider stochastic optimal control problems under the weak Fréchet derivative. Denote the space of continuous paths by C. The new derivative does not require the space of right-continuous functions with left limits, which Dupire's derivative must consider. In this new framework we give an Itô formula for semimartingales, then study the stochastic control problem associated with stochastic differential equations with memory, and restrict the definition of viscosity solutions to W^{1,2}. Under this new definition of viscosity solution, we verify that the value function is the unique viscosity solution of the corresponding HJB equation.

In mathematical finance one needs to compute probabilities of default. Under a linear probability assumption, stock returns are described by the normal distribution and the probability of default is easy to compute. In general, however, the market is uncertain. The G-expectation, proposed by Peng in recent years, is, under certain assumptions, equivalent to a family of probability measures (see [30]). In G-expectation theory, the G-normal distribution, G-Brownian motion and the associated Itô calculus are introduced (see [78],[80],[81]). In the Markovian case the G-expectation corresponds to fully nonlinear PDEs and can be applied in economics and finance (see [90]). In Section 1 of Chapter 2 we consider numerical properties of the G-heat equation. This equation is used to compute the nonlinear probability ([78]), where φ(x) = 1_{x<0}, x ∈ R; moreover u(t,x) := Ê[φ(x + √t X)], (t,x) ∈ [0,∞) × R^d, is a viscosity solution of the equation, where Ê is the sublinear expectation. Following the work of [90],[93],[100], we prove that the fully implicit scheme converges to the viscosity solution of the G-heat equation. Under the same maximal volatility, we compare, by computation, the nonlinear probability u(1,0) with the linear probability obtained from the corresponding linear equation.

Pardoux and Peng [91] first introduced nonlinear BSDEs. Independently, Duffie and Epstein [28] proposed the stochastic recursive utility associated with BSDEs; BSDEs give a formulation of recursive utility (see [38]). Since then, the classical optimal control problem has been generalized to the "stochastic recursive optimal control problem", in which the cost functional is defined through the solution of a backward equation. Peng [87] obtained the corresponding HJB equation and proved that the value function is a viscosity solution of the HJB equation. In [88], Peng generalized his earlier results by introducing the backward semigroup, which presents the dynamic programming principle more directly. Wu and Yu [110] used the backward-semigroup method to study stochastic control problems for reflected BSDEs, and Buckdahn and Li [6] studied the related stochastic game problem; moreover, Buckdahn et al. [7] obtained an existence result for the stochastic recursive optimal control problem. Motivated by risk measurement and problems of uncertainty in finance, Peng [78] introduced the sublinear expectation, generalizing linear probability. Peng studied a fully nonlinear expectation, called the G-expectation E[·] (see [82] and the related results), and the conditional expectation E_t[·], on the completion under the norm E[|·|^p]^{1/p}. In the G-expectation framework (G-framework for short) a new Brownian motion, called G-Brownian motion, is introduced, and the calculus related to G-Brownian motion is given. The existence and uniqueness of solutions of stochastic differential equations driven by G-Brownian motion can be proved by classical methods, but the solvability of backward stochastic differential equations driven by G-Brownian motion is a challenging problem. For recent accounts of G-expectation theory and related applications see [76,77,83,106,73,32,33,94,102,103]. There are also other frameworks for studying nonlinear probability: Denis and Martini [31] gave a quasi-sure stochastic analysis, but without conditional expectations; this problem was studied further by Denis et al. [30] and Soner et al. [107]. In particular, Soner et al. [108] obtained deep results on the solutions of a class of BSDEs called 2BSDEs. For various risk-control (game) problems see [72,75,98,67], and for applications in finance see [69,74].

In Section 2 of Chapter 2 we consider the stochastic recursive utility problem under G-expectation. Recently Hu et al. studied BSDEs driven by G-Brownian motion; see [50] and [49]. Under standard assumptions on f(s,y,z) and g(s,y,z) in (y,z), a unique solution (Y, Z, K) is obtained, where the non-increasing G-martingale K is aggregated. Important properties of these backward equations, such as the comparison theorem and the Girsanov transformation, are given in [49]. Here we consider the stochastic control problem corresponding to BSDEs driven by G-Brownian motion: the state equation is a stochastic differential equation driven by G-Brownian motion, the objective functional is Y_t^{t,x,u}, and the value function of the stochastic optimal control problem is defined with the control set taken in the G-framework. The main result is that the value function V is deterministic and is a viscosity solution of the associated equation. Zhang [115] considered a similar problem, but the forward-backward equations in [115] are simpler: the forward equation is time-homogeneous and the backward equation does not contain Z and K.

Over the past twenty years, backward stochastic differential equations have been widely applied in finance, stochastic control and other fields. Rather than discretizing continuous time, Cohen and Elliott [13] considered backward stochastic difference equations on finite-time, finite-state processes; for approximations in the discontinuous case see [8,5,66], and for general results such as the comparison theorem see [11,12,13,14,15,19]. In Chapter 3 we consider the Girsanov transformation in this finite-time, finite-state setting. In stochastic calculus, the Doléans-Dade exponential Y of a semimartingale is defined as the solution of a stochastic differential equation with initial condition Y_0 = 1, with a corresponding explicit solution. Föllmer [42] gave the discrete-time version of the Doléans-Dade stochastic exponential: if P′ is a probability measure equivalent to P, then the density martingale can be written as a product, where Λ is a P-martingale with Λ_0 = 0 and Λ_{t+1} − Λ_t > −1, P-a.s. This chapter generalizes Föllmer's result. Given the linear backward stochastic difference equation of [13], and in order to obtain its explicit solution, we give a generalized Girsanov transformation. Consider a one-step difference equation on the probability space (Ω, F_T, {F_t}_{0≤t≤T}, P), where a is an adapted process, and define a measure Q accordingly. It can be shown that Q and P are equivalent probability measures on (Ω, F_T) and that Y is a martingale on (Ω, F_T, {F_t}_{0≤t≤T}, Q). Using the constructed Girsanov transformation, we give dynamic asset pricing in a complete market.

【Abstract】 Backward stochastic differential equations (BSDEs) and forward-backward stochastic differential equations (FBSDEs) are widely recognized as useful tools in many fields, especially mathematical finance and stochastic control theory (see [22],[27],[71],[111],[112] and the references therein). A state-dependent fully coupled FBSDE is formulated as equation (10). There have been three main methods to solve FBSDE (10): the method of contraction mapping (see [2] and [95]), the four-step scheme (see [68]) and the method of continuation (see [52],[88] and [113]). In [70], Ma et al. studied the well-posedness of FBSDEs in a general non-Markovian framework; they found a unified scheme which combines all existing methodology in the literature and overcame some fundamental difficulties that had been long-standing problems for non-Markovian FBSDEs. It is well known that quasilinear parabolic partial differential equations are related to Markovian FBSDEs (see [86],[92] and [95]), which generalizes the classical Feynman-Kac formula. Recently a new framework of functional Itô calculus was introduced by Dupire [27] and later developed by Cont and Fournié [16],[17],[18]. Inspired by Dupire's work, Peng and Wang [96] obtained a so-called functional Feynman-Kac formula for classical solutions of path-dependent partial differential equations (P-PDEs) in terms of non-Markovian BSDEs. Furthermore, under a special condition, Peng [84] proved that the viscosity solution of the second-order fully nonlinear P-PDE is unique. Ekren, Touzi, and Zhang ([36],[34],[35]) gave another definition of the viscosity solution of the fully nonlinear P-PDE and obtained a uniqueness result for viscosity solutions. In Section 1 of Chapter 1, we study the functional fully coupled FBSDE (11), where X_s := X(t)_{0≤t≤s}. As mentioned above, Hu and Peng [52] initiated the continuation method, in which the key issue is a certain monotonicity condition.
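The displayed equations of the original abstract are not reproduced in this record. For orientation, a state-dependent fully coupled FBSDE of the kind labelled (10) is conventionally written as follows (a generic form; the precise coefficients of the thesis may differ):

```latex
\begin{cases}
dX_t = b(t, X_t, Y_t, Z_t)\,dt + \sigma(t, X_t, Y_t, Z_t)\,dW_t, & X_0 = x,\\
dY_t = -f(t, X_t, Y_t, Z_t)\,dt + Z_t\,dW_t, & Y_T = \Phi(X_T),
\end{cases}
\qquad t \in [0, T].
```

The full coupling refers to the forward coefficients b and σ depending on (Y, Z) as well as on X, which is what makes the solvability theory delicate.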
But unfortunately, the Lipschitz and monotonicity conditions in [52] and [97] do not work for equation (11). The main difficulty here is that the coefficients of (11) depend on the path of the solution X(t)_{0≤t≤T}. In this section, we revise the continuation method and propose a new type of Lipschitz and monotonicity conditions. These new conditions involve an integral term with respect to the path of X(t)_{0≤t≤T}; thus, we call them the integral Lipschitz and monotonicity conditions. The reader may see Assumptions 1.1 and 1.2 for more details. In particular, we present two examples to illustrate that our assumptions are not restrictive. Under the integral Lipschitz and monotonicity conditions, the continuation method goes through and leads to the existence and uniqueness of the solution to equation (11). We then explore the relationship between the solution of the functional fully coupled FBSDE (11) and the classical solution of the related P-PDE. We prove that if the solution u of this P-PDE has suitable smoothness and regularity properties, then we can solve the related equation (11) and, consequently, the P-PDE has at most one classical solution. Linear backward stochastic differential equations were introduced by Bismut [3]. The existence and uniqueness theorem for nonlinear BSDEs was established by Pardoux and Peng [91]. Then Peng [86] and Pardoux and Peng [92] gave a relationship between Markovian forward-backward systems and systems of quasilinear parabolic PDEs, which generalized the classical Feynman-Kac formula. Peng [89] pointed out that for non-Markovian forward-backward systems, it was an open problem to find the corresponding "PDE". In Section 2 of Chapter 1, we study the relationship between solutions of non-Markovian fully coupled forward-backward systems and classical solutions of path-dependent PDEs.
More precisely, the non-Markovian forward-backward system is described by the fully coupled forward-backward SDE (13). We first give the definition of a classical solution, within the framework of functional Itô calculus, for the path-dependent PDE. Then, under mild hypotheses, we establish some estimates and regularity results for the solution of the above system with respect to paths. Finally, we show that the solution of (13) is related to the classical solution of an associated path-dependent PDE. In many real-world applications, the systems can only be modeled by stochastic systems whose evolutions depend on the past history of the states. So in Section 3 of Chapter 1, we study a stochastic optimal control problem in which the system is described by a stochastic functional differential equation with a given cost functional. For the initial datum γ_t ∈ Λ, our optimal control problem is to find an admissible control u(·) ∈ U[t,T] (see Definition 1.21) so as to minimize the cost functional J. In this case, the value function V: Λ → R is defined accordingly, and we obtain a path-dependent HJB equation. We prove that the value function is a viscosity solution of the path-dependent HJB equation. In addition, the stochastic verification theorem for the smooth case is also proved. It is well known that dynamic programming with related HJB equations is a powerful approach to solving optimal control problems (see [88],[43],[110],[114] and [87]). Different from the HJB equations derived for stochastic delay systems (see [23],[25],[64] and [65]) and the dynamic programming principle for functional stochastic systems (see [72]), we establish the dynamic programming principle and derive the HJB equation in a new framework of functional Itô calculus. Since 1983, Crandall and Lions [21] have developed the notion of viscosity solution. The finite-dimensional optimal stochastic control problem has been studied thoroughly; see [20].
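The path-dependent HJB equation itself is not shown in this record; in the functional Itô framework such equations generically take the following form, where D_t, D_x, D_{xx} denote Dupire's horizontal and vertical derivatives and b, σ, q, φ stand for generic (assumed) coefficients rather than the thesis's exact data:

```latex
D_t V(\gamma_t) + \inf_{v \in U}\Big\{\, \big\langle b(\gamma_t, v),\, D_x V(\gamma_t)\big\rangle
+ \tfrac{1}{2}\operatorname{tr}\!\big[\sigma\sigma^{\top}(\gamma_t, v)\, D_{xx} V(\gamma_t)\big]
+ q(\gamma_t, v) \Big\} = 0, \qquad V(\gamma_T) = \varphi(\gamma_T).
```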
But in many real-world applications, the systems can only be modeled by stochastic systems whose evolutions depend on the past history of the states, and the related optimal stochastic control problem becomes an infinite-dimensional problem. Mohammed [64],[65] studied functional stochastic differential equations. Chang et al. [23] studied the optimal stochastic control problem driven by stochastic functional differential equations with bounded memory, but they made an error in their use of the Ekeland variational principle; see [57],[60]. Furthermore, under a special condition, Peng [84] proved that the viscosity solution of the second-order fully nonlinear path-dependent PDE is unique. Ekren, Touzi, and Zhang ([36],[39],[40]) work directly with an abstract fully nonlinear path-dependent PDE and use a complicated definition of super- and sub-jets in their notion of viscosity solution; in particular, their definitions involve the unnatural and advanced notion of nonlinear expectation. Using Dupire's functional Itô calculus, Tang and Zhang [109] studied the optimal stochastic control problem for a path-dependent stochastic system under a recursive path-dependent cost functional, but there is a gap in their proof of uniqueness. In Section 4 of Chapter 1, we give a weak derivative on the space of continuous paths. In contrast to Dupire's derivatives, we take the perturbation in the definition of the Fréchet derivative from a smaller space: following this idea, we choose the Sobolev space W^{1,2} as the perturbation space and present a new weak Fréchet derivative on the space of continuous paths. In Section 5 of Chapter 1, we study stochastic optimal control under this weak Fréchet derivative. Denote the space of continuous paths by C. Unlike Dupire's derivatives, the new derivative does not require passing to the space of càdlàg paths. In this new framework, we obtain the related functional Itô formula for semimartingales.
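The functional Itô formula mentioned here can be stated in Dupire's notation as follows (a standard statement for a sufficiently regular non-anticipative functional F of a continuous semimartingale X, given for orientation):

```latex
dF(X_t) = D_t F(X_t)\,dt + D_x F(X_t)\,dX(t) + \tfrac{1}{2}\, D_{xx} F(X_t)\,d\langle X\rangle(t),
```

with D_t the horizontal (time) derivative and D_x, D_{xx} the first and second vertical derivatives; the weak Fréchet framework of Sections 4 and 5 replaces the vertical bump perturbations by perturbations along W^{1,2} paths.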
Then we study the optimal stochastic control problem driven by stochastic functional differential equations with bounded memory. Because the new weak Fréchet derivative on C considers only perturbations by elements of the Sobolev space W^{1,2}, and because of the restrictions of the Ekeland variational principle, we limit the definition of the viscosity solution to W^{1,2}. With this new definition of viscosity solution, we verify, via the dynamic programming principle, that the value function of the optimal stochastic control problem is the unique viscosity solution of the associated HJB equation. In mathematical finance, we often need to compute the probability of default. Under the assumption of a linear probability (expectation) space, the log-normal distribution is used to describe stock returns, and the probability of default is easily calculated from the normal distribution. In the general case, there is not only one probability: we need to introduce volatility uncertainty (encompassing many probability measures) into the market. A nonlinear expectation (probability), the G-expectation, was established by Peng in recent years; it can be represented by a set of probability measures (see [30]). In the theory of G-expectation, the G-normal distribution and G-Brownian motion were introduced and the corresponding stochastic calculus of Itô's type was established (see [78],[80],[81]). In the Markovian case, the G-expectation is associated with fully nonlinear PDEs and is applied in economic and financial models with volatility uncertainty (see [90]). In Section 1 of Chapter 2, the numerical properties of the G-heat equation are considered.
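In one dimension, the G-heat equation referred to here is, in Peng's standard formulation (with σ̄ ≥ σ̲ > 0 the maximal and minimal volatilities):

```latex
\partial_t u - G\big(\partial^2_{xx} u\big) = 0, \qquad u(0, x) = \varphi(x), \qquad
G(a) := \tfrac{1}{2}\big(\bar\sigma^2\, a^{+} - \underline\sigma^2\, a^{-}\big),
```

whose solution represents the sublinear expectation u(t,x) = Ê[φ(x + B_t)] for a G-Brownian motion B.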
Equation (14) is used to compute the nonlinear probability ([78]), where φ(x) = 1_{x<0}, x ∈ R. We show that u(t,x) := Ê[φ(x + √t X)], (t,x) ∈ [0,∞) × R^d, is the viscosity solution of equation (14), where Ê is the nonlinear expectation. Following the work of [90],[93],[100], we prove that the fully implicit discretization converges to the viscosity solution of the G-heat equation. Under the same maximum volatility, we compare, by direct calculation, the nonlinear probability u(1,0) with the linear probability obtained from the corresponding linear heat equation. It is well known that the nonlinear backward stochastic differential equation (BSDE) was first introduced by Pardoux and Peng [91]. Independently, Duffie and Epstein [28] presented a stochastic differential recursive utility which corresponds to the solution of a particular BSDE; the BSDE point of view thus gives a simple formulation of recursive utilities (see [38]). Since then, the classical stochastic optimal control problem has been generalized to a so-called "stochastic recursive optimal control problem", in which the cost functional is described by the solution of a BSDE. Peng [87] obtained the Hamilton-Jacobi-Bellman equation for this kind of problem and proved that the value function is its viscosity solution. In [88], Peng generalized his results and originally introduced the notion of stochastic backward semigroups, which allows one to prove the dynamic programming principle in a very straightforward way. This backward-semigroup approach has proved to be a useful tool for stochastic optimal control problems. For instance, Wu and Yu [110] adopted this approach to study a stochastic recursive optimal control problem with the cost functional described by the solution of a reflected BSDE. It was also introduced into the theory of stochastic differential games by Buckdahn and Li in [6].
We emphasize that Buckdahn et al. [7] obtained an existence result for the stochastic recursive optimal control problem. Motivated by measuring risk and other financial problems with uncertainty, Peng [78] introduced the notion of a sublinear expectation space, which is a generalization of probability space. As a typical case, Peng studied a fully nonlinear expectation, called G-expectation E[·] (see [82] and the references therein), and the corresponding conditional expectation E_t[·] on a space of random variables completed under the norm E[|·|^p]^{1/p}. Under this G-expectation framework (G-framework for short), a new type of Brownian motion called G-Brownian motion was constructed, and the stochastic calculus with respect to G-Brownian motion has been established. The existence and uniqueness of the solution of an SDE driven by G-Brownian motion can be proved in a way parallel to the classical SDE theory, but the solvability of BSDEs driven by G-Brownian motion is a challenging problem. For a recent account and development of G-expectation theory and its applications we refer the reader to [76,77,83,106,73,32,33,94,102,103]. Let us mention that there are other recent advances, and applications, in stochastic calculus that do not require a probability-space framework. Denis and Martini [31] developed quasi-sure stochastic analysis, but without a conditional expectation. This topic was further examined by Denis et al. [30] and Soner et al. [107]. It is worth pointing out that Soner et al. [108] obtained a deep existence and uniqueness theorem for a new type of fully nonlinear BSDE, called 2BSDE. Various stochastic control (game) problems are investigated in [72,75,98,67], and applications in finance are studied in [69,74]. In Section 2 of Chapter 2, we study stochastic differential recursive utility under G-expectation. Recently, Hu et al. studied the BSDE driven by G-Brownian motion in [50] and [49]. They proved that there exists a unique triple of processes (Y, Z, K) within the G-framework which solves this BSDE under standard Lipschitz conditions on f(s,y,z) and g(s,y,z) in (y,z). The decreasing G-martingale K is aggregated and the solution is time-consistent. Some important properties of BSDEs driven by G-Brownian motion, such as the comparison theorem and the Girsanov transformation, were given in [49]. We study a stochastic recursive optimal control problem in which the objective functional is described by the solution of a BSDE driven by G-Brownian motion. In more detail, the state equation is governed by a controlled SDE driven by G-Brownian motion, and the objective functional is introduced by the solution Y_t^{t,x,u} at time t of a BSDE driven by G-Brownian motion. We define the value function of our stochastic recursive optimal control problem accordingly, where the control set is in the G-framework. It is well known that dynamic programming with related HJB equations is a powerful approach to solving optimal control problems (see [43],[114] and [87]). The objective is to establish the dynamic programming principle and investigate the value function in the G-framework. The main result is that V is a deterministic, continuous viscosity solution of the associated HJB equation. Recently, a similar problem was studied by Zhang [115]. The forward-backward equations in [115] are simpler: the forward equation is time-homogeneous and the backward equation does not include the terms Z and K. Over the past twenty years, backward stochastic differential equations have been widely used in mathematical finance, stochastic control and other fields. By analogy with the equations in continuous time, Cohen and Elliott [13] considered backward stochastic difference equations (BSDEs) on spaces related to discrete-time, finite-state processes.
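The G-BSDE of Hu et al. referred to above takes the following shape (the standard form associated with [49],[50], reproduced here for orientation; B is a G-Brownian motion, ⟨B⟩ its quadratic variation, ξ the terminal condition):

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + \int_t^T g(s, Y_s, Z_s)\,d\langle B\rangle_s
- \int_t^T Z_s\,dB_s - \big(K_T - K_t\big),
```

where K is a decreasing G-martingale with K_0 = 0.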
As entities in their own right, not as approximations to the continuous equations as in [8,5,66], they established fundamental results including the comparison theorem. For deeper discussion, the reader may refer to [11,12,13,14,15,19]. So in Chapter 3, we develop a new generalized Girsanov transformation for these discrete-time, finite-state processes. In stochastic calculus, the Doléans-Dade exponential of a semimartingale X is defined to be the solution Y of the stochastic differential equation dY_t = Y_{t−} dX_t with initial condition Y_0 = 1, and exponentiating gives the explicit solution. In the discrete-time case, Föllmer [42] showed the following version of the Doléans-Dade stochastic exponential: if P′ is a probability measure equivalent to P, then the density martingale can be represented as a product, where Λ is a P-martingale with Λ_0 = 0 and Λ_{t+1} − Λ_t > −1, P-a.s. Here, we generalize Föllmer's result to study the linear backward stochastic difference equation of [13]. Motivated by obtaining the explicit solution of that equation, we develop the following generalized Girsanov transformation. Consider a one-step equation on the probability space (Ω, F_T, {F_t}_{0≤t≤T}, P), where a is an adapted process, and denote a new measure by Q. We prove that Q and P are equivalent probability measures on (Ω, F_T) and that Y is a martingale on (Ω, F_T, {F_t}_{0≤t≤T}, Q). By this Girsanov transformation, we derive the price dynamics of certain securities in the complete financial market.
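Föllmer's discrete stochastic exponential can be checked concretely: for a P-martingale Λ with ΔΛ_t > −1, the product Z_t = ∏_{s≤t}(1 + ΔΛ_s) is a positive P-martingale defining an equivalent measure Q, under which drifts change exactly as in the continuous Girsanov theorem. The toy model below, a ±1 random walk with Λ_t = a·(walk), is an illustrative assumption rather than the thesis's market model, and verifies this by exhaustive enumeration of paths:

```python
# Foellmer's discrete Doleans-Dade exponential / Girsanov change of measure,
# checked by exhaustive enumeration on a +-1 random walk (illustrative model).
from itertools import product

T, a = 3, 0.4                          # horizon; |a| < 1 keeps 1 + a*eps > 0

def density(path):
    """dQ/dP along a path: product of (1 + Delta Lambda_t) with Lambda_t = a * walk."""
    d = 1.0
    for eps in path:
        d *= 1 + a * eps               # Foellmer's factor (1 + Delta Lambda)
    return d

paths = list(product([1, -1], repeat=T))
p_P = {w: 0.5 ** T for w in paths}                 # uniform P-weights
p_Q = {w: p_P[w] * density(w) for w in paths}      # tilted Q-weights

q_total = sum(p_Q.values())                        # should be 1: Q is a probability
q_drift = sum(p_Q[w] * w[0] for w in paths)        # E_Q[eps_1] becomes a under Q
q_mart = sum(p_Q[w] * sum(e - a for e in w) for w in paths)  # shifted walk: Q-mean 0
print(q_total, q_drift, q_mart)
```

The enumeration confirms that the tilted weights sum to one, each step acquires drift a under Q, and the compensated walk Σ(ε_s − a) is a Q-martingale, which is the content of the discrete Girsanov transformation used in Chapter 3.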

  • 【Online Publication Contributor】 Shandong University
  • 【Online Publication Year/Issue】 2014, Issue 10