Infinite Dimensional Analysis, Quantum Probability and Related Topics, 2018, 21(3). DOI: http://dx.doi.org/10.1142/S0219025718500145
Abstract
The classical maximum principle for optimal stochastic control states that if a control û is optimal, then the corresponding Hamiltonian attains its maximum at u = û. The first proofs of this result assumed that the control did not enter the diffusion coefficient, and that the system had no jumps. Subsequently, Shige Peng discovered (still assuming no jumps) that the diffusion coefficient may also depend on the control, provided that the adjoint backward stochastic differential equation (BSDE) for the first-order derivative is extended by an extra BSDE for the second-order derivatives. In this paper, we present an alternative approach based on Hida–Malliavin calculus and white noise theory. This enables us to handle the general case with jumps, allowing both the diffusion coefficient and the jump coefficient to depend on the control, without the extra BSDE for the second-order derivatives. The result is illustrated by an example of a constrained linear-quadratic optimal control problem.
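To fix ideas, the maximum principle described above can be sketched in a standard formulation for controlled jump diffusions; the notation below (coefficients b, σ, γ, adjoint processes p, q, r, and Lévy measure ν) is illustrative and not taken from the paper itself:

```latex
% State dynamics of a controlled jump diffusion (illustrative notation):
% dX(t) = b(t, X(t), u(t))\,dt + \sigma(t, X(t), u(t))\,dB(t)
%         + \int_{\mathbb{R}} \gamma(t, X(t), u(t), \zeta)\,\tilde{N}(dt, d\zeta).
%
% The associated Hamiltonian, with adjoint variables (p, q, r) solving the
% first-order adjoint BSDE, is
% H(t, x, u, p, q, r) = f(t, x, u) + b(t, x, u)\,p + \sigma(t, x, u)\,q
%                       + \int_{\mathbb{R}} \gamma(t, x, u, \zeta)\,r(\zeta)\,\nu(d\zeta).
%
% The maximum principle then states that an optimal control \hat{u} satisfies
% H\bigl(t, \hat{X}(t), \hat{u}(t), \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)\bigr)
%   = \max_{u} H\bigl(t, \hat{X}(t), u, \hat{p}(t), \hat{q}(t), \hat{r}(t, \cdot)\bigr).
```

In Peng's approach (without jumps but with control in σ), verifying this condition requires a second adjoint BSDE for the second-order derivatives; the paper's contribution is to avoid that extra BSDE via Hida–Malliavin calculus, even in the presence of jumps.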