Gradient Enhanced Multi-output Gaussian Processes

The formulation described in the Multi-output Gaussian Processes section can be generalized to a gradient enhanced MOGP (GEMOGP) in a relatively straightforward manner. As before, isotropic training sets are considered, so that \(\mathbf{X}_t = \mathbf{X}\) for all \(t \in \left\{ 1, \ldots, T \right\}\). When gradient information is included, the training data set can be written as:

(1)#\[\mathcal{D} = \left\{\left((\mathbf{X}_1, \ldots, \mathbf{X}_T),(\mathbf{y}_1, \ldots, \mathbf{y}_T),(\nabla \mathbf{y}_1 , \ldots, \nabla \mathbf{y}_T)\right) \right\} \in \mathbb{R}^{nT + dnT}\]

As in the derivative GPs section, the observation vector is augmented to include the partial derivatives of each output at every training point, expanding \(\mathbf{y}\) into the augmented vector \(\mathbf{y}^{GMO}\). In the general case, the predictions at the test locations \(\mathbf{X}_*\) are likewise augmented with derivatives, forming the vector \(\mathbf{y}^{GMO}_*\):

(2)#\[\begin{split}\mathbf{y}^{GMO} = \begin{bmatrix} f_1(\mathbf{X}) \\ \vdots \\ f_T(\mathbf{X}) \\ \frac{\partial f_1(\mathbf{X})}{\partial x_1} \\ \vdots \\ \frac{\partial f_1(\mathbf{X})}{\partial x_d} \\ \vdots \\ \frac{\partial f_T(\mathbf{X})}{\partial x_1} \\ \vdots \\ \frac{\partial f_T(\mathbf{X})}{\partial x_d} \end{bmatrix}, \quad \mathbf{y}^{GMO}_* = \begin{bmatrix} f_1(\mathbf{X}_*) \\ \vdots \\ f_T(\mathbf{X}_*) \\ \frac{\partial f_1(\mathbf{X}_*)}{\partial x_1} \\ \vdots \\ \frac{\partial f_1(\mathbf{X}_*)}{\partial x_d} \\ \vdots \\ \frac{\partial f_T(\mathbf{X}_*)}{\partial x_1} \\ \vdots \\ \frac{\partial f_T(\mathbf{X}_*)}{\partial x_d} \end{bmatrix}.\end{split}\]

Note that since the training sets are isotropic, the inputs are the same across all model outputs.
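To make the augmentation concrete, the following is a minimal NumPy sketch that stacks function values and partial derivatives into \(\mathbf{y}^{GMO}\) in the ordering of equation (2). The array names and the random stand-in data are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sizes: n training points in d dimensions, T outputs.
n, d, T = 10, 2, 3
rng = np.random.default_rng(0)

X = rng.uniform(size=(n, d))     # shared training inputs (isotropic data sets)
Y = rng.normal(size=(T, n))      # Y[t] stands in for f_t(X)
dY = rng.normal(size=(T, n, d))  # dY[t, i, j] stands in for df_t/dx_j at point i

# Function values for all tasks first, then the partial derivatives,
# task by task and dimension by dimension, as in equation (2).
y_gmo = np.concatenate(
    [Y.reshape(-1)]
    + [dY[t, :, j] for t in range(T) for j in range(d)]
)
assert y_gmo.shape == (n * T * (d + 1),)
```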

The joint distribution between the augmented training observations and the augmented test predictions is a multivariate Gaussian:

(3)#\[\begin{split}\begin{pmatrix} \mathbf{y}^{GMO} \\ \mathbf{y}^{GMO}_* \end{pmatrix} \sim \mathcal{N}\left( \mathbf{0}, \begin{pmatrix} \boldsymbol{\Sigma}^{GMO}_{11} & \boldsymbol{\Sigma}^{GMO}_{12} \\ \boldsymbol{\Sigma}^{GMO}_{21} & \boldsymbol{\Sigma}^{GMO}_{22} \end{pmatrix} \right).\end{split}\]

The blocks of this covariance matrix are also augmented. Following the Kronecker structure introduced in the Multi-output Gaussian Processes section, we employ the separable covariance formulation where the spatial covariance \(K(\mathbf{X}, \mathbf{X}') = k^x(\mathbf{X}, \mathbf{X}')\) and the task correlation matrix \(k^t\) are combined. Here, \(k^t_{tt'}\) denotes the \((t,t')\)-th element of \(k^t\), representing the correlation between outputs \(t\) and \(t'\). In the Kronecker formulation, the task correlations \(k^t_{tt'}\) are scalar constants, while derivatives are applied only to the spatial covariance \(K(\mathbf{X}, \mathbf{X}')\). The training covariance block, \(\boldsymbol{\Sigma}^{GMO}_{11}\), is an \(nT(d+1) \times nT(d+1)\) matrix:

(4)#\[\begin{split}\boldsymbol{\Sigma}^{GMO}_{11} = \begin{pmatrix} k^t_{11}K(\mathbf{X}, \mathbf{X}') & \ldots & k^t_{1T}K(\mathbf{X}, \mathbf{X}') & k^t_{11}\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}'} & \ldots & k^t_{1T}\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}'} \\ k^t_{21}K(\mathbf{X}, \mathbf{X}') & \ldots & k^t_{2T}K(\mathbf{X}, \mathbf{X}') & k^t_{21}\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}'} & \ldots & k^t_{2T} \frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}'} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ k_{T1}^tK(\mathbf{X}, \mathbf{X}') & \ldots & k_{TT}^tK(\mathbf{X}, \mathbf{X}') & k_{T1}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}'} & \ldots & k^t_{TT}\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}'} \\ k_{11}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}} & \ldots & k_{1T}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}} & k_{11}^t\frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X} \partial \mathbf{X}'} & \ldots & k_{1T}^t\frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X} \partial \mathbf{X}'} \\ k_{21}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}} & \ldots & k_{2T}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}} & k_{21}^t\frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X} \partial \mathbf{X}'} & \ldots & k_{2T}^t\frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X} \partial \mathbf{X}'} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ k_{T1}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}} & \ldots & k_{TT}^t\frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}} & k_{T1}^t\frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X} \partial \mathbf{X}'} & \ldots & k_{TT}^t\frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X} \partial \mathbf{X}'} \end{pmatrix}.\end{split}\]
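As an illustration of this block structure, the sketch below assembles \(\boldsymbol{\Sigma}^{GMO}_{11}\) for the special case \(d = 1\) with a squared-exponential spatial kernel, whose derivative blocks have simple closed forms. The kernel choice, the hyperparameter values, and all sizes are assumptions for the example.

```python
import numpy as np

def rbf_blocks(X, Xp, ell=1.0, sig2=1.0):
    """Value and derivative blocks of a 1-D squared-exponential kernel.

    Returns K, dK/dx', dK/dx and d^2K/(dx dx'), each of shape (n, m).
    """
    D = X[:, None] - Xp[None, :]                  # pairwise differences x - x'
    K = sig2 * np.exp(-0.5 * D**2 / ell**2)
    K_xp = K * D / ell**2                         # dK/dx'
    K_x = -K * D / ell**2                         # dK/dx
    K_xxp = K * (1.0 / ell**2 - D**2 / ell**4)    # d^2K/(dx dx')
    return K, K_xp, K_x, K_xxp

n, T = 8, 2
rng = np.random.default_rng(1)
X = rng.uniform(size=n)

# Task covariance k^t built from its Cholesky factor A (free parameters {a_i}).
A = np.tril(rng.normal(size=(T, T)))
Kt = A @ A.T

K, K_xp, K_x, K_xxp = rbf_blocks(X, X)

# Each spatial block is scaled by the task correlations via a Kronecker
# product, reproducing the block layout of equation (4) for d = 1.
Sigma11 = np.block([
    [np.kron(Kt, K),   np.kron(Kt, K_xp)],
    [np.kron(Kt, K_x), np.kron(Kt, K_xxp)],
])
assert Sigma11.shape == (n * T * 2, n * T * 2)    # nT(d+1) with d = 1
```

Note that for the training block, where both kernel arguments are \(\mathbf{X}\), `K_x` equals `K_xp.T`, so the assembled `Sigma11` is symmetric, as a covariance matrix must be.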

As in the MOGP case, observation noise is accounted for through \(\boldsymbol{\Sigma}_M \in \mathbb{R}^{nT(d+1) \times nT(d+1)}\), which adds the appropriate noise variances to the diagonal blocks corresponding to function values and derivatives. The training-test covariance block, \(\boldsymbol{\Sigma}^{GMO}_{12}\), has the same format as the training covariance block but contains the covariances between all training and test observations. The remaining blocks are \(\boldsymbol{\Sigma}^{GMO}_{21} = \left(\boldsymbol{\Sigma}^{GMO}_{12}\right)^\top\), and \(\boldsymbol{\Sigma}^{GMO}_{22}\), which has the same structure as \(\boldsymbol{\Sigma}^{GMO}_{11}\) but is evaluated at the test points \(\mathbf{X}_*\). The posterior predictive distribution of the augmented test vector \(\mathbf{y}^{GMO}_*\) is then given by:

(5)#\[\begin{split} \boldsymbol{\mu}_{*} &= \boldsymbol{\Sigma}^{GMO}_{21} \left(\boldsymbol{\Sigma}^{GMO}_{11} + \boldsymbol{\Sigma}_M\right)^{-1} \mathbf{y}^{GMO}, \\ \boldsymbol{\Sigma}_{*} &= \boldsymbol{\Sigma}^{GMO}_{22} - \boldsymbol{\Sigma}^{GMO}_{21} \left(\boldsymbol{\Sigma}^{GMO}_{11} + \boldsymbol{\Sigma}_M\right)^{-1} \boldsymbol{\Sigma}^{GMO}_{12}. \end{split}\]

The posterior mean \(\boldsymbol{\mu}_{*}\) now provides predictions for both function values and derivatives, while \(\boldsymbol{\Sigma}_{*}\) provides their uncertainty.
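A sketch of equation (5) follows, using a Cholesky factorization in place of the explicit inverse for numerical stability. The helper name is hypothetical, and the demo feeds it toy stand-in blocks from a plain RBF kernel; in practice the blocks would be the GEMOGP covariances assembled above.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gemogp_posterior(S11, S12, S22, Sm, y):
    """Posterior mean and covariance of the augmented test vector, eq. (5)."""
    L = cho_factor(S11 + Sm, lower=True)     # factor once, reuse for both solves
    mu = S12.T @ cho_solve(L, y)             # Sigma21 = Sigma12^T
    cov = S22 - S12.T @ cho_solve(L, S12)
    return mu, cov

# Toy demo: blocks taken from one RBF Gram matrix over stacked train/test points.
rng = np.random.default_rng(2)
X_all = rng.uniform(0.0, 4.0, size=12)       # 8 training + 4 test locations
K_all = np.exp(-0.5 * (X_all[:, None] - X_all[None, :])**2)
S11, S12, S22 = K_all[:8, :8], K_all[:8, 8:], K_all[8:, 8:]
mu, cov = gemogp_posterior(S11, S12, S22, 1e-4 * np.eye(8), rng.normal(size=8))
```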

As with a standard GP, the kernel hyperparameters \(\boldsymbol{\psi}\) are determined by maximizing the log marginal likelihood (MLL) of the augmented observations:

(6)#\[\log p(\mathbf{y}^{GMO}|\mathbf{X}, \boldsymbol{\psi}) = -\frac{1}{2} \left(\mathbf{y}^{GMO}\right)^\top \left(\boldsymbol{\Sigma}^{GMO}_{11} + \boldsymbol{\Sigma}_M\right)^{-1} \mathbf{y}^{GMO} - \frac{1}{2}\log|\boldsymbol{\Sigma}^{GMO}_{11} + \boldsymbol{\Sigma}_M| - \frac{nT(d+1)}{2}\log 2\pi.\]

In the GEMOGP formulation, \(\boldsymbol{\psi}\) includes the spatial covariance parameters, the elements \(\{a_i\}\) of the Cholesky factor of \(k^t\), and the noise variances in \(\boldsymbol{\Sigma}_M\), all jointly optimized by maximizing the MLL.
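The sketch below evaluates this MLL; the function name and toy inputs are assumptions. The log-determinant is taken from the Cholesky factor to avoid overflow, and in practice the negative of this quantity would be handed to a numerical optimizer (for example scipy.optimize.minimize) as a function of \(\boldsymbol{\psi}\).

```python
import numpy as np

def gemogp_mll(y, S11, Sm):
    """Log marginal likelihood of the augmented observations, equation (6)."""
    N = y.size                                # N = nT(d+1)
    L = np.linalg.cholesky(S11 + Sm)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()        # = 0.5 * log|S11 + Sm|
            - 0.5 * N * np.log(2.0 * np.pi))

# Toy evaluation with a stand-in training covariance.
rng = np.random.default_rng(3)
X_tr = rng.uniform(0.0, 4.0, size=6)
S11 = np.exp(-0.5 * (X_tr[:, None] - X_tr[None, :])**2)
print(gemogp_mll(rng.normal(size=6), S11, 1e-2 * np.eye(6)))
```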
