Derivative Screening#

One approach to reducing computational cost is to incorporate derivative information selectively, using only a subset of the available gradients rather than all \(d\) partial derivatives [9, 12]. Partial gradient-enhanced kriging (PGEK) extends this idea by systematically identifying which gradients to include [13]. PGEK employs a two-step process: first, a feature selection technique, such as Mutual Information (MI), ranks the influence of each input variable on the output; second, an empirical evaluation rule determines how many gradients to include, balancing modeling efficiency against accuracy.
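The ranking step can be sketched in a few lines of NumPy. The snippet below uses a simple histogram-based MI estimator as a stand-in (the reference implementation may use a different estimator), and fixes \(m = 2\) for illustration rather than applying the paper's empirical selection rule; `mutual_info_binned` and the toy response are illustrative, not part of PGEK itself.

```python
import numpy as np

def mutual_info_binned(x, y, bins=8):
    """Histogram-based estimate of I(x; y) in nats for 1-D arrays (illustrative)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint probability table
    px = pxy.sum(axis=1, keepdims=True)    # marginal over x bins
    py = pxy.sum(axis=0, keepdims=True)    # marginal over y bins
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.uniform(-1.0, 1.0, size=(n, d))
# Toy response driven almost entirely by inputs 0 and 1
y = np.sin(3.0 * X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.01 * rng.standard_normal(n)

# Step 1: rank every input dimension by its MI with the output
mi = np.array([mutual_info_binned(X[:, j], y) for j in range(d)])
ranking = np.argsort(mi)[::-1]

# Step 2: keep the m top-ranked inputs (m fixed here for illustration;
# PGEK's empirical rule would choose m via an accuracy/cost trade-off)
m = 2
selected = np.sort(ranking[:m])
print(selected)  # inputs 0 and 1 should rank highest for this toy response
```

Only the derivatives \(\partial f / \partial x_j\) for \(j \in\) `selected` would then enter the augmented observation vector below.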

Suppose that feature selection identifies a subset of \(m \leq d\) input variables \(\mathbf{X}_A = \{x_1, x_2, \ldots, x_m\}\) whose derivatives provide the best trade-off between accuracy and efficiency. The formulation follows the structure of the derivative-enhanced GPs section, but with a reduced observation vector. The observation vector \(\mathbf{y}\) is augmented to include only the selected partial derivatives, forming \(\mathbf{y}^{\text{PGEK}}\), and similarly for test predictions \(\mathbf{y}^{\text{PGEK}}_*\):

(1)#\[\begin{split}\mathbf{y}^{\text{PGEK}} = \begin{bmatrix} f(\mathbf{X}) \\ \frac{\partial f(\mathbf{X})}{\partial x_1} \\ \vdots \\ \frac{\partial f(\mathbf{X})}{\partial x_m} \end{bmatrix}, \quad \mathbf{y}^{\text{PGEK}}_* = \begin{bmatrix} f(\mathbf{X}_*) \\ \frac{\partial f(\mathbf{X}_*)}{\partial x_1} \\ \vdots \\ \frac{\partial f(\mathbf{X}_*)}{\partial x_m} \end{bmatrix}\end{split}\]

The joint distribution between the augmented training observations and test predictions remains multivariate Gaussian:

(2)#\[\begin{split}\begin{pmatrix} \mathbf{y}^{\text{PGEK}} \\ \mathbf{y}^{\text{PGEK}}_* \end{pmatrix} \sim \mathcal{N}\left( \mathbf{0}, \begin{pmatrix} \boldsymbol{\Sigma}^{\text{PGEK}}_{11} & \boldsymbol{\Sigma}^{\text{PGEK}}_{12} \\ \boldsymbol{\Sigma}^{\text{PGEK}}_{21} & \boldsymbol{\Sigma}^{\text{PGEK}}_{22} \end{pmatrix} \right)\end{split}\]

The training covariance block \(\boldsymbol{\Sigma}^{\text{PGEK}}_{11}\) is an \(n(m + 1) \times n(m + 1)\) matrix constructed using only the selected derivatives:

(3)#\[\begin{split}\boldsymbol{\Sigma}^{\text{PGEK}}_{11} = \begin{pmatrix} K(\mathbf{X}, \mathbf{X}') & \frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}_A'} \\ \frac{\partial K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}_A} & \frac{\partial^2 K(\mathbf{X}, \mathbf{X}')}{\partial \mathbf{X}_A \partial \mathbf{X}_A'} \end{pmatrix}\end{split}\]

where \(\mathbf{X}_A\) denotes the subset of selected input dimensions. The cross-covariance blocks \(\boldsymbol{\Sigma}^{\text{PGEK}}_{12}\) and \(\boldsymbol{\Sigma}^{\text{PGEK}}_{22}\) are constructed analogously. The posterior predictive distribution for \(\mathbf{y}^{\text{PGEK}}_*\) follows the standard GP conditioning formula (analogous to Equation (5)), and hyperparameters are optimized by maximizing the marginal log-likelihood (analogous to Equation (6)).
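As a concrete sketch of Equation (3), the function below assembles \(\boldsymbol{\Sigma}^{\text{PGEK}}_{11}\) for an isotropic squared-exponential kernel \(k(\mathbf{x}, \mathbf{x}') = \sigma_f^2 \exp(-\|\mathbf{x}-\mathbf{x}'\|^2 / 2\ell^2)\), whose kernel derivatives have closed forms. The name `pgek_cov` and the hyperparameters `ell`, `sf2` are illustrative choices, not an API from the cited work.

```python
import numpy as np

def pgek_cov(X, sel, ell=1.0, sf2=1.0):
    """Sigma_11^PGEK for an isotropic squared-exponential kernel.

    X   : (n, d) training inputs
    sel : indices of the m selected derivative dimensions
    Returns the (n*(m+1), n*(m+1)) covariance over [f, df/dx_i for i in sel].
    """
    n = X.shape[0]
    m = len(sel)
    D = X[:, None, :] - X[None, :, :]                      # pairwise diffs, (n, n, d)
    K = sf2 * np.exp(-np.sum(D**2, axis=2) / (2 * ell**2))  # value-value block

    S = np.empty((n * (m + 1), n * (m + 1)))
    S[:n, :n] = K
    for a, i in enumerate(sel):
        r = n * (a + 1)
        # cov(df/dx_i(x), f(x')) = -(x_i - x'_i)/ell^2 * k(x, x')
        S[r:r + n, :n] = -D[:, :, i] / ell**2 * K
        S[:n, r:r + n] = S[r:r + n, :n].T
        for b, j in enumerate(sel):
            c = n * (b + 1)
            # cov(df/dx_i(x), df/dx'_j(x')) = (delta_ij/ell^2 - D_i D_j/ell^4) * k
            delta = ell**2 if i == j else 0.0
            S[r:r + n, c:c + n] = (delta - D[:, :, i] * D[:, :, j]) / ell**4 * K
    return S

# Demo: 6 points in d = 4 dimensions, derivatives kept only for dims 0 and 2
Xtr = np.random.default_rng(1).uniform(size=(6, 4))
S = pgek_cov(Xtr, sel=[0, 2])
print(S.shape)  # (18, 18), i.e. n*(m+1) = 6*(2+1)
```

The resulting matrix is symmetric and positive semi-definite, as a valid joint covariance over function values and the selected derivatives must be.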

The key advantage of PGEK is the reduction in covariance matrix size from \(n(d + 1) \times n(d + 1)\) to \(n(m + 1) \times n(m + 1)\), where \(m \ll d\), resulting in substantial computational savings while retaining the most informative derivative information. Studies have demonstrated that PGEK can reduce modeling time by 30-60% compared to full GEK while maintaining or even improving accuracy in some cases [13].
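A quick back-of-envelope calculation makes the savings concrete, assuming the usual \(\mathcal{O}(N^3)\) cost of factorizing the covariance matrix; the values of \(n\), \(d\), and \(m\) below are illustrative, not taken from the cited studies.

```python
# Illustrative problem sizes: n samples, d inputs, m selected derivative dims
n, d, m = 200, 30, 5

full = n * (d + 1)   # GEK covariance size:  200 * 31 = 6200
part = n * (m + 1)   # PGEK covariance size: 200 * 6  = 1200
print(full, part)    # 6200 1200

# Cubic factorization cost shrinks by roughly ((d+1)/(m+1))^3
print(round((full / part) ** 3))  # ~138x fewer flops
```

Even modest screening ratios therefore translate into order-of-magnitude reductions in training cost, on top of the smaller memory footprint.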

[1]

A Comparison of Numerical Optimizers in Developing High Dimensional Surrogate Models. In Volume 2B: 45th Design Automation Conference, International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, August 2019. URL: https://doi.org/10.1115/DETC2019-97499, doi:10.1115/DETC2019-97499.

[2]

Georges Matheron. Principles of geostatistics. Economic Geology, 58(8):1246–1266, 1963.

[3]

D. G. Krige. A statistical approach to some basic mine valuation problems on the Witwatersrand. OR, 4(1):18–18, 1953. URL: http://www.jstor.org/stable/3006914 (visited on 2025-02-20).

[4]

William J. Welch, Robert J. Buck, Jerome Sacks, Henry P. Wynn, and Toby J. Mitchell. Screening, predicting, and computer experiments. Technometrics, 34(1):15–25, 1992. URL: https://www.tandfonline.com/doi/abs/10.1080/00401706.1992.10485229, doi:10.1080/00401706.1992.10485229.

[5]

Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006. ISBN 9780262182539. URL: http://www.gaussianprocess.org/gpml/.

[6]

Gregoire Allaire and Sidi Mahmoud Kaber. Numerical Linear Algebra. Texts in applied mathematics. Springer, New York, NY, January 2008.

[7]

Weiyu Liu and Stephen Batill. Gradient-enhanced response surface approximations using kriging models. In 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Reston, Virginia, September 2002. American Institute of Aeronautics and Astronautics.

[8]

Wataru Yamazaki, Markus Rumpfkeil, and Dimitri Mavriplis. Design optimization utilizing gradient/hessian enhanced surrogate model. In 28th AIAA Applied Aerodynamics Conference, Reston, Virginia, June 2010. American Institute of Aeronautics and Astronautics.

[9]

Selvakumar Ulaganathan, Ivo Couckuyt, Tom Dhaene, Joris Degroote, and Eric Laermans. Performance study of gradient-enhanced kriging. Eng. Comput., 32(1):15–34, January 2016.

[10]

Alexander I.J. Forrester and Andy J. Keane. Recent advances in surrogate-based optimization. Progress in Aerospace Sciences, 45(1):50–79, 2009. URL: https://www.sciencedirect.com/science/article/pii/S0376042108000766, doi:10.1016/j.paerosci.2008.11.001.

[11]

Youwei He, Kuan Tan, Chunming Fu, and Jinliang Luo. An efficient gradient-enhanced kriging modeling method assisted by fast kriging for high-dimension problems. International Journal of Numerical Methods for Heat & Fluid Flow, 33(12):3967–3993, 2023.

[12]

Selvakumar Ulaganathan, Ivo Couckuyt, Tom Dhaene, Eric Laermans, and Joris Degroote. On the use of gradients in kriging surrogate models. In Proceedings of the Winter Simulation Conference 2014. IEEE, December 2014.

[13] (1,2)

Liming Chen, Haobo Qiu, Liang Gao, Chen Jiang, and Zan Yang. A screening-based gradient-enhanced kriging modeling method for high-dimensional problems. Applied Mathematical Modelling, 69:15–31, 2019. URL: https://www.sciencedirect.com/science/article/pii/S0307904X18305900, doi:10.1016/j.apm.2018.11.048.

[14]

Zhong-Hua Han, Yu Zhang, Chen-Xing Song, and Ke-Shi Zhang. Weighted gradient-enhanced kriging for high-dimensional surrogate modeling and design optimization. AIAA Journal, 55(12):4330–4346, 2017. URL: https://doi.org/10.2514/1.J055842, doi:10.2514/1.J055842.

[15]

Yiming Yao, Fei Liu, and Qingfu Zhang. High-throughput multi-objective Bayesian optimization using gradients. In 2024 IEEE Congress on Evolutionary Computation (CEC), volume 2, 1–8. IEEE, June 2024.

[16]

Misha Padidar, Xinran Zhu, Leo Huang, Jacob R Gardner, and David Bindel. Scaling Gaussian processes with derivative information using variational inference. Advances in Neural Information Processing Systems, 34:6442–6453, 2021. arXiv:2107.04061.

[17]

Haitao Liu, Jianfei Cai, and Yew-Soon Ong. Remarks on multi-output Gaussian process regression. Knowledge-Based Systems, 144:102–121, 2018. URL: https://www.sciencedirect.com/science/article/pii/S0950705117306123, doi:10.1016/j.knosys.2017.12.034.

[18]

B. Rakitsch, Christoph Lippert, K. Borgwardt, and Oliver Stegle. It is all in the noise: efficient multi-task Gaussian process inference with structured residuals. Advances in Neural Information Processing Systems, 2013.

[19]

Edwin V Bonilla, Kian Chai, and Christopher Williams. Multi-task Gaussian process prediction. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2007. URL: https://proceedings.neurips.cc/paper_files/paper/2007/file/66368270ffd51418ec58bd793f2d9b1b-Paper.pdf.

[20]

Chen Zhou Xu, Zhong Hua Han, Ke Shi Zhang, and Wen Ping Song. Improved weighted gradient-enhanced kriging model for high-dimensional aerodynamic modeling problems. In 32nd Congress of the International Council of the Aeronautical Sciences, ICAS 2021. International Council of the Aeronautical Sciences, 2021.