Utilities and Helpers#

jetgp.utils.build_companion_array(nvars, order, der_indices)[source]#

Build a companion array that maps each derivative index to its corresponding order in the OTI basis.

The array represents the order of each component in the OTI (Order-Truncated Imaginary) number system:
  • 0 for function values.
  • The corresponding derivative order for each derivative term.

Parameters:#

nvars : int

Number of variables (input dimensions).

order : int

Maximum derivative order considered.

der_indices : list of lists

Derivative indices in exponent form, where each sublist represents the derivative multi-index for a specific derivative term.

Returns:#

companion_array : ndarray

A 1D array of length (1 + total derivatives), where:
  • The first entry is 0 (function value).
  • Each subsequent entry indicates the derivative order (e.g., 1 for first derivatives).

Example:#

>>> nvars = 2
>>> order = 2
>>> der_indices = [[[1, 1]], [[1, 2]], [[2, 1]]]
>>> build_companion_array(nvars, order, der_indices)
array([0, 1, 2, 1])
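The documented behaviour can be reproduced by prepending 0 for the function value and recording the total exponent of each multi-index. A minimal sketch (illustrative helper, not the library implementation):

```python
def companion_array_sketch(der_indices):
    """Map each derivative multi-index to its total derivative order,
    prepending 0 for the function value. Illustrative only; mirrors the
    documented behaviour of build_companion_array."""
    orders = [0]  # function value
    for term in der_indices:
        # term is a list of [var_index, exponent] pairs; the derivative
        # order is the sum of the exponents
        orders.append(sum(exp for _, exp in term))
    return orders

print(companion_array_sketch([[[1, 1]], [[1, 2]], [[2, 1]]]))  # [0, 1, 2, 1]
```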
jetgp.utils.build_companion_array_predict(nvars, order, der_indices)[source]#

Build a companion array that maps each derivative index to its corresponding order in the OTI basis.

The array represents the order of each component in the OTI (Order-Truncated Imaginary) number system:
  • 0 for function values.
  • The corresponding derivative order for each derivative term.

Parameters:#

nvars : int

Number of variables (input dimensions).

order : int

Maximum derivative order considered.

der_indices : list of lists

Derivative indices in exponent form, where each sublist represents the derivative multi-index for a specific derivative term.

Returns:#

companion_array : ndarray

A 1D array of length (1 + total derivatives), where:
  • The first entry is 0 (function value).
  • Each subsequent entry indicates the derivative order (e.g., 1 for first derivatives).

Example:#

>>> nvars = 2
>>> order = 2
>>> der_indices = [[[1, 1]], [[1, 2]], [[2, 1]]]
>>> build_companion_array_predict(nvars, order, der_indices)
array([0, 1, 2, 1])
jetgp.utils.check_gp_gradient(gp, x, params, h=1e-05, fallback_axis=0)[source]#

Compare the GP-predicted gradient against a finite-difference approximation at x. Prints and returns both.

jetgp.utils.compare_OTI_indices(nvars, order, term_check)[source]#

Compare a given multi-index term against all basis terms in the OTI number system.

This function searches through all OTI basis terms (multi-indices) up to the specified order and identifies the order of the matching term.

Parameters:#

nvars : int

Number of variables (input dimensions).

order : int

Maximum derivative order considered.

term_check : list of [int, int]

The multi-index to check, given in exponent form [[var_index, exponent], …].

Returns:#

int

The order of the term in the OTI basis (1, 2, …, order). Returns -1 if no matching term is found.

Example:#

>>> nvars = 2
>>> order = 2
>>> term_check = [[1, 2]]  # Represents ∂²/∂x₁²
>>> compare_OTI_indices(nvars, order, term_check)
2
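Since the OTI basis groups terms by total derivative order, the order of a multi-index is just the sum of its exponents (returning -1 when it exceeds the truncation order). A self-contained sketch of that rule (illustrative helper name, not the library's search-based implementation):

```python
def term_order_sketch(term_check, order):
    """Return the derivative order of a multi-index given in exponent
    form [[var_index, exponent], ...], or -1 if it exceeds the maximum
    order. Illustrative of what compare_OTI_indices computes."""
    total = sum(exp for _, exp in term_check)  # total derivative order
    return total if 1 <= total <= order else -1

print(term_order_sketch([[1, 2]], 2))  # 2  (the term ∂²/∂x₁²)
```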
jetgp.utils.convert_index_to_exponent_form(lst)[source]#

Convert a list of indices into exponent form.

For a given list of integers, the function compresses consecutive identical elements into pairs [value, count], where ‘value’ is the element and ‘count’ is its occurrence.

Parameters:#

lst : list of int

A list of integers representing variable indices.

Returns:#

list of [int, int]

A compressed list where each entry is [value, count], representing the multiplicity of each unique value in the original list.

Example:#

>>> lst = [1, 1, 2, 2, 2, 3]
>>> convert_index_to_exponent_form(lst)
[[1, 2], [2, 3], [3, 1]]
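Because only consecutive identical elements are compressed, the behaviour matches run-length encoding, which can be sketched with itertools.groupby (illustrative helper, not the library implementation):

```python
from itertools import groupby

def exponent_form_sketch(lst):
    """Run-length encode consecutive identical values as [value, count]
    pairs. Illustrative of convert_index_to_exponent_form."""
    return [[value, len(list(group))] for value, group in groupby(lst)]

print(exponent_form_sketch([1, 1, 2, 2, 2, 3]))  # [[1, 2], [2, 3], [3, 1]]
```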
jetgp.utils.ecl_acquisition(mu_N, var_N, threshold=0.0)[source]#

Entropy Contour Learning (ECL) acquisition function for gddegp models.

Implements the ECL acquisition function from equation (8):

ECL(x | S_N, g) = -P(g(Y(x)) > 0) log P(g(Y(x)) > 0) - P(g(Y(x)) ≤ 0) log P(g(Y(x)) ≤ 0)

Where g is the affine limit state function g(Y(x)) = Y(x) - T and the failure region G is defined by g(Y(x)) ≤ 0.

Parameters:#

mu_N : ndarray

Predicted mean values of Y(x) at the points being evaluated.

var_N : ndarray

Predicted variances of Y(x) at the points being evaluated.

threshold : float, default=0.0

Threshold T for the affine limit state function g(Y(x)) = Y(x) - T. The default of 0.0 means the failure region is defined by Y(x) ≤ 0.

Returns:#

ecl_values : array-like, shape (n_points,) or float

ECL acquisition function values (higher values indicate more informative points)

Examples:#

>>> # mu and var are the GP predictive mean and variance at the candidate points
>>> ecl_vals = ecl_acquisition(mu, var, threshold=0.0)
>>> next_point_idx = np.argmax(ecl_vals)
>>> next_point = candidate_points[next_point_idx]
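For a single Gaussian prediction, equation (8) reduces to the binary entropy -p log p - (1 - p) log(1 - p) with p = P(Y(x) > T). A self-contained sketch of that computation (illustrative helper name, using the standard normal CDF via math.erf; not the library implementation):

```python
import math

def ecl_entropy_sketch(mu, var, threshold=0.0):
    """Binary entropy of the exceedance probability P(Y(x) > T) under a
    Gaussian predictive distribution N(mu, var). Illustrative of the ECL
    criterion in equation (8)."""
    sigma = math.sqrt(var)
    # P(Y > T) via the standard normal CDF
    p = 0.5 * (1.0 - math.erf((threshold - mu) / (sigma * math.sqrt(2.0))))
    eps = 1e-12  # guard against log(0)
    p = min(max(p, eps), 1.0 - eps)
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)
```

The entropy is maximal (log 2) exactly on the predicted contour mu = threshold, which is why maximizing ECL drives sampling toward the limit state.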
jetgp.utils.ecl_batch_acquisition(gp_model, X, rays_predict, params, threshold=0.0, batch_size=1)[source]#

Batch ECL acquisition function that selects multiple points simultaneously.

Uses a greedy approach to select the next batch_size points that maximize the ECL criterion while maintaining diversity.

Parameters:#

gp_model : gddegp

Trained gddegp model instance

X : array-like, shape (n_candidates, n_features)

Candidate points to select from

rays_predict : ndarray

Ray directions for prediction

params : ndarray

Hyperparameters for gddegp prediction

threshold : float, default=0.0

Threshold for the limit state function

batch_size : int, default=1

Number of points to select in the batch

Returns:#

selected_indices : array-like, shape (batch_size,)

Indices of selected points from X

selected_points : array-like, shape (batch_size, n_features)

The selected points

Examples:#

>>> indices, points = ecl_batch_acquisition(gp_model, candidates, rays, params, batch_size=3)
>>> next_experiments = points
jetgp.utils.finite_difference_gradient(gp, x, params, h=1e-05)[source]#

Compute central finite difference approximation of GP mean gradient at x.

Parameters:
  • gp (object) – Trained GP model instance

  • x (ndarray, shape (1, d)) – Point at which to compute finite difference gradient

  • params (array-like) – GP hyperparameters

  • h (float) – Step size

Returns:

grad_fd – Central finite difference gradient estimate

Return type:

ndarray, shape (d, 1)
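The central-difference scheme underlying this helper can be sketched for a generic scalar function (illustrative stand-in for the GP mean; helper name is hypothetical):

```python
import numpy as np

def central_diff_gradient(f, x, h=1e-5):
    """Central finite-difference gradient of a scalar function f at x,
    returned as a (d, 1) column vector as the documented helper does.
    Illustrative sketch, not the library implementation."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        # O(h^2)-accurate central difference along dimension i
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad.reshape(-1, 1)

g = central_diff_gradient(lambda x: x[0] ** 2 + 3 * x[1], np.array([1.0, 2.0]))
print(g.ravel())  # approximately [2., 3.]
```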

jetgp.utils.flatten_der_indices(indices)[source]#

Flatten a nested list of derivative indices.

Parameters:#

indices : list of lists

A nested list where each sublist contains derivative index specifications.

Returns:#

list

A single flattened list containing all derivative index entries.

Example:#

>>> indices = [[[1, 1]], [[1, 2], [2, 1]]]
>>> flatten_der_indices(indices)
[[1, 1], [1, 2], [2, 1]]
jetgp.utils.gen_OTI_indices(nvars, order)[source]#

Generate the list of OTI (Order-Truncated Imaginary) basis indices in exponent form.

For a given number of variables and maximum derivative order, this function produces the multi-index representations for all basis terms in the OTI number system.

Parameters:#

nvars : int

Number of variables (input dimensions).

order : int

Maximum derivative order considered.

Returns:#

list of lists

A nested list where:
  • The outer list has length order (one entry per derivative order).
  • Each inner list contains multi-indices in exponent form for that order.

Example:#

>>> nvars = 2
>>> order = 2
>>> gen_OTI_indices(nvars, order)
[
    [[[1, 1]], [[2, 1]]],         # First-order derivatives: ∂/∂x₁, ∂/∂x₂
    # Second-order: ∂²/∂x₁², ∂²/∂x₁∂x₂, ∂²/∂x₂²
    [[[1, 2]], [[1, 1], [2, 1]], [[2, 2]]]
]
jetgp.utils.generate_bernoulli_lambda(alpha: int) → callable[source]#

Generates a callable (lambda) function for the (2*alpha)-th Bernoulli polynomial.

Parameters:

alpha – A non-negative integer.

Returns:

A callable function that evaluates B_{2*alpha}(x).

jetgp.utils.generate_bernoulli_numbers(n_max: int) → list[Fraction][source]#

Generates Bernoulli numbers B_0 to B_n_max using their recurrence relation.

Parameters:

n_max – The maximum order of the Bernoulli number to generate.

Returns:

A list of Bernoulli numbers as Fraction objects.
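The standard recurrence is sum over j from 0 to m of C(m+1, j)·B_j = 0, solved for B_m at each step. A self-contained sketch using exact Fraction arithmetic, as the documented helper does (illustrative helper name, not the library implementation):

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers_sketch(n_max):
    """B_0 .. B_n_max via the recurrence
    B_m = -1/(m+1) * sum_{j=0}^{m-1} C(m+1, j) * B_j,
    using exact rational arithmetic. Illustrative sketch only."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n_max + 1):
        partial = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(Fraction(-1, m + 1) * partial)
    return B

print(bernoulli_numbers_sketch(4))  # [1, -1/2, 1/6, 0, -1/30]
```

This uses the convention B_1 = -1/2; odd-index Bernoulli numbers beyond B_1 vanish.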

jetgp.utils.generate_bernoulli_polynomial(alpha: int) → Polynomial[source]#

Generates the (2*alpha)-th Bernoulli polynomial, B_{2*alpha}(x).

Parameters:

alpha – A non-negative integer.

Returns:

A numpy.polynomial.Polynomial object representing the polynomial.

jetgp.utils.generate_submodel_noise_matricies(sigma_data, index, der_indices, num_points, base_der_indices)[source]#

Generate diagonal noise covariance matrices for each submodel component (including function values and their associated derivatives).

This function constructs a list of diagonal matrices where each matrix corresponds to a specific group of training indices (e.g., for a submodel), and includes both function value noise and associated derivative noise.

Parameters:
  • sigma_data (ndarray of shape (n_total, n_total)) – Full covariance (typically diagonal) matrix for all training data, including function values and derivatives.

  • index (list of lists of int) – List where each sublist contains indices of the function values for one submodel (e.g., a cluster or partition).

  • der_indices (list of lists) – Each sublist contains derivative directions for the submodel, corresponding to base_der_indices.

  • num_points (int) – Number of training points per function (used to compute index offsets for derivatives).

  • base_der_indices (list of lists) – Master list of derivative indices that define the ordering of blocks in the covariance matrix.

Returns:

sub_model_matricies – List of diagonal noise matrices (ndarray of shape (n_submodel_total, n_submodel_total)) for each submodel, combining noise contributions from function values and all applicable derivative components.

Return type:

list of ndarray

Raises:

Exception – If a derivative index in der_indices is not found in base_der_indices.

Example

>>> sigma_data.shape = (300, 300)
>>> index = [[0, 1, 2], [3, 4, 5]]
>>> der_indices = [[[1, 1]], [[2, 1], [1, 2]]]
>>> base_der_indices = [[[1, 1]], [[2, 1]], [[1, 2]]]
>>> num_points = 100
>>> generate_submodel_noise_matricies(sigma_data, index, der_indices, num_points, base_der_indices)
[array of shape (6, 6), array of shape (9, 9)]
jetgp.utils.generate_submodel_noise_matricies_old(sigma_data, index, der_indices, num_points, base_der_indices)[source]#

Generate diagonal noise covariance matrices for each submodel component (including function values and their associated derivatives).

This function constructs a list of diagonal matrices where each matrix corresponds to a specific group of training indices (e.g., for a submodel), and includes both function value noise and associated derivative noise.

Parameters:
  • sigma_data (ndarray of shape (n_total, n_total)) – Full covariance (typically diagonal) matrix for all training data, including function values and derivatives.

  • index (list of lists of int) – List where each sublist contains indices of the function values for one submodel (e.g., a cluster or partition).

  • der_indices (list of lists) – Each sublist contains derivative directions for the submodel, corresponding to base_der_indices.

  • num_points (int) – Number of training points per function (used to compute index offsets for derivatives).

  • base_der_indices (list of lists) – Master list of derivative indices that define the ordering of blocks in the covariance matrix.

Returns:

sub_model_matricies – List of diagonal noise matrices (ndarray of shape (n_submodel_total, n_submodel_total)) for each submodel, combining noise contributions from function values and all applicable derivative components.

Return type:

list of ndarray

Raises:

Exception – If a derivative index in der_indices is not found in base_der_indices.

Example

>>> sigma_data.shape = (300, 300)
>>> index = [[0, 1, 2], [3, 4, 5]]
>>> der_indices = [[[1, 1]], [[2, 1], [1, 2]]]
>>> base_der_indices = [[[1, 1]], [[2, 1]], [[1, 2]]]
>>> num_points = 100
>>> generate_submodel_noise_matricies_old(sigma_data, index, der_indices, num_points, base_der_indices)
[array of shape (6, 6), array of shape (9, 9)]
jetgp.utils.get_entropy_ridge_direction_nd(gp, x, params, threshold=0.0, h=1e-05, fallback_axis=0, normalize=True, random_dir=False, seed=None)[source]#

Get a direction tangent to the entropy level set (“ridge direction”) at x. In higher dimensions, returns either a single direction or an orthonormal basis for the tangent space.

Parameters:
  • gp (object) – Trained GP model instance with .predict

  • x (array-like, shape (1, d)) – The input location

  • params (array-like) – GP hyperparameters

  • threshold (float) – ECL threshold

  • h (float) – Finite difference step

  • fallback_axis (int) – Axis for fallback if gradient is zero

  • normalize (bool) – Normalize output direction

  • random_dir (bool) – If True, return a random direction in the ridge (level set). If False, returns first basis vector.

  • seed (int or None) – For reproducible random direction

Returns:

  • ridge_dir (ndarray, shape (d, 1)) – Ridge direction (tangent to entropy level set) at x

  • grad_H (ndarray, shape (d, 1)) – Gradient of entropy at x

  • basis (ndarray, shape (d, d-1)) – (If requested) Orthonormal basis for tangent space to entropy level set at x

jetgp.utils.get_entropy_ridge_direction_nd_2(gp, x, params, threshold=0.0, h=1e-05, fallback_axis=0, normalize=True, random_dir=False, seed=None)[source]#

Get a direction tangent to the entropy level set (“ridge direction”) at x. In higher dimensions, returns either a single direction or an orthonormal basis for the tangent space.

Parameters:
  • gp (object) – Trained GP model instance with .predict

  • x (array-like, shape (1, d)) – The input location

  • params (array-like) – GP hyperparameters

  • threshold (float) – ECL threshold

  • h (float) – Finite difference step

  • fallback_axis (int) – Axis for fallback if gradient is zero

  • normalize (bool) – Normalize output direction

  • random_dir (bool) – If True, return a random direction in the ridge (level set). If False, returns first basis vector.

  • seed (int or None) – For reproducible random direction

Returns:

  • ridge_dir (ndarray, shape (d, 1)) – Ridge direction (tangent to entropy level set) at x

  • grad_H (ndarray, shape (d, 1)) – Gradient of entropy at x

  • basis (ndarray, shape (d, d-1)) – (If requested) Orthonormal basis for tangent space to entropy level set at x

jetgp.utils.get_inverse(dist_params, samples)[source]#

Transforms uniform samples to a specified distribution via inverse CDF.

jetgp.utils.get_optimization_bounds(dist_params)[source]#

Determines optimization bounds. Prioritizes explicit bounds if provided.

jetgp.utils.get_pdf_params(dist_params)[source]#

Computes scipy-specific loc/scale parameters. Prioritizes explicit bounds if provided.

jetgp.utils.get_surrogate_gradient_ray(gp, x, params, fallback_axis=0, normalize=True, threshold=0.0)[source]#

Returns a normalized surrogate gradient direction (d x 1 column vector) at location x using the current GP model (any input dimension). If the GP mean at x is above threshold, returns -grad; else returns grad.

Parameters:
  • gp (object) – Trained GP model instance with .predict (supports arbitrary input dim)

  • x (array-like, shape (1, d)) – The input location where to compute the surrogate gradient direction

  • params (array-like) – GP hyperparameters

  • fallback_axis (int, default=0) – Axis to use if gradient norm is zero (default: 0)

  • normalize (bool, default=True) – If True, return a unit vector; else, return unnormalized gradient

  • threshold (float, default=0.0) – Threshold value for sign flip

Returns:

  • ray (ndarray, shape (d, 1)) – The chosen direction (as a column vector)

  • grad (ndarray, shape (d, 1)) – The predicted gradient as a column vector (signed as above)

jetgp.utils.jade(func, lb, ub, ieqcons=[], f_ieqcons=None, args=(), kwargs={}, pop_size=100, n_generations=100, p=0.1, c=0.1, minstep=1e-06, stagnation_limit=15, debug=False, local_opt_every=15, initial_positions=None, seed=42, local_optimizer=None, func_and_grad=None, grad_func=None)[source]#

JADE (Adaptive Differential Evolution) with optional local refinement and stagnation-based stopping criterion.

JADE: Adaptive Differential Evolution With Optional External Archive https://ieeexplore.ieee.org/abstract/document/5208221

Parameters:
  • func (callable) – Objective function to minimize.

  • lb (array-like) – Lower bounds for each variable.

  • ub (array-like) – Upper bounds for each variable.

  • ieqcons (list) – List of inequality constraint functions.

  • f_ieqcons (callable or None) – Single function returning array of constraint values.

  • args (tuple) – Extra arguments passed to func.

  • kwargs (dict) – Extra keyword arguments passed to func.

  • pop_size (int) – Population size.

  • n_generations (int) – Maximum number of generations.

  • p (float) – Fraction of top individuals for p-best selection.

  • c (float) – Learning rate for parameter adaptation.

  • minstep (float) – Minimum position change to accept a new best (when improving).

  • stagnation_limit (int) – Stop if no improvement for this many consecutive generations.

  • debug (bool) – Print debug information.

  • local_opt_every (int or None) – Run local optimization every this many generations. None to disable.

  • initial_positions (array-like or None) – Initial positions to seed the population.

  • seed (int) – Random seed.
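The core of JADE is differential evolution with current-to-pbest/1 mutation and binomial crossover. A bare-bones sketch of that loop on a toy objective (illustrative only: real JADE also adapts F and CR online and keeps an external archive; all names here are hypothetical):

```python
import numpy as np

def de_minimize_sketch(func, lb, ub, pop_size=30, n_generations=100,
                       F=0.5, CR=0.9, p=0.2, seed=42):
    """Bare-bones DE with current-to-pbest/1 mutation and binomial
    crossover. Illustrative of JADE's core loop, not the library code."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    pop = rng.uniform(lb, ub, size=(pop_size, d))
    fit = np.array([func(x) for x in pop])
    for _ in range(n_generations):
        order = np.argsort(fit)
        n_best = max(1, int(p * pop_size))
        for i in range(pop_size):
            # current-to-pbest/1 mutation: move toward a random top-p member
            pbest = pop[rng.choice(order[:n_best])]
            r1, r2 = rng.choice(pop_size, size=2, replace=False)
            mutant = pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])
            mutant = np.clip(mutant, lb, ub)
            # binomial crossover with at least one mutated gene
            mask = rng.random(d) < CR
            mask[rng.integers(d)] = True
            trial = np.where(mask, mutant, pop[i])
            f_trial = func(trial)
            if f_trial <= fit[i]:  # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

x_best, f_best = de_minimize_sketch(lambda x: np.sum(x ** 2), [-5, -5], [5, 5])
```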

jetgp.utils.local_box_around_point(x_next, delta)[source]#
jetgp.utils.matern_kernel_builder(nu, oti_module=None)[source]#

Symbolically builds the Matérn kernel function with given smoothness ν.

Parameters:
  • nu (float) – Smoothness parameter of the Matérn kernel. Should be a half-integer (e.g., 0.5, 1.5, 2.5, …).

  • oti_module (module, optional) – The PyOTI static module to use for exp/sqrt. If None, uses numpy.

Returns:

A lambdified function that evaluates the Matérn kernel as a function of distance r.

Return type:

callable
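For half-integer ν the Matérn kernel has a closed form, which is what a symbolic builder like this produces. For example, ν = 3/2 gives k(r) = (1 + √3·r)·exp(−√3·r). A plain-numpy sketch of that special case (illustrative, not the factory's output):

```python
import numpy as np

def matern_32(r):
    """Closed-form Matérn kernel for nu = 3/2 as a function of the
    (lengthscale-scaled) distance r. The symbolic builder generates such
    closed forms for any half-integer nu."""
    a = np.sqrt(3.0) * np.asarray(r, dtype=float)
    return (1.0 + a) * np.exp(-a)
```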

jetgp.utils.matern_kernel_grad_builder(nu, oti_module=None)[source]#

Builds the derivative df/dr of the Matérn kernel function with given smoothness ν.

Parameters:
  • nu (float) – Smoothness parameter of the Matérn kernel (half-integer, e.g. 0.5, 1.5, 2.5).

  • oti_module (module, optional) – The PyOTI static module to use for exp. If None, uses numpy.

Returns:

A lambdified function evaluating df/dr as a function of scaled distance r.

Return type:

callable

jetgp.utils.normalize_directions(sigmas_x, rays)[source]#

Normalize direction vectors (rays) based on input scaling.

This function rescales direction vectors used for directional derivatives so that they are consistent with normalized input space.

Parameters:#

sigmas_x : ndarray of shape (1, nvars)

Standard deviations of the input variables (used for scaling each direction).

rays : ndarray of shape (nvars, n_directions)

Direction vectors (columns) in the original input space.

Returns:#

transformed_rays : ndarray of shape (nvars, n_directions)

Normalized direction vectors.

Example:#

>>> sigmas_x = np.array([[2.0, 1.0]])
>>> rays = np.array([[1.0, 0.0], [0.0, 1.0]])
>>> normalize_directions(sigmas_x, rays)
array([[0.5, 0. ],
       [0. , 1. ]])
jetgp.utils.normalize_directions_2(sigmas_x, rays_array)[source]#

Normalize direction vectors (rays) based on input scaling.

This function rescales direction vectors used for directional derivatives so that they are consistent with normalized input space.

Parameters:#

sigmas_x : ndarray of shape (1, nvars)

Standard deviations of the input variables (used for scaling each direction).

rays_array : ndarray of shape (nvars, n_directions)

Direction vectors (columns) in the original input space.

Returns:#

transformed_rays : ndarray of shape (nvars, n_directions)

Normalized direction vectors.

Example:#

>>> sigmas_x = np.array([[2.0, 1.0]])
>>> rays = np.array([[1.0, 0.0], [0.0, 1.0]])
>>> normalize_directions_2(sigmas_x, rays)
array([[0.5, 0. ],
       [0. , 1. ]])
jetgp.utils.normalize_x_data_test(X_test, sigmas_x, mus_x)[source]#

Normalize test input data using the mean and standard deviation from the training inputs.

Parameters:#

X_test : ndarray of shape (n_samples, nvars)

Test input points to be normalized.

sigmas_x : ndarray of shape (1, nvars)

Standard deviations of the training inputs for each variable (used for scaling).

mus_x : ndarray of shape (1, nvars)

Means of the training inputs for each variable (used for centering).

Returns:#

X_test_normalized : ndarray of shape (n_samples, nvars)

Normalized test inputs.

Example:#

>>> X_test = np.array([[2.0, 3.0]])
>>> sigmas_x = np.array([[1.0, 2.0]])
>>> mus_x = np.array([[0.0, 1.0]])
>>> normalize_x_data_test(X_test, sigmas_x, mus_x)
array([[2.0, 1.0]])
jetgp.utils.normalize_x_data_train(X_train)[source]#

Normalize training input data by centering and scaling each variable.

Parameters:#

X_train : ndarray of shape (n_samples, nvars)

Training input points.

Returns:#

X_train_normalized : ndarray of shape (n_samples, nvars)

Normalized training inputs.

mean_vec_x : ndarray of shape (1, nvars)

Mean values for each input variable (used for centering).

std_vec_x : ndarray of shape (1, nvars)

Standard deviations for each input variable (used for scaling).

Example:#

>>> X_train = np.array([[1.0, 2.0], [3.0, 4.0]])
>>> normalize_x_data_train(X_train)
(array([[-1., -1.], [ 1.,  1.]]), array([[2., 3.]]), array([[1., 1.]]))
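The computation is standard per-column z-scoring, which can be sketched directly (illustrative helper name, not the library implementation):

```python
import numpy as np

def normalize_train_sketch(X_train):
    """Center and scale each input column, returning the normalized data
    plus the statistics needed to normalize test data consistently.
    Illustrative of the documented behaviour."""
    mu = X_train.mean(axis=0, keepdims=True)    # (1, nvars)
    sigma = X_train.std(axis=0, keepdims=True)  # (1, nvars), population std
    return (X_train - mu) / sigma, mu, sigma

Xn, mu, sigma = normalize_train_sketch(np.array([[1.0, 2.0], [3.0, 4.0]]))
print(Xn)  # [[-1. -1.], [ 1.  1.]]
```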
jetgp.utils.normalize_y_data(X_train, y_train, sigma_data, der_indices)[source]#

Normalize function values, derivatives, and observational noise for training data.

This function:
  • Normalizes function values (y_train[0]) to have zero mean and unit variance.
  • Scales derivatives using the chain rule, considering input normalization.
  • Scales observational noise (sigma_data) accordingly.

Parameters:#

X_train : ndarray of shape (n_samples, nvars)

Training input points (used to compute input normalization statistics).

y_train : list of arrays

List where:
  • y_train[0] contains function values.
  • y_train[1:], if present, contain derivative values for each derivative component.

sigma_data : float or None

Standard deviation of the observational noise (for function values). If provided, it will be normalized.

der_indices : list of lists

Multi-index derivative structures for each derivative component.

Returns:#

y_train_normalized : ndarray of shape (n_total,)

Normalized function values and derivatives (flattened).

mean_vec_y : ndarray of shape (m, 1)

Mean of function values before normalization.

std_vec_y : ndarray of shape (m, 1)

Standard deviation of function values before normalization.

std_vec_x : ndarray of shape (1, nvars)

Standard deviations of input variables.

mean_vec_x : ndarray of shape (1, nvars)

Means of input variables.

noise_std_normalized : float or None

Normalized observational noise standard deviation.

Example:#

>>> normalize_y_data(X_train, y_train, sigma_data=0.75, der_indices=[[[1, 1]]])
(y_train_normalized, mean_vec_y, std_vec_y,
 std_vec_x, mean_vec_x, noise_std_normalized)
jetgp.utils.normalize_y_data_directional(X_train, y_train, sigma_data, der_indices)[source]#

Normalize function values and directional derivatives for training data.

This function:
  • Normalizes function values (`y_train[0]`) to have zero mean and unit variance.
  • Scales directional derivatives (y_train[1:]) by the function value standard deviation (`std_vec_y`).

Parameters:#

X_train : ndarray of shape (n_samples, nvars)

Training input points (used to compute input normalization statistics).

y_train : list of arrays

List where:
  • y_train[0] contains function values.
  • y_train[1:], if present, contain directional derivative values for each direction.

sigma_data : float or None

Standard deviation of the observational noise (for function values).

der_indices : list of lists

Directions for directional derivatives (each sublist represents a direction vector).

Returns:#

y_train_normalized : ndarray of shape (n_total,)

Normalized function values and directional derivatives (flattened).

mean_vec_y : ndarray of shape (m, 1)

Mean of function values before normalization.

std_vec_y : ndarray of shape (m, 1)

Standard deviation of function values before normalization.

std_vec_x : ndarray of shape (1, nvars)

Standard deviations of input variables.

mean_vec_x : ndarray of shape (1, nvars)

Means of input variables.

Example:#

>>> normalize_y_data_directional(X_train, y_train, sigma_data=None, der_indices=[[[1, 0.5], [2, 0.5]]])
(y_train_normalized, mean_vec_y, std_vec_y, std_vec_x, mean_vec_x)
jetgp.utils.nrmse(y_true, y_pred, norm_type='minmax')[source]#

Compute the Normalized Root Mean Squared Error (NRMSE) between true and predicted values.

Parameters:#

y_true : array-like

Ground truth or reference values.

y_pred : array-like

Predicted values to compare against the ground truth.

norm_type : str, default="minmax"

The method used to normalize the RMSE:
  • 'minmax': Normalize by the range (max - min) of y_true.
  • 'mean': Normalize by the mean of y_true.
  • 'std': Normalize by the standard deviation of y_true.

Returns:#

float

The normalized root mean squared error.

Raises:#

ValueError

If norm_type is not one of {‘minmax’, ‘mean’, ‘std’}.

Example:#

>>> y_true = np.array([3, 5, 2, 7])
>>> y_pred = np.array([2.5, 5.5, 2, 8])
>>> nrmse(y_true, y_pred, norm_type="mean")
0.1441
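The metric itself is a one-liner over the documented normalizers, which can be sketched as follows (illustrative re-implementation, not the library code):

```python
import numpy as np

def nrmse_sketch(y_true, y_pred, norm_type="minmax"):
    """RMSE normalized by the range, mean, or standard deviation of
    y_true. Illustrative sketch matching the documented options."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    scales = {"minmax": np.ptp(y_true),   # max - min
              "mean": np.mean(y_true),
              "std": np.std(y_true)}
    if norm_type not in scales:
        raise ValueError("norm_type must be one of {'minmax', 'mean', 'std'}")
    return rmse / scales[norm_type]
```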
jetgp.utils.pso(func, lb, ub, ieqcons=[], f_ieqcons=None, args=(), kwargs={}, pop_size=100, omega=0.5, phip=0.5, phig=0.5, n_generations=100, minstep=1e-06, minfunc=1e-06, debug=False, seed=42, local_opt_every=15, initial_positions=None, local_optimizer=None, func_and_grad=None, grad_func=None)[source]#

Particle Swarm Optimization with periodic local refinement.

R. C. Eberhart, Y. Shi and J. Kennedy, Swarm Intelligence, San Mateo, CA: Morgan Kaufmann, 2001. https://theswissbay.ch/pdf/Gentoomen%20Library/Artificial%20Intelligence/Swarm%20Intelligence/Swarm%20intelligence%20-%20James%20Kennedy.pdf

Parameters:#

func : callable

Objective function to minimize

lb : array_like

Lower bounds for variables

ub : array_like

Upper bounds for variables

ieqcons : list, optional

List of inequality constraint functions

f_ieqcons : callable, optional

Function returning array of inequality constraints

args : tuple, optional

Extra arguments passed to objective function

kwargs : dict, optional

Extra keyword arguments passed to objective function

pop_size : int, optional

Number of particles in swarm

omega : float, optional

Inertia weight

phip : float, optional

Personal best weight

phig : float, optional

Global best weight

n_generations : int, optional

Maximum number of iterations

minstep : float, optional

Minimum step size for convergence

minfunc : float, optional

Minimum function improvement for convergence

debug : bool, optional

Whether to print debug information

seed : int, optional

Random seed for reproducibility

local_opt_every : int, optional

Frequency of local optimization (every N iterations)

Returns:#

best_position : ndarray

Best position found

best_value : float

Best function value found
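The inertia/personal-best/global-best update this optimizer is built on can be sketched in a few lines (illustrative only: the library version adds constraint handling, convergence checks, and periodic local refinement; all names here are hypothetical):

```python
import numpy as np

def pso_sketch(func, lb, ub, pop_size=30, n_generations=100,
               omega=0.5, phip=0.5, phig=0.5, seed=42):
    """Bare-bones particle swarm optimizer showing only the core
    velocity/position update. Illustrative, not the library code."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    x = rng.uniform(lb, ub, size=(pop_size, d))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([func(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    g_f = pbest_f.min()
    for _ in range(n_generations):
        rp = rng.random((pop_size, d))
        rg = rng.random((pop_size, d))
        # inertia + attraction to personal and global bests
        v = omega * v + phip * rp * (pbest - x) + phig * rg * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([func(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g = pbest[np.argmin(pbest_f)].copy()
            g_f = pbest_f.min()
    return g, g_f

best_position, best_value = pso_sketch(lambda x: np.sum(x ** 2), [-5, -5], [5, 5])
```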

jetgp.utils.reshape_y_train(y_train)[source]#

Flatten and concatenate function values and derivative observations into a single 1D array.

Parameters:#

y_train : list of arrays

A list where:
  • y_train[0] contains the function values (shape: (n_samples,)).
  • y_train[1:], if present, contain derivative values (shape: (n_samples,)) for each derivative component.

Returns:#

ndarray of shape (n_total,)

A flattened 1D array concatenating function values and all derivatives.

Example:#

>>> y_train = [np.array([1.0, 2.0]), np.array([0.5, 1.0])]
>>> reshape_y_train(y_train)
array([1.0, 2.0, 0.5, 1.0])
jetgp.utils.robust_local_optimization(func, x0, args=(), lb=None, ub=None, debug=False)[source]#

Robust L-BFGS-B optimization with abnormal termination handling

jetgp.utils.scale_samples(samples, lower_bounds, upper_bounds)[source]#

Scale each column of samples from the unit interval [0, 1] to user-defined bounds [lb_j, ub_j].

Parameters:#

samples : ndarray of shape (d, n)

A 2D array of d samples in [0, 1]^n, where column j holds the values for dimension j.

lower_bounds : array-like of length n

Lower bounds for each dimension.

upper_bounds : array-like of length n

Upper bounds for each dimension.

Returns:#

ndarray of shape (d, n)

Scaled samples where column j is mapped from [0, 1] to [lb_j, ub_j].

Notes:#

Bounds are applied column-wise: each row is one sample, and each column corresponds to one dimension.

Example:#

>>> samples = np.array([[0.5, 0.2], [0.8, 0.4]])
>>> lower_bounds = [0, 1]
>>> upper_bounds = [1, 3]
>>> scale_samples(samples, lower_bounds, upper_bounds)
array([[0.5, 1.4],
       [0.8, 1.8]])
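The mapping is a per-column affine transform, lb_j + s·(ub_j − lb_j), which numpy broadcasting expresses directly (illustrative helper name, not the library implementation):

```python
import numpy as np

def scale_samples_sketch(samples, lower_bounds, upper_bounds):
    """Affinely map column j of unit-interval samples to [lb_j, ub_j].
    Illustrative one-liner matching the documented example."""
    lb = np.asarray(lower_bounds, dtype=float)
    ub = np.asarray(upper_bounds, dtype=float)
    # broadcasting applies the bounds column-wise across all rows
    return lb + np.asarray(samples, dtype=float) * (ub - lb)

out = scale_samples_sketch(np.array([[0.5, 0.2], [0.8, 0.4]]), [0, 1], [1, 3])
print(out)  # [[0.5 1.4], [0.8 1.8]]
```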
jetgp.utils.should_accept_local_result(local_res, current_best_f, is_feasible, debug=False)[source]#

Check if local optimization result should be accepted

jetgp.utils.sobol_points(n_points, box, seed=0)[source]#
jetgp.utils.transform_cov(cov, sigma_y, sigmas_x, der_indices, X_test)[source]#

Rescale the diagonal of a covariance matrix to reflect the original (unnormalized) variance of function values and derivatives.

This function transforms the variance estimates from normalized space back to the original scale.

Parameters:#

cov : ndarray of shape (n_total, n_total)

Covariance matrix from the GP model (including function values and derivatives).

sigma_y : float

Standard deviation used to normalize the function values.

sigmas_x : ndarray of shape (1, nvars)

Standard deviations used to normalize each input dimension.

der_indices : list of lists

Derivative multi-indices, where each sublist represents the derivative directions and orders.

X_test : ndarray of shape (n_samples, nvars)

Test input points corresponding to the covariance matrix blocks.

Returns:#

y_var_rescaled : ndarray of shape (n_total,)

Rescaled variances for function values and derivatives in the original space.

Example:#

>>> cov.shape = (n_total, n_total)
>>> transform_cov(cov, sigma_y, sigmas_x, der_indices, X_test).shape == (n_total,)
jetgp.utils.transform_cov_directional(cov, sigma_y, sigmas_x, der_indices, X_test)[source]#

Rescale the diagonal of a covariance matrix for function values and directional derivatives.

Unlike transform_cov, this function assumes directional derivatives (not multi-index derivatives), so no input scaling (sigmas_x) is applied to derivative terms.

Parameters:#

cov : ndarray of shape (n_total, n_total)

Covariance matrix from the GP model (including function values and directional derivatives).

sigma_y : float

Standard deviation used to normalize the function values.

sigmas_x : ndarray of shape (1, nvars)

Standard deviations used to normalize each input dimension (unused for derivatives here).

der_indices : list of lists

Derivative directions (for directional derivatives).

X_test : ndarray of shape (n_samples, nvars)

Test input points corresponding to the covariance matrix blocks.

Returns:#

y_var_rescaled : ndarray of shape (n_total,)

Rescaled variances for function values and directional derivatives in the original space.

Example:#

>>> cov.shape = (n_total, n_total)
>>> transform_cov_directional(cov, sigma_y, sigmas_x, der_indices, X_test).shape == (n_total,)
jetgp.utils.transform_predictions(y_pred, mu_y, sigma_y, sigmas_x, der_indices, X_test)[source]#

Rescale predicted function values and derivatives from normalized space back to their original scale.

This function transforms both function value predictions and multi-index derivatives back to the original units after GP prediction.

Parameters:#

y_predndarray of shape (n_total,)

Predicted mean values from the GP model in normalized space (includes function values and derivatives).

mu_yfloat

Mean of the original function values (before normalization).

sigma_yfloat

Standard deviation of the original function values (before normalization).

sigmas_xndarray of shape (1, nvars)

Standard deviations of the input variables (used for rescaling derivatives).

der_indiceslist of lists

Multi-index derivative structures for each derivative component.

X_testndarray of shape (n_samples, nvars)

Test input points corresponding to the prediction blocks.

Returns:#

y_pred_rescaledndarray of shape (n_total, 1)

Rescaled function values and derivatives in the original scale.

Example:#

>>> y_pred.shape
(n_total,)
>>> transform_predictions(y_pred, mu_y, sigma_y, sigmas_x, der_indices, X_test).shape
(n_total, 1)
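The rescaling follows from the chain rule applied to z-score normalization: if x_norm = (x − mu_x)/sigma_x and y_norm = (y − mu_y)/sigma_y, a derivative with multi-index alpha picks up a factor sigma_y / prod(sigmas_x**alpha). A minimal NumPy sketch (illustrative only; the stacked block layout of `y_pred` is an assumption):

```python
import numpy as np

def rescale_predictions(y_pred, mu_y, sigma_y, sigmas_x, multi_indices, n_samples):
    """Undo z-score normalization for values and multi-index derivatives.

    Assumes y_pred stacks n_samples function values followed by one block
    of n_samples entries per multi-index.  Function values are shifted by
    mu_y; derivative blocks are scaled by sigma_y / prod(sigmas_x**alpha).
    """
    out = np.empty_like(y_pred, dtype=float)
    out[:n_samples] = sigma_y * y_pred[:n_samples] + mu_y
    for k, alpha in enumerate(multi_indices):
        lo = (k + 1) * n_samples
        factor = sigma_y / np.prod(np.asarray(sigmas_x, float) ** np.asarray(alpha))
        out[lo:lo + n_samples] = factor * y_pred[lo:lo + n_samples]
    return out.reshape(-1, 1)

# two function values and two first-derivative values in one input dimension
y = rescale_predictions(np.array([0.0, 1.0, 2.0, 4.0]),
                        mu_y=10.0, sigma_y=2.0,
                        sigmas_x=[0.5], multi_indices=[[1]], n_samples=2)
```

With sigma_y = 2 and sigma_x = 0.5, the derivative block is scaled by 2/0.5 = 4 while the function values become 2*y + 10.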
jetgp.utils.transform_predictions_directional(y_pred, mu_y, sigma_y, sigmas_x, der_indices, X_test)[source]#

Rescale predicted function values and directional derivatives from normalized space back to their original scale.

This function assumes the derivatives are directional (not multi-index), so it applies only output scaling (sigma_y) to derivatives.

Parameters:#

y_predndarray of shape (n_total,)

Predicted mean values from the GP model in normalized space (includes function values and directional derivatives).

mu_yfloat

Mean of the original function values (before normalization).

sigma_yfloat

Standard deviation of the original function values (before normalization).

sigmas_xndarray of shape (1, nvars)

Standard deviations of the input variables (not used here but included for compatibility).

der_indiceslist of lists

Directions for directional derivatives (each sublist represents a direction vector).

X_testndarray of shape (n_samples, nvars)

Test input points corresponding to the prediction blocks.

Returns:#

y_pred_rescaledndarray of shape (n_total, 1)

Rescaled function values and directional derivatives in the original scale.

Example:#

>>> y_pred.shape
(n_total,)
>>> transform_predictions_directional(y_pred, mu_y, sigma_y, sigmas_x, der_indices, X_test).shape
(n_total, 1)
class jetgp.kernel_funcs.kernel_funcs.KernelFactory(dim, normalize, differences_by_dim, n_order, true_noise_std=None, smoothness_parameter=None, oti_module=None)[source]#

Factory for generating different kernel functions (SE, RQ, SineExp, Matérn) in isotropic and anisotropic forms with caching for improved performance.

dim#

Dimensionality of the input space.

Type:

int

normalize#

Whether to normalize inputs (scaling differences to [-3, 3]).

Type:

bool

differences_by_dim#

Pairwise differences between input points, by dimension.

Type:

list of arrays

true_noise_std#

Known noise standard deviation (for adjusting noise bounds).

Type:

float, optional

bounds#

Hyperparameter bounds (log10 space).

Type:

list of tuples

nu#

Smoothness parameter for the Matérn kernel.

Type:

float

n_order#

Order of derivatives for kernel smoothness.

Type:

int

SI_kernel_anisotropic(differences_by_dim, length_scales)[source]#

Anisotropic SI kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell_1, …, ell_dim, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray

SI_kernel_isotropic(differences_by_dim, length_scales)[source]#

Isotropic SI kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray

clear_caches()[source]#

Clear all caches. Call when training data changes.

create_kernel(kernel_name, kernel_type)[source]#

Returns a kernel function based on the specified name and type.

Parameters:
  • kernel_name (str) – Name of the kernel (‘SE’, ‘RQ’, ‘SineExp’, ‘Matern’).

  • kernel_type (str) – Type of kernel (‘anisotropic’ or ‘isotropic’).

Returns:

The selected kernel function.

Return type:

callable

get_bounds_from_data()[source]#

Computes bounds for hyperparameters based on the observed data range.

matern_kernel_anisotropic(differences_by_dim, length_scales)[source]#

Anisotropic Matérn kernel (half-integer ν) with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell_1, …, ell_dim, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray

matern_kernel_isotropic(differences_by_dim, length_scales)[source]#

Isotropic Matérn kernel (half-integer ν) with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray
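For half-integer ν the Matérn kernel has a closed form. A minimal isotropic sketch for ν = 3/2, built from the same per-dimension difference arrays the factory methods consume (illustrative only; caching, derivative handling, and the anisotropic case are omitted):

```python
import numpy as np

def matern32_isotropic(differences_by_dim, ell, sigma_f):
    """Matern kernel with nu = 3/2 from per-dimension pairwise differences.

    differences_by_dim is a list of (n, m) arrays, one per input
    dimension; the isotropic form shares a single length scale ell.
    Closed form: sigma_f**2 * (1 + sqrt(3) r / ell) * exp(-sqrt(3) r / ell).
    """
    r = np.sqrt(sum(d**2 for d in differences_by_dim))  # pairwise distances
    a = np.sqrt(3.0) * r / ell
    return sigma_f**2 * (1.0 + a) * np.exp(-a)

# two 2-D points at distance 0 and 5 (a 3-4-5 triangle)
d = [np.array([[0.0, 3.0]]), np.array([[0.0, 4.0]])]
K = matern32_isotropic(d, ell=1.0, sigma_f=1.0)
```

At zero distance the kernel equals sigma_f**2; correlation decays monotonically with distance.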

rq_kernel_anisotropic(differences_by_dim, length_scales)[source]#

Anisotropic Rational Quadratic (RQ) kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell_1, …, ell_dim, alpha, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray

rq_kernel_isotropic(differences_by_dim, length_scales)[source]#

Isotropic Rational Quadratic (RQ) kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell, alpha, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray
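The Rational Quadratic kernel is a scale mixture of SE kernels and converges to the SE kernel as alpha grows. A minimal isotropic sketch using the standard parameterization (the factory's exact convention is an assumption):

```python
import numpy as np

def rq_isotropic(differences_by_dim, ell, alpha, sigma_f):
    """Rational Quadratic kernel from per-dimension pairwise differences.

    k(r) = sigma_f**2 * (1 + r**2 / (2 * alpha * ell**2))**(-alpha);
    for alpha -> infinity this tends to the SE kernel with the same ell.
    """
    r2 = sum(d**2 for d in differences_by_dim)
    return sigma_f**2 * (1.0 + r2 / (2.0 * alpha * ell**2)) ** (-alpha)

d = [np.array([[0.0, 1.0]])]
K_rq = rq_isotropic(d, ell=1.0, alpha=1e6, sigma_f=1.0)
K_se = np.exp(-0.5 * np.array([[0.0, 1.0]])**2)   # SE limit for comparison
```

With a very large alpha the two matrices agree to high precision, which is a convenient sanity check on an implementation.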

se_kernel_anisotropic(differences_by_dim, length_scales)[source]#

Anisotropic Squared Exponential (SE) kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell_1, …, ell_dim, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray

se_kernel_isotropic(differences_by_dim, length_scales)[source]#

Isotropic Squared Exponential (SE) kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray
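The isotropic and anisotropic SE kernels differ only in whether the length scale is shared across dimensions. A minimal sketch in the standard SE form (the factory's internals, caching, and normalization are omitted; this is illustrative, not the library code):

```python
import numpy as np

def se_kernel(differences_by_dim, length_scales, sigma_f):
    """Squared Exponential kernel from per-dimension pairwise differences.

    length_scales holds one ell per dimension (anisotropic/ARD form);
    passing the same ell for every dimension recovers the isotropic kernel.
    k = sigma_f**2 * exp(-0.5 * sum_i (d_i / ell_i)**2).
    """
    q = sum((d / ell)**2 for d, ell in zip(differences_by_dim, length_scales))
    return sigma_f**2 * np.exp(-0.5 * q)

d = [np.array([[0.0, 1.0]]), np.array([[0.0, 2.0]])]
K_iso = se_kernel(d, [1.0, 1.0], sigma_f=1.0)   # shared length scale
K_ani = se_kernel(d, [1.0, 2.0], sigma_f=1.0)   # per-dimension scales
```

Lengthening the second dimension's scale from 1 to 2 weakens the penalty on that dimension's difference, raising the off-diagonal correlation from exp(-2.5) to exp(-1).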

sine_exp_kernel_anisotropic(differences_by_dim, length_scales)[source]#

Anisotropic Sine-Exponential (Periodic) kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell_1, …, ell_dim, p_1, …, p_dim, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray

sine_exp_kernel_isotropic(differences_by_dim, length_scales)[source]#

Isotropic Sine-Exponential (Periodic) kernel with caching.

Parameters:
  • differences_by_dim (list of ndarray) – Pairwise differences by dimension.

  • length_scales (list) – Hyperparameters: [ell, p, sigma_f]

Returns:

Kernel matrix values.

Return type:

ndarray
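The Sine-Exponential (periodic) kernel makes correlation a periodic function of the input differences. A minimal isotropic sketch using the standard periodic-kernel form (the factory's exact parameterization is an assumption):

```python
import numpy as np

def sine_exp_isotropic(differences_by_dim, ell, p, sigma_f):
    """Periodic kernel with shared length scale ell and period p.

    k = sigma_f**2 * exp(-2 * sum_i sin(pi * d_i / p)**2 / ell**2);
    correlation is maximal whenever a difference is a multiple of p.
    """
    q = sum(np.sin(np.pi * d / p)**2 for d in differences_by_dim)
    return sigma_f**2 * np.exp(-2.0 * q / ell**2)

# differences of 0, one full period, and half a period
d = [np.array([[0.0, 1.0, 0.5]])]
K = sine_exp_isotropic(d, ell=1.0, p=1.0, sigma_f=1.0)
```

Points exactly one period apart are perfectly correlated, while points half a period apart sit at the kernel's minimum of exp(-2/ell**2).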

jetgp.kernel_funcs.kernel_funcs.get_oti_module(n_bases, n_order, auto_compile=True, otilib_path=None, use_sparse=False)[source]#

Dynamically import the correct PyOTI static library. If the module doesn’t exist and auto_compile=True, attempts to compile it. Falls back to pyoti.sparse if compilation fails or is disabled.

Parameters:
  • n_bases (int) – Number of bases (dimension of the input space).

  • n_order (int) – Derivative order for the GP. The OTI order will be 2*n_order.

  • auto_compile (bool, optional (default=True)) – If True, attempt to compile missing modules automatically. Requires jetgp.cmod_writer and jetgp.build_static to be available.

  • otilib_path (str, optional) – Path to otilib-master directory. If None, attempts auto-detection.

  • use_sparse (bool, optional (default=False)) – If True, use pyoti.sparse directly instead of a compiled static module.

Returns:

module – The appropriate pyoti.static.onummXnY module, or pyoti.sparse as a fallback.