
DOptimal

sgptools.objectives.DOptimal

Bases: Objective

Computes the D-optimal design metric.

D-optimality seeks to minimize the determinant of the posterior covariance matrix \(|K(X, X)|\). Since minimizing \(|K(X, X)|\) is equivalent to maximizing \(-\log|K(X, X)|\), the objective returns the negative log-determinant of \(K(X, X)\), which is maximized during optimization. tf.linalg.slogdet is used to compute the log-determinant in a numerically stable way.
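As a standalone sketch of the metric itself (independent of sgptools; the squared-exponential kernel matrix, lengthscale, and jitter value below are illustrative assumptions), the negative log-determinant can be computed with `tf.linalg.slogdet` instead of taking the log of `tf.linalg.det` directly, which avoids under/overflow of the raw determinant for larger or ill-conditioned kernel matrices:

```python
import numpy as np
import tensorflow as tf

# Illustrative setup (not part of sgptools): a small squared-exponential
# kernel matrix K(X, X) over 10 random 2-D points, with diagonal jitter
# for numerical stability.
rng = np.random.default_rng(0)
X = rng.random((10, 2))
lengthscale = 1.0
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = tf.constant(np.exp(-sq_dists / (2.0 * lengthscale**2)) + 1e-6 * np.eye(10))

# Numerically stable: slogdet returns (sign, log|det|), so the raw
# determinant is never materialized.
_, logdet = tf.linalg.slogdet(K)
stable = -logdet

# Naive alternative: the raw determinant of a near-singular kernel matrix
# can underflow to 0.0 as the matrix grows, making its log diverge.
naive = -tf.math.log(tf.linalg.det(K))

print(float(stable), float(naive))
```

At this small size both paths agree; the slogdet form is what keeps the objective well-defined when the determinant itself is too small to represent.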

Source code in sgptools/objectives.py
class DOptimal(Objective):
    """
    Computes the D-optimal design metric.

    D-optimality seeks to minimize the determinant of the posterior
    covariance matrix $|K(X, X)|$. The objective returns
    the negative log-determinant of $K(X, X)$, which is maximized during
    optimization. `tf.linalg.slogdet` is used for numerical stability.
    """
    def __call__(self, X: tf.Tensor) -> tf.Tensor:
        """
        Computes the negative log-determinant of the covariance matrix $-\log|K(X, X)|$.

        Args:
            X (tf.Tensor): The input points (e.g., sensing locations) for which
                           the objective is to be computed. Shape: (M, D).

        Returns:
            tf.Tensor: The computed D-optimal metric value.

        Usage:
            ```python
            import gpflow
            import numpy as np
            import tensorflow as tf

            # X_objective is not used by D-Optimal but is required by the base class
            X_objective = np.random.rand(100, 2)
            kernel = gpflow.kernels.SquaredExponential()
            noise_variance = 0.1

            d_optimal_objective = DOptimal(
                X_objective=X_objective,
                kernel=kernel,
                noise_variance=noise_variance
            )
            X_sensing = tf.constant(np.random.rand(10, 2), dtype=tf.float64)
            d_optimal_value = d_optimal_objective(X_sensing)
            ```
        """
        # K(X, X)
        K_X_X = self.kernel(X)
        _, logdet_K_X_X = tf.linalg.slogdet(self.jitter_fn(K_X_X))
        return -logdet_K_X_X

__call__(X)

Computes the negative log-determinant of the covariance matrix \(-\log|K(X, X)|\).

Parameters:

- X (tf.Tensor): The input points (e.g., sensing locations) for which the objective is to be computed. Shape: (M, D). Required.

Returns:

- tf.Tensor: The computed D-optimal metric value.

Usage

import gpflow
import numpy as np
import tensorflow as tf

# X_objective is not used by D-Optimal but is required by the base class
X_objective = np.random.rand(100, 2)
kernel = gpflow.kernels.SquaredExponential()
noise_variance = 0.1

d_optimal_objective = DOptimal(
    X_objective=X_objective,
    kernel=kernel,
    noise_variance=noise_variance
)
X_sensing = tf.constant(np.random.rand(10, 2), dtype=tf.float64)
d_optimal_value = d_optimal_objective(X_sensing)
Source code in sgptools/objectives.py
def __call__(self, X: tf.Tensor) -> tf.Tensor:
    """
    Computes the negative log-determinant of the covariance matrix $-\log|K(X, X)|$.

    Args:
        X (tf.Tensor): The input points (e.g., sensing locations) for which
                       the objective is to be computed. Shape: (M, D).

    Returns:
        tf.Tensor: The computed D-optimal metric value.

    Usage:
        ```python
        import gpflow
        import numpy as np
        import tensorflow as tf

        # X_objective is not used by D-Optimal but is required by the base class
        X_objective = np.random.rand(100, 2)
        kernel = gpflow.kernels.SquaredExponential()
        noise_variance = 0.1

        d_optimal_objective = DOptimal(
            X_objective=X_objective,
            kernel=kernel,
            noise_variance=noise_variance
        )
        X_sensing = tf.constant(np.random.rand(10, 2), dtype=tf.float64)
        d_optimal_value = d_optimal_objective(X_sensing)
        ```
    """
    # K(X, X)
    K_X_X = self.kernel(X)
    _, logdet_K_X_X = tf.linalg.slogdet(self.jitter_fn(K_X_X))
    return -logdet_K_X_X