gsum.models.ConjugateGaussianProcess(kernel=None, center=0, disp=1, df=1, scale=1, sd=None, basis=None, nugget=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, copy_X_train=True, random_state=None)

A conjugacy-based Gaussian Process class.

Parameters:
kernel: The kernel for the correlation matrix. The covariance matrix is the kernel multiplied by the squared scale.

center: The prior central values for the parameters of the mean function.

disp: The dispersion parameter for the normal prior placed on the mean. This, multiplied by the squared scale parameter from the inverse chi squared prior, determines the variance of the mean. The smaller the dispersion, the better determined is the mean. Set this to zero for a mean that is known to be center.

df: The degrees of freedom parameter for the inverse chi squared prior placed on the marginal variance. This is a measure of how well the marginal standard deviation (or variance) is known, with larger degrees of freedom implying a better known standard deviation. Set this to infinity for a standard deviation that is known to be scale, or use the sd keyword argument.

scale: The scale parameter of the scaled inverse chi squared prior placed on the marginal variance of the Gaussian process. Approximately the prior standard deviation for the Gaussian process.

sd: A convenience argument that sets the marginal standard deviation for the Gaussian process. This is equivalent to setting df to infinity and scale to sd (i.e., a delta function prior on the standard deviation).

nugget: Value added to the diagonal of the correlation matrix during fitting. Larger values correspond to an increased noise level in the observations. This can also prevent a potential numerical issue during fitting by ensuring that the calculated values form a positive definite matrix.

optimizer: Can either be one of the internally supported optimizers for optimizing the kernel's parameters, specified by a string, or an externally defined optimizer passed as a callable. If a callable is passed, it must have the signature:
    def optimizer(obj_func, initial_theta, bounds):
        # * 'obj_func' is the objective function to be minimized, which
        #   takes the hyperparameters theta as a parameter and an
        #   optional flag eval_gradient, which determines if the
        #   gradient is returned in addition to the function value
        # * 'initial_theta': the initial value for theta, which can be
        #   used by local optimizers
        # * 'bounds': the bounds on the values of theta
        ...
        # Returned are the best found hyperparameters theta and
        # the corresponding value of the target function.
        return theta_opt, func_min
By default, the 'fmin_l_bfgs_b' algorithm from scipy.optimize is used. If None is passed, the kernel's parameters are kept fixed. Available internal optimizers are:

    'fmin_l_bfgs_b'
n_restarts_optimizer: The number of restarts of the optimizer for finding the kernel's parameters which maximize the log-marginal likelihood. The first run of the optimizer is performed from the kernel's initial parameters; the remaining ones (if any) start from thetas sampled log-uniformly at random from the space of allowed theta values. If greater than 0, all bounds must be finite. Note that n_restarts_optimizer == 0 implies that one run is performed.

copy_X_train: If True, a persistent copy of the training data is stored in the object. Otherwise, just a reference to the training data is stored, which might cause predictions to change if the data is modified externally.

random_state: The generator used to initialize the centers. If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
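A minimal usage sketch, assuming the class follows the scikit-learn estimator conventions implied by its parameters (a fit method taking training inputs and targets, with kernels from sklearn.gaussian_process.kernels):

    import numpy as np
    from sklearn.gaussian_process.kernels import RBF
    from gsum.models import ConjugateGaussianProcess

    # Toy training data; X must be 2D with shape (n_samples, n_features).
    X = np.linspace(0, 1, 20)[:, None]
    y = np.sin(4 * X).ravel()

    # df and scale set the scaled inverse chi squared prior on the
    # marginal variance; disp=0 pins the mean to center.
    gp = ConjugateGaussianProcess(kernel=RBF(length_scale=0.3),
                                  center=0, disp=0, df=3, scale=1)
    gp.fit(X, y)  # fit interface assumed from the scikit-learn conventions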
log_marginal_likelihood(self, theta=None, eval_gradient=False, X=None, y=None)

Returns the log-marginal likelihood of theta for the training data.

Parameters:
theta: Kernel hyperparameters for which the log-marginal likelihood is evaluated. If None and fit() has been called, the precomputed log_marginal_likelihood of self.kernel_.theta is returned.

eval_gradient: If True, the gradient of the log-marginal likelihood with respect to the kernel hyperparameters at position theta is returned additionally. If True, theta must not be None.
X: The input locations to use. Defaults to the X passed in fit.

y: The observed data to use. Defaults to the y passed in fit.
Returns:

Log-marginal likelihood of theta for the training data.

Gradient of the log-marginal likelihood with respect to the kernel hyperparameters at position theta. Only returned when eval_gradient is True.
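Continuing the sketch above, the log-marginal likelihood can be evaluated at the fitted hyperparameters or at an explicit theta:

    # theta=None returns the precomputed value at the fitted kernel's theta.
    lml = gp.log_marginal_likelihood()

    # Value and gradient at an explicit theta (the kernel's log-parameters).
    lml, grad = gp.log_marginal_likelihood(theta=gp.kernel_.theta,
                                           eval_gradient=True)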
gsum.models.ConjugateStudentProcess(kernel=None, center=0, disp=1, df=1, scale=1, sd=None, basis=None, nugget=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, copy_X_train=True, random_state=None)

A conjugacy-based Student-t Process class.
predict(self, X, return_std=False, return_cov=False, Xc=None, y=None, pred_noise=False)

Returns the predictive GP at the points X.

Parameters:
X: Locations at which to predict the new y values.

return_std: Whether the marginal standard deviation of the predictive process is to be returned.

return_cov: Whether the covariance matrix of the predictive process is to be returned.

Xc: Locations at which to condition. Defaults to the X used in fit. This does not affect the X used to update hyperparameters.

y: Points upon which to condition. Defaults to the y used in fit. This does not affect the y used to update hyperparameters.

pred_noise: Adds noise_sd to the diagonal of the covariance matrix if return_cov == True.
Returns:

Mean of the predictive distribution at the query points.

Standard deviation of the predictive distribution at the query points. Only returned when return_std is True.

Covariance of the joint predictive distribution at the query points. Only returned when return_cov is True.
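A sketch of the Student-t analogue; the fit interface is assumed to match ConjugateGaussianProcess, while predict follows the signature documented here:

    import numpy as np
    from sklearn.gaussian_process.kernels import RBF
    from gsum.models import ConjugateStudentProcess

    X = np.linspace(0, 1, 20)[:, None]
    y = np.sin(4 * X).ravel()

    tp = ConjugateStudentProcess(kernel=RBF(length_scale=0.3), df=3, scale=1)
    tp.fit(X, y)  # assumed fit interface

    X_new = np.linspace(0, 1, 100)[:, None]
    mean, std = tp.predict(X_new, return_std=True)   # pointwise mean and sd
    mean, cov = tp.predict(X_new, return_cov=True)   # joint covariance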
gsum.models.TruncationGP(kernel=None, ratio=0.5, ref=1, excluded=None, ratio_kws=None, **kwargs)

A Gaussian Process truncation class.
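A usage sketch; the fit call with stacked order-by-order curves and an orders keyword is an assumption about the interface, not confirmed by this page:

    import numpy as np
    from sklearn.gaussian_process.kernels import RBF
    from gsum.models import TruncationGP

    X = np.linspace(0, 1, 20)[:, None]
    orders = np.arange(4)
    ratio, ref = 0.5, 1.0

    # Toy partial sums y_n = ref * sum_{m <= n} c_m * ratio**m, one column
    # per order, built from smooth made-up coefficients c_m(x).
    coeffs = np.array([np.cos((n + 1) * np.pi * X).ravel() for n in orders]).T
    y_orders = ref * np.cumsum(coeffs * ratio ** orders, axis=1)

    gp_trunc = TruncationGP(kernel=RBF(length_scale=0.3), ratio=ratio, ref=ref)
    gp_trunc.fit(X, y_orders, orders=orders)  # assumed fit signature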
gsum.models.TruncationTP(kernel=None, ratio=0.5, ref=1, excluded=None, ratio_kws=None, **kwargs)

A Student-t Process truncation class.
predict(self, X, order, return_std=False, return_cov=False, Xc=None, y=None, pred_noise=False, kind='both')

Returns the predictive GP at the points X.

Parameters:
X: Locations at which to predict the new y values.

order: The order of the GP to predict.

return_std: Whether the marginal standard deviation of the predictive process is to be returned.

return_cov: Whether the covariance matrix of the predictive process is to be returned.

Xc: Locations at which to condition. Defaults to the X used in fit. This does not affect the X used to update hyperparameters.

y: Points upon which to condition. Defaults to the y used in fit. This does not affect the y used to update hyperparameters.

pred_noise: Adds noise_sd to the diagonal of the covariance matrix if return_cov == True.
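Continuing the TruncationGP sketch above for the Student-t variant; predict follows the signature shown here, while the fit call remains an assumption:

    from gsum.models import TruncationTP

    tp_trunc = TruncationTP(kernel=RBF(length_scale=0.3), ratio=ratio, ref=ref)
    tp_trunc.fit(X, y_orders, orders=orders)  # assumed fit signature

    # Mean and marginal sd of the prediction at order 3, conditioned at the
    # training points; kind='both' is the documented default.
    mean, std = tp_trunc.predict(X, order=3, return_std=True, kind='both')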
gsum.models.TruncationPointwise(df=1, scale=1, excluded=None)

A conjugacy-based implementation of the pointwise convergence model from Furnstahl et al. (2015).
Implements the following model:

\[ y_k = y_{\mathrm{ref}} \sum_{n=0}^{k} c_n Q^n \]

where the \(c_n\) are iid Gaussian random variables and \(\bar c^2\) has a scaled inverse chi squared conjugate prior. Conditioning on the partial sums \(y_0, \dots, y_k\) allows one to estimate \(\bar c\), and thus the full summation \(y_\infty\).
Parameters:

df: The degrees of freedom hyperparameter \(\nu_0\) for the scaled inverse chi squared prior on \(\bar c\).

scale: The scale hyperparameter \(\tau_0\) for the scaled inverse chi squared prior on \(\bar c\).

excluded: The orders to be excluded from both the hyperparameter updating and from the truncation error distribution. Defaults to None.
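A sketch of the pointwise model. The fit signature, taking the partial sums along with ratio and ref, is an assumption suggested by log_likelihood's reference to "the data passed to fit":

    from gsum.models import TruncationPointwise

    trunc = TruncationPointwise(df=1, scale=1)
    # y_orders: partial sums y_0, ..., y_k, one column per order, as in the
    # TruncationGP sketch above (fit signature assumed).
    trunc.fit(y_orders, ratio=0.5, ref=1.0)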
interval(self, alpha, orders=None)

A convenience method to call interval on the truncation error distribution object.
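For example, assuming alpha follows the scipy.stats convention (the central probability mass covered by the interval):

    # 68% credible interval for the truncation error at each input point.
    lower, upper = trunc.interval(alpha=0.68)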
log_likelihood(self, ratio=None, ref=None)

Computes the log likelihood for the ratio and ref parameters given the data passed to fit. That is,

\[ \log \mathrm{pr}(y_0, \dots, y_k \mid Q, y_{\mathrm{ref}}) \]
Parameters:

ratio: The ratio, or EFT expansion parameter, in the geometric sum, used to extract the coefficients.

ref: The multiplicative reference scale used to extract the coefficients.

Returns:

The log likelihood.
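Since ratio and ref are exposed directly, one natural use is scanning candidate expansion parameters and keeping the maximum likelihood value; a sketch:

    import numpy as np

    ratios = np.linspace(0.1, 0.9, 81)
    log_likes = np.array([trunc.log_likelihood(ratio=r) for r in ratios])
    best_ratio = ratios[np.argmax(log_likes)]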
logpdf(self, y, orders=None)

A convenience method to call logpdf on the truncation error distribution object.
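As a sketch, evaluating candidate values for the full summation under the truncation error distribution (argument semantics assumed):

    # Log density of the highest-order partial sums from the sketch above.
    log_p = trunc.logpdf(y_orders[:, -1])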