Function References
Core functions
MaternRegression.TimeVarType — Type

```julia
const TimeVarType = Union{AbstractVector, AbstractRange}
```

A supertype for specifying the set of training inputs.
MaternRegression.create_Matern3HalfsKernel — Function

```julia
create_Matern3HalfsKernel(λ::T, b::T)::Matern3HalfsKernel{T} where T <: AbstractFloat
```

Creates a Matern covariance function variable with parameters λ and b. Given two time inputs x and z, this covariance function depends only on the distance between them:

```julia
output = b*(1 + λ*norm(x-z))*exp(-λ*norm(x-z))
```

```julia
create_Matern3HalfsKernel(λ::T)::Matern3HalfsKernel{T} where T <: AbstractFloat
```

Returns create_Matern3HalfsKernel(λ, one(T)).

```julia
create_Matern3HalfsKernel(::Type{T})::Matern3HalfsKernel{T} where T <: AbstractFloat
```

Returns a template or dummy variable by calling create_Matern3HalfsKernel(one(T), one(T)).
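As a sanity check, the covariance formula above can be evaluated directly in plain Julia. This sketch restates the formula only; it is not the package's internal implementation.

```julia
using LinearAlgebra: norm

# Matern-3/2 covariance, restating the formula above.
# λ and b are the kernel hyperparameters; x and z are time inputs.
matern3halfs(x, z, λ, b) = b*(1 + λ*norm(x - z))*exp(-λ*norm(x - z))

matern3halfs(0.0, 0.0, 2.0, 1.0)  # at zero distance, the covariance equals b, here 1.0
```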
MaternRegression.create_sdegp — Function

```julia
create_sdegp(
    θ_sde::Kernel1D,
    ts::TimeVarType,
    y::Vector{T},
    σ²::T,
) where T <: Real
```

To run a Gaussian process query via the state-space implementation, one needs to do a cache phase then a query phase. This function runs the cache phase.

Inputs:

- `θ_sde` is a Matern kernel variable.
- `ts` is the set of training inputs.
- `y` is an array of training outputs.
- `σ²` is the Gaussian process regression's observation variance.

Returns a variable of the SDEGP data type, which is used in the query phase.
MaternRegression.query_sdegp — Function

```julia
query_sdegp(
    S::SDEGP,
    tqs,
    ts,
) where T <: Real
```

This function allocates the output buffers mqs and vqs, then calls query_sdegp!. See query_sdegp! for details on the inputs.

Returns mqs and vqs.
MaternRegression.query_sdegp! — Function

```julia
query_sdegp!(
    mqs::Vector{T}, # mutates, output.
    vqs::Vector{T}, # mutates, output.
    S::SDEGP,
    tqs,
    ts::TimeVarType,
) where T <: Real
```

To run a Gaussian process query via the state-space implementation, one needs to do a cache phase then a query phase. This function runs the query phase.

Inputs:

- `mqs` is a buffer that is mutated to store the queried predictive means corresponding to entries in `tqs`.
- `vqs` is a buffer that is mutated to store the queried predictive variances corresponding to entries in `tqs`.
- `S` is the output of the cache phase; see `create_sdegp`.
- `tqs` is the set of query inputs.
- `ts` is the set of training inputs.

mqs and vqs must have the same length as tqs.

ts should be the same training inputs used to create the cache S; otherwise, S needs to be recomputed via create_sdegp.

Returns nothing.
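The cache-then-query workflow might look like the following sketch. It assumes MaternRegression is loaded; the training data here is synthetic and purely illustrative.

```julia
using MaternRegression

# Synthetic training set: noisy samples of a smooth signal.
ts = range(0.0, 1.0; length = 50)         # training inputs
y = sin.(2π .* ts) .+ 0.05 .* randn(50)   # training outputs
σ² = 0.05^2                               # observation variance

θ = create_Matern3HalfsKernel(2.0, 1.0)   # Matern kernel, λ = 2, b = 1

S = create_sdegp(θ, ts, y, σ²)            # cache phase

tqs = range(0.0, 1.0; length = 200)       # query inputs
mqs, vqs = query_sdegp(S, tqs, ts)        # query phase: predictive means and variances
```

For repeated queries, one can instead preallocate mqs and vqs with the same length as tqs and call query_sdegp! directly to avoid the allocations.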
MaternRegression.setup_ml — Function

```julia
setup_ml(
    θ_sde::Kernel1D,
    ts::TimeVarType,
    y::Vector{T};
    σ² = one(T),
) where T <: Real
```

Returns a buffer variable of type MLBuffers for use with eval_ml!, which computes the marginal likelihood. Reusing this buffer avoids repeated allocation when the marginal likelihood is computed multiple times for different hyperparameters.
MaternRegression.eval_ml — Function

```julia
eval_ml(
    trait::GainTrait,
    p::Vector{T},
    ts::Union{AbstractRange, AbstractVector},
    y::Vector;
    zero_tol = eps(T)*100,
) where T <: AbstractFloat
```

Inputs:

- `trait` is a trait variable that specifies the order of the hyperparameters in `p`; see trait-based dispatch in the Julia documentation. If `typeof(trait) <: InferGain`, then the hyperparameters in `p` are ordered `[λ; σ²; b]`. If `typeof(trait) <: UnityGain`, then the hyperparameters in `p` are ordered `[λ; σ²]`, and `b` is set to 1.
- `p` is an ordered set of hyperparameters as an array.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.
- `zero_tol` needs to be a small positive number. It is the lower bound on the covariance function hyperparameters.

Returns the log marginal likelihood over the training set. Some additive constants might be dropped.
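A one-shot marginal likelihood evaluation might look like this sketch; it assumes MaternRegression is loaded, and the data and hyperparameter values are illustrative.

```julia
using MaternRegression

ts = range(0.0, 1.0; length = 50)
y = sin.(2π .* ts) .+ 0.05 .* randn(50)

# InferGain ordering: p = [λ; σ²; b].
p = [2.0; 0.01; 1.0]
ml = eval_ml(InferGain(), p, ts, y)  # log marginal likelihood, up to additive constants
```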
MaternRegression.eval_ml! — Function

```julia
eval_ml!(
    B::MLBuffers, # mutates, buffer
    θ_sde::Kernel1D,
    ts::TimeVarType,
    y::Vector,
    σ²::Real,
)
```

Inputs:

- `B` is the return variable from `setup_ml`.
- `θ_sde` is the covariance function variable.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.
- `σ²` is the observation noise variance. Increase this if you experience numerical issues.

Returns the log marginal likelihood over the training set. Additive constants are dropped.
```julia
eval_ml!(
    trait::GainTrait,
    buffer::MLBuffers,
    p::Vector{T},
    ts::TimeVarType,
    y::Vector{T};
    zero_tol = eps(T)*100,
) where T <: AbstractFloat
```

Inputs:

- `trait` is a trait variable that specifies the order of the hyperparameters in `p`; see trait-based dispatch in the Julia documentation. If `typeof(trait) <: InferGain`, then the hyperparameters in `p` are ordered `[λ; σ²; b]`. If `typeof(trait) <: UnityGain`, then the hyperparameters in `p` are ordered `[λ; σ²]`, and `b` is set to 1.
- `buffer` is the return variable from `setup_ml`.
- `p` is an ordered set of hyperparameters as an array.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.
- `zero_tol` needs to be a small positive number. It is the lower bound on the covariance function hyperparameters.

Returns the log marginal likelihood over the training set. Some additive constants might be dropped.
```julia
eval_ml!(
    buffer::MLBuffers,
    λ::T,
    σ²::T,
    ts::TimeVarType,
    y::Vector{T};
    zero_tol = eps(T)*100,
) where T <: Real
```

Inputs:

- `buffer` is the return variable from `setup_ml`.
- `λ` and `σ²` are hyperparameters. The `b` hyperparameter is set to 1.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.
- `zero_tol` needs to be a small positive number. It is the lower bound on the covariance function hyperparameters.

Returns the log marginal likelihood over the training set. Some additive constants might be dropped.
```julia
eval_ml!(
    buffer::MLBuffers,
    λ::T,
    σ²::T,
    b::T,
    ts::TimeVarType,
    y::Vector{T};
    zero_tol = eps(T)*100,
) where T <: Real
```

Inputs:

- `buffer` is the return variable from `setup_ml`.
- `λ`, `σ²`, and `b` are hyperparameters.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.
- `zero_tol` needs to be a small positive number. It is the lower bound on the covariance function hyperparameters.

Returns the log marginal likelihood over the training set. Some additive constants might be dropped.
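When scanning many hyperparameter candidates, setup_ml lets one reuse a single buffer across eval_ml! calls, as in this sketch. It assumes MaternRegression is loaded; the data and candidate values are illustrative.

```julia
using MaternRegression

ts = range(0.0, 1.0; length = 50)
y = sin.(2π .* ts) .+ 0.05 .* randn(50)

θ = create_Matern3HalfsKernel(2.0)        # b fixed at 1
buffer = setup_ml(θ, ts, y; σ² = 0.01)    # allocate the MLBuffers once

# Reuse the buffer across several λ candidates (b fixed at 1).
λ_candidates = [0.5, 1.0, 2.0, 4.0]
mls = [eval_ml!(buffer, λ, 0.01, ts, y) for λ in λ_candidates]
best_λ = λ_candidates[argmax(mls)]
```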
Hyperparameter optimization
MaternRegression.GainTrait — Type

```julia
abstract type GainTrait end
```

Subtypes are InferGain and UnityGain.

MaternRegression.InferGain — Type

```julia
struct InferGain <: GainTrait end
```

This trait specifies that the hyperparameter ordering of the parameter array is λ, σ², b.

MaternRegression.UnityGain — Type

```julia
struct UnityGain <: GainTrait end
```

This trait specifies that the hyperparameter ordering of the parameter array is λ, σ². The b hyperparameter is fixed at 1.
MaternRegression.hp_optim — Function

```julia
hp_optim(
    alg_trait::UseMetaheuristics,
    model_trait::GainTrait,
    ts::TimeVarType,
    y::Vector,
    lbs::Vector{T},
    ubs::Vector{T};
    f_calls_limit = 10_000,
    p0s::Vector{Vector{T}} = generate_grid(model_trait, lbs, ubs, 10),
)
```

Checks whether the weak dependencies are loaded in the user's working scope, then checks whether the corresponding package extension is loaded. If so, it calls the appropriate hyperparameter optimization routine from the package extension.

Inputs:

- `alg_trait`: see `UseMetaheuristics`.
- `model_trait` is a trait variable that specifies the order of the hyperparameters in the internal hyperparameter array `p`; see trait-based dispatch in the Julia documentation. If `typeof(model_trait) <: InferGain`, then the hyperparameters in `p` are ordered `[λ; σ²; b]`. If `typeof(model_trait) <: UnityGain`, then the hyperparameters in `p` are ordered `[λ; σ²]`, and `b` is set to 1. The array `p` is used internally by `hp_optim`.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.
- `lbs` and `ubs` are lower and upper bounds for the hyperparameter vector `p`. They follow the ordering specified by `model_trait`.
- `f_calls_limit` is a soft upper bound on the number of marginal likelihood evaluations used during optimization, which the evolutionary algorithm in Metaheuristics.jl tries not to exceed.
- `p0s` is a nested Vector array that contains hyperparameter states that you want to force the optimization algorithm to evaluate. These states can be interpreted as initial guesses for the solution. The default creates a uniform grid of 10 x 10 x 10 (if `typeof(model_trait) <: InferGain`) or 10 x 10 (if `typeof(model_trait) <: UnityGain`) from the lower and upper bounds specified.

See the tutorial for the return type and subsequent usage.
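A call to the Metaheuristics-backed optimizer might look like the sketch below. It assumes both MaternRegression and Metaheuristics are loaded; the data and bounds are illustrative, and the structure of the return value is described in the tutorial.

```julia
using MaternRegression
import Metaheuristics

ts = range(0.0, 1.0; length = 50)
y = sin.(2π .* ts) .+ 0.05 .* randn(50)

alg_trait = UseMetaheuristics(Metaheuristics)  # verifies the weak dependency is loaded
model_trait = UnityGain()                      # p is ordered [λ; σ²], b fixed at 1

lbs = [1e-3, 1e-6]   # lower bounds on [λ; σ²] (illustrative)
ubs = [1e2, 1.0]     # upper bounds on [λ; σ²] (illustrative)

result = hp_optim(alg_trait, model_trait, ts, y, lbs, ubs)
```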
MaternRegression.UseMetaheuristics — Type

```julia
struct UseMetaheuristics <: ExtensionPkgs
    pkg::Module
end
```

Container for verifying whether the weak dependency for MetaheuristicsExt.jl is loaded in the user's working scope.

The only dependency required to be loaded in the user's working scope is Metaheuristics.
MaternRegression.parse_ml_result — Function

```julia
parse_ml_result(trait::GainTrait, p::Vector{T}) where T <: AbstractFloat
```

Returns a Matern covariance function variable with the hyperparameters specified in p. The ordering of the hyperparameters in p is specified by trait.
Hyperparameter inference
MaternRegression.hp_inference — Function

```julia
hp_inference(
    alg_trait::UseDynamicHMC,
    gain_trait::GainTrait,
    N_draws::Integer,
    α::Real,
    β::Real,
    ts::TimeVarType,
    y::Vector,
)
```

Checks whether the weak dependencies are loaded in the user's working scope, then checks whether the corresponding package extension is loaded. If so, it calls the appropriate hyperparameter inference routine from the package extension.

Inputs:

- `alg_trait`: see `UseDynamicHMC`.
- `gain_trait` is a trait variable that specifies the order of the hyperparameters in the internal hyperparameter array `p`; see trait-based dispatch in the Julia documentation. If `typeof(gain_trait) <: InferGain`, then the hyperparameters in `p` are ordered `[λ; σ²; b]`. If `typeof(gain_trait) <: UnityGain`, then the hyperparameters in `p` are ordered `[λ; σ²]`, and `b` is set to 1. The array `p` is used internally by `hp_inference`.
- `α` and `β` are the shared hyperparameters of the inverse gamma prior for the hyperparameters.
- `ts` is the set of training inputs.
- `y` is the set of training outputs.

See the tutorial for the return type and subsequent usage.
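A call to the DynamicHMC-backed inference routine might look like the sketch below. It assumes MaternRegression and all six weak dependencies are loaded; N_draws, α, and β here are illustrative values, and the structure of the return value is described in the tutorial.

```julia
using MaternRegression
import FiniteDiff, SimpleUnPack, TransformVariables,
    TransformedLogDensities, LogDensityProblems, DynamicHMC

ts = range(0.0, 1.0; length = 50)
y = sin.(2π .* ts) .+ 0.05 .* randn(50)

# Verifies the weak dependencies for DynamicHMCExt.jl are loaded.
alg_trait = UseDynamicHMC([
    FiniteDiff, SimpleUnPack, TransformVariables,
    TransformedLogDensities, LogDensityProblems, DynamicHMC,
])

# 1000 draws with an inverse-gamma(2, 3) prior (illustrative values).
results = hp_inference(alg_trait, UnityGain(), 1000, 2.0, 3.0, ts, y)
```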
MaternRegression.UseDynamicHMC — Type

```julia
struct UseDynamicHMC <: ExtensionPkgs
    pkg_list::Vector{Module}
end
```

Container for verifying whether the weak dependencies for DynamicHMCExt.jl are loaded in the user's working scope.

The dependencies are:

- FiniteDiff
- SimpleUnPack
- TransformVariables
- TransformedLogDensities
- LogDensityProblems
- DynamicHMC
MaternRegression.simulate_sdegps — Function

```julia
simulate_sdegps(
    λ_set::Vector{T},
    σ²_set::Vector{T},
    M::Integer,
    ts::TimeVarType,
    tqs,
    y::Vector,
) where T <: Real
```

Returns the drawn samples of the ensemble of Gaussian process models that are specified by λ_set and σ²_set. The number of ensemble models is the length of λ_set.

The gain b is set to 1 for the simulation.

Inputs:

- `λ_set` contains samples of the `λ` parameter. Same length as `σ²_set`.
- `σ²_set` contains samples of the `σ²` parameter.
- `M` is the number of samples simulate_sdegps simulates per model.
- `ts` is the ordered set of training inputs.
- `tqs` is the ordered set of query inputs.
- `y` is the ordered set of training outputs.

The output, S, is an M x N x K array, where M is the number of samples per model, N is the number of ensemble models, and K is the number of query inputs.
```julia
simulate_sdegps(
    λ_set::Vector{T},
    σ²_set::Vector{T},
    b_set::Vector{T},
    M::Integer,
    ts::TimeVarType,
    tqs,
    y::Vector,
) where T <: Real
```

Returns the drawn samples of the ensemble of Gaussian process models that are specified by λ_set, σ²_set, and b_set. The number of ensemble models is the length of λ_set.

Inputs:

- `λ_set` contains samples of the `λ` parameter. Same length as `σ²_set`.
- `σ²_set` contains samples of the `σ²` parameter.
- `b_set` contains samples of the `b` parameter.
- `M` is the number of samples simulate_sdegps simulates per model.
- `ts` is the ordered set of training inputs.
- `tqs` is the ordered set of query inputs.
- `y` is the ordered set of training outputs.

The output, S, is an M x N x K array, where M is the number of samples per model, N is the number of ensemble models, and K is the number of query inputs.
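Given posterior draws of the hyperparameters (e.g. from hp_inference), the ensemble can be simulated as in this sketch. It assumes MaternRegression is loaded; the hyperparameter draws here are placeholders.

```julia
using MaternRegression

ts = range(0.0, 1.0; length = 50)
y = sin.(2π .* ts) .+ 0.05 .* randn(50)
tqs = range(0.0, 1.0; length = 100)

# Placeholder hyperparameter draws; in practice these come from hp_inference.
λ_set = [1.5, 2.0, 2.5]
σ²_set = [0.01, 0.02, 0.015]

M = 200                                           # samples per ensemble model
S = simulate_sdegps(λ_set, σ²_set, M, ts, tqs, y) # b is set to 1 for this method
# size(S) == (M, length(λ_set), length(tqs))
```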
MaternRegression.compute_mean_var — Function

```julia
compute_mean_var(S::Array{T,3}) where T <: Real
```

Assumes S is an M x N x K array of drawn samples, where:

- `M` is the number of samples drawn from a model.
- `N` is the number of models.
- `K` is the number of query positions.

compute_mean_var computes the empirical mean and variance for each query position.

Outputs:

- `mqs`: the empirical means. Length K.
- `vqs`: the empirical variances. Length K.
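In terms of Statistics-stdlib primitives, the per-query-position mean and variance can be sketched as below. This is our own restatement, under the assumption that all M*N draws at a query position are pooled; it is not the package's implementation.

```julia
using Statistics: mean, var

# Empirical mean/variance per query position, pooling the M*N draws at each k.
function mean_var_sketch(S::Array{T,3}) where T <: Real
    K = size(S, 3)
    mqs = [mean(view(S, :, :, k)) for k in 1:K]       # length-K means
    vqs = [var(vec(view(S, :, :, k))) for k in 1:K]   # length-K variances
    return mqs, vqs
end
```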