Package paramz :: Module model :: Class Model

Class Model

source code


Instance Methods
 
__init__(self, name)
x.__init__(...) initializes x; see help(type(x)) for signature
source code
 
optimize(self, optimizer=None, start=None, messages=False, max_iters=1000, ipython_notebook=True, clear_after_finish=False, **kwargs)
Optimize the model using self.log_likelihood and self.log_likelihood_gradient, as well as self.priors.
source code
 
optimize_restarts(self, num_restarts=10, robust=False, verbose=True, parallel=False, num_processes=None, **kwargs)
Perform random restarts of the model, and set the model to the best seen solution.
source code
 
objective_function(self)
The objective function for the given algorithm.
source code
 
objective_function_gradients(self)
The gradients for the objective function for the given algorithm.
source code
 
_grads(self, x)
Gets the gradients from the likelihood and the priors.
source code
 
_objective(self, x)
The objective function passed to the optimizer.
source code
 
_objective_grads(self, x) source code
 
_checkgrad(self, target_param=None, verbose=False, step=1e-06, tolerance=0.001, df_tolerance=1e-12)
Check the gradient of the model by comparing to a numerical estimate.
source code
 
_repr_html_(self)
Representation of the model in html for notebook display.
source code
 
__str__(self, VT100=True)
str(x)
source code

Inherited from parameterized.Parameterized: __getitem__, __setattr__, __setitem__, __setstate__, build_pydot, copy, get_property_string, grep_param_names, link_parameter, link_parameters, unlink_parameter

Inherited from core.parameter_core.Parameterizable: disable_caching, enable_caching, initialize_parameter, parameters_changed, save, traverse, traverse_parents

Inherited from core.parameter_core.OptimizationHandlable: parameter_names, parameter_names_flat, randomize

Inherited from core.constrainable.Constrainable: constrain, constrain_bounded, constrain_fixed, constrain_negative, constrain_positive, fix, unconstrain, unconstrain_bounded, unconstrain_fixed, unconstrain_negative, unconstrain_positive, unfix

Inherited from core.indexable.Indexable: add_index_operation, remove_index_operation

Inherited from core.nameable.Nameable: hierarchy_name

Inherited from core.gradcheckable.Gradcheckable: checkgrad

Inherited from core.pickleable.Pickleable: __deepcopy__, __getstate__, pickle

Inherited from core.parentable.Parentable: has_parent

Inherited from core.updateable.Updateable: toggle_update, trigger_update, update_model, update_toggle

Inherited from core.observable.Observable: add_observer, change_priority, notify_observers, remove_observer, set_updates

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __sizeof__, __subclasshook__

Class Variables
  _fail_count = 0
  _allowed_failures = 10

Inherited from core.parentable.Parentable: _parent_, _parent_index_

Properties

Inherited from parameterized.Parameterized: flattened_parameters

Inherited from parameterized.Parameterized (private): _description_str

Inherited from core.parameter_core.Parameterizable: gradient, num_params, param_array, unfixed_param_array

Inherited from core.parameter_core.OptimizationHandlable: gradient_full, optimizer_array

Inherited from core.constrainable.Constrainable: is_fixed

Inherited from core.nameable.Nameable: name

Inherited from core.parentable.Parentable: _highest_parent_

Inherited from object: __class__

Method Details

__init__(self, name)
(Constructor)

source code 

x.__init__(...) initializes x; see help(type(x)) for signature

Overrides: object.__init__
(inherited documentation)

optimize(self, optimizer=None, start=None, messages=False, max_iters=1000, ipython_notebook=True, clear_after_finish=False, **kwargs)

source code 

Optimize the model using self.log_likelihood and self.log_likelihood_gradient, as well as self.priors.

kwargs are passed to the optimizer. They can be:

:param max_iters: maximum number of function evaluations
:type max_iters: int
:param messages: whether to display messages during optimisation
:type messages: bool
:param optimizer: which optimizer to use (defaults to self.preferred_optimizer)
:type optimizer: string
:param ipython_notebook: whether to use ipython notebook widgets or not
:type ipython_notebook: bool

Valid optimizers are:

  • 'scg': scaled conjugate gradient method, recommended for stability. See also GPy.inference.optimization.scg
  • 'fmin_tnc': truncated Newton method (see scipy.optimize.fmin_tnc)
  • 'simplex': the Nelder-Mead simplex method (see scipy.optimize.fmin)
  • 'lbfgsb': the l-bfgs-b method (see scipy.optimize.fmin_l_bfgs_b)
  • 'lbfgs': the bfgs method (see scipy.optimize.fmin_bfgs)
  • 'sgd': stochastic gradient descent (see scipy.optimize.sgd). For experts only!
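For example, a minimal call might look like the following sketch (assuming `m` is an instance of a Model subclass that implements objective_function and its gradients, such as the hypothetical Quadratic model sketched under objective_function_gradients below)::

    m.optimize(optimizer='lbfgsb', messages=True, max_iters=500)
    print(m)                       # parameter table after optimisation
    print(m.objective_function())  # final (minimised) objective value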

optimize_restarts(self, num_restarts=10, robust=False, verbose=True, parallel=False, num_processes=None, **kwargs)

source code 

Perform random restarts of the model, and set the model to the best seen solution.

If the robust flag is set, exceptions raised during optimizations will be handled silently. If _all_ runs fail, the model is reset to the existing parameter values.

**kwargs are passed to the optimizer.

:param num_restarts: number of restarts to use (default 10)
:type num_restarts: int
:param robust: whether to handle exceptions silently or not (default False)
:type robust: bool
:param parallel: whether to run each restart as a separate process. It relies on the multiprocessing module.
:type parallel: bool
:param num_processes: number of workers in the multiprocessing pool
:type num_processes: int
:param max_f_eval: maximum number of function evaluations
:type max_f_eval: int
:param max_iters: maximum number of iterations
:type max_iters: int
:param messages: whether to display messages during optimisation
:type messages: bool

.. note::

   If num_processes is None, the number of workers in the
   multiprocessing pool is automatically set to the number of processors
   on the current machine.
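
For example (a sketch; `m` again stands for any Model subclass instance)::

    # keep the best of 5 restarts; swallow failures in individual runs
    m.optimize_restarts(num_restarts=5, robust=True, parallel=False, verbose=True)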

objective_function(self)

source code 

The objective function for the given algorithm.

This function is the true objective, which is to be minimized. Note that all parameters are already set and in place, so you just need to return the objective value here.

For probabilistic models this is the negative log_likelihood (including the MAP prior), so we return it here. If your model is not probabilistic, just return your objective to minimize here!

objective_function_gradients(self)

source code 

The gradients for the objective function for the given algorithm. The gradients are w.r.t. the *negative* objective function, as this framework works with *negative* log-likelihoods as a default.

You can find the gradient for the parameters in self.gradient at all times. This is where the gradients for the parameters are stored.

This is the gradient of the true objective, which is to be minimized. Note that all parameters are already set and in place, so you just need to return the gradient here.

For probabilistic models this is the gradient of the negative log_likelihood (including the MAP prior), so we return it here. If your model is not probabilistic, just return your *negative* gradient here!
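
As an illustration of the contract described above, a toy subclass might look like the following sketch. The class name Quadratic, the parameter name 'x' and the target value 3.0 are invented for this example, and the imports assume the package's usual top-level exports (otherwise use paramz.model.Model and paramz.param.Param)::

    import numpy as np
    from paramz import Model, Param  # assumed top-level exports

    class Quadratic(Model):
        """Toy model: the objective is (x - 3)^2, minimised at x = 3."""
        def __init__(self, name='quadratic'):
            super(Quadratic, self).__init__(name=name)
            self.x = Param('x', np.array([0.]))
            self.link_parameter(self.x)

        def objective_function(self):
            # scalar value the optimizer minimises
            return float(np.sum((self.x - 3.)**2))

        def objective_function_gradients(self):
            # gradients of the objective w.r.t. all linked parameters,
            # as stored in self.gradient by parameters_changed below
            return self.gradient

        def parameters_changed(self):
            # keep the analytic gradient up to date whenever parameters move
            self.x.gradient[:] = 2. * (self.x - 3.)

    m = Quadratic()
    m.optimize('lbfgsb')
    print(m.x)  # should be close to 3.0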

_grads(self, x)

source code 

Gets the gradients from the likelihood and the priors.

Failures are handled robustly. The algorithm will try several times to return the gradients, and will raise the original exception if the gradients cannot be computed.

:param x: the parameters of the model
:type x: np.array

_objective(self, x)

source code 

The objective function passed to the optimizer. It combines the likelihood and the priors.

Failures are handled robustly. The algorithm will try several times to return the objective, and will raise the original exception if the objective cannot be computed.

:param x: the parameters of the model
:type x: np.array

_checkgrad(self, target_param=None, verbose=False, step=1e-06, tolerance=0.001, df_tolerance=1e-12)

source code 

Check the gradient of the model by comparing to a numerical
estimate.  If the verbose flag is passed, individual
components are tested (and printed).

:param verbose: If True, print a "full" checking of each parameter
:type verbose: bool
:param step: The size of the step around which to linearise the objective
:type step: float (default 1e-6)
:param tolerance: the tolerance allowed (see note)
:type tolerance: float (default 1e-3)

Note:
   The gradient is considered correct if the ratio of the analytical
   and numerical gradients is within <tolerance> of unity.

   The *dF_ratio* indicates the limit of numerical accuracy of the numerical gradients.
   If it is too small, e.g., smaller than 1e-12, the numerical gradients are usually
   not accurate enough for the tests (shown in blue).

Overrides: core.gradcheckable.Gradcheckable._checkgrad
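
A short sketch, reusing the hypothetical Quadratic model from the objective_function_gradients example above; user code normally calls the public checkgrad() wrapper inherited from Gradcheckable, which dispatches to this method::

    m = Quadratic()
    assert m.checkgrad()       # overall ratio test against the numerical estimate
    m.checkgrad(verbose=True)  # prints a per-parameter table of analytic/numerical ratios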

_repr_html_(self)

source code 

Representation of the model in html for notebook display.

Overrides: parameterized.Parameterized._repr_html_

__str__(self, VT100=True)
(Informal representation operator)

source code 

str(x)

Overrides: object.__str__
(inherited documentation)