Class OptimizationHandlable
(in module paramz.core.parameter_core)


This enables optimization handles on an object, as done in GPy 0.4.

`..._optimizer_copy_transformed`: makes sure the transformations, constraints, etc. are handled.
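
To make this concrete, here is a minimal usage sketch (the `Model` class and the parameter name `w` are invented for illustration), showing how a paramz object exposes optimization handles such as constraining and fixing:

    import numpy as np
    from paramz import Parameterized, Param

    class Model(Parameterized):
        def __init__(self):
            super(Model, self).__init__(name='model')
            self.w = Param('w', np.ones(3))   # a 3-vector parameter
            self.link_parameter(self.w)

    m = Model()
    m.w.constrain_positive()  # handled through Transformations
    m.w.fix()                 # fixed values are hidden from the optimizer
    m.w.unfix()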

Instance Methods

__init__(self, name, default_constraint=None, *a, **kw)
    x.__init__(...) initializes x; see help(type(x)) for signature

_trigger_params_changed(self, trigger_parent=True)
    First tell all children to update, then update yourself.

_size_transformed(self)
    As fixes are not passed to the optimiser, the size of the model for the optimiser is the size of all parameters minus the size of the fixes.

_transform_gradients(self, g)
    Transform the gradients by multiplying each gradient by its constraint's gradient factor.

parameter_names(self, add_self=False, adjust_for_printing=False, recursive=True, intermediate=False)
    Get the names of all parameters of this model or parameter.

parameter_names_flat(self, include_fixed=False)
    Return the flattened parameter names for all subsequent parameters of this parameter.

randomize(self, rand_gen=None, *args, **kwargs)
    Randomize the model.

_propagate_param_grad(self, parray, garray)
    For propagating the param_array and gradient_array.

_connect_parameters(self)

Inherited from constrainable.Constrainable: constrain, constrain_bounded, constrain_fixed, constrain_negative, constrain_positive, fix, unconstrain, unconstrain_bounded, unconstrain_fixed, unconstrain_negative, unconstrain_positive, unfix

Inherited from indexable.Indexable: __setstate__, add_index_operation, remove_index_operation

Inherited from nameable.Nameable: hierarchy_name

Inherited from gradcheckable.Gradcheckable: checkgrad

Inherited from gradcheckable.Gradcheckable (private): _checkgrad

Inherited from pickleable.Pickleable: __deepcopy__, __getstate__, copy, pickle

Inherited from parentable.Parentable: has_parent

Inherited from parentable.Parentable (private): _notify_parent_change

Inherited from updateable.Updateable: toggle_update, trigger_update, update_model, update_toggle

Inherited from observable.Observable: add_observer, change_priority, notify_observers, remove_observer, set_updates

Inherited from object: __delattr__, __format__, __getattribute__, __hash__, __new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__, __str__, __subclasshook__

Class Variables

Inherited from parentable.Parentable: _parent_, _parent_index_

Properties

optimizer_array
    Array for the optimizer to work on.
num_params
    Return the number of parameters of this parameter_handle.
gradient_full
    Note to users: This does not return the gradient in the right shape! Use self.gradient for the right gradient array.

Inherited from constrainable.Constrainable: is_fixed

Inherited from nameable.Nameable: name

Inherited from parentable.Parentable: _highest_parent_

Inherited from object: __class__

Method Details

__init__(self, name, default_constraint=None, *a, **kw)
(Constructor)

x.__init__(...) initializes x; see help(type(x)) for signature

Overrides: object.__init__ (inherited documentation)

_trigger_params_changed(self, trigger_parent=True)

First tell all children to update, then update yourself.

If trigger_parent is True, we will tell the parent, otherwise not.
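
As a sketch of where this hook fires (reusing the hypothetical `Model` from the class description above): an in-place write to a parameter starts the cascade, which ends in the model's `parameters_changed` callback.

    import numpy as np
    from paramz import Parameterized, Param

    class Model(Parameterized):
        def __init__(self):
            super(Model, self).__init__(name='model')
            self.w = Param('w', np.ones(3))
            self.link_parameter(self.w)

        def parameters_changed(self):
            # runs at the end of the update cascade started by
            # _trigger_params_changed
            print('parameters changed')

    m = Model()
    m.w[:] = 2.0  # in-place write: children update first, then self, then parent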

parameter_names(self, add_self=False, adjust_for_printing=False, recursive=True, intermediate=False)

Get the names of all parameters of this model or parameter. It starts
from the parameterized object you are calling this method on.

Note: This does not unravel multidimensional parameters,
      use parameter_names_flat to unravel parameters!

:param bool add_self: whether to add the own name in front of names
:param bool adjust_for_printing: whether to call `adjust_name_for_printing` on names
:param bool recursive: whether to traverse through hierarchy and append leaf node names
:param bool intermediate: whether to add intermediate names, that is parameterized objects
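
A short sketch, assuming the hypothetical `Model` from above with a single 3-vector parameter `w` (the exact strings depend on the paramz version):

    m = Model()
    m.parameter_names()               # e.g. ['w']
    m.parameter_names(add_self=True)  # e.g. ['model.w']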

parameter_names_flat(self, include_fixed=False)

Return the flattened parameter names for all subsequent parameters
of this parameter. We do not include the name for self here!

If you want the names for fixed parameters as well in this list,
set include_fixed to True.

:param bool include_fixed: whether to include fixed names here.
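
Continuing the sketch, this unravels `w` to one name per scalar entry (the exact name format depends on the paramz version):

    m.parameter_names_flat()                    # three entries, one per scalar of w
    m.w.fix()
    m.parameter_names_flat()                    # fixed entries are dropped
    m.parameter_names_flat(include_fixed=True)  # fixed entries included again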

randomize(self, rand_gen=None, *args, **kwargs)

Randomize the model. Draws from rand_gen if one is given, otherwise from a standard normal N(0, 1).

:param rand_gen: np random number generator which takes args and kwargs
:param float loc: loc parameter for the random number generator
:param float scale: scale parameter for the random number generator
:param args, kwargs: will be passed through to the random number generator
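
For example (a sketch; any numpy generator accepting a size keyword should work):

    import numpy as np
    m.randomize()                                      # standard normal draws
    m.randomize(np.random.normal, loc=0.0, scale=0.1)  # custom loc/scale
    m.randomize(np.random.uniform, low=0.0, high=1.0)  # a different generator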

_propagate_param_grad(self, parray, garray)

For propagating the param_array and gradient_array. This ensures that each child's arrays are in-memory views into this object's arrays.

1. Connect the param_array of each child to self.param_array.
2. Tell all children to propagate further.
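
The in-memory-view mechanism can be illustrated with plain numpy (a conceptual sketch, not paramz internals):

    import numpy as np
    parent = np.zeros(5)
    child = parent[2:4]  # a view into the parent's memory, not a copy
    child[:] = 7.0       # writing through the child ...
    print(parent)        # ... shows up in the parent: [0. 0. 7. 7. 0.]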


Property Details

optimizer_array

Array for the optimizer to work on. This array always lives in the space of the optimizer; thus it is untransformed: the parameters have been mapped through their Transformations into the space the optimizer works in.

Setting this array makes sure the transformed parameters of this model are set accordingly. It has to be set with an array retrieved from this property, as operations such as fixing will resize the array.

The optimizer should only interact with this array, so that the transformations remain intact.

Get Method:
optimizer_array(self) - Array for the optimizer to work on.
Set Method:
optimizer_array(self, p) - Make sure the optimizer copy does not get touched; thus, we only set the values *inside*, not the array itself.
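
A typical optimizer step reads and writes this array (a sketch, continuing from the `Model` example above):

    x = m.optimizer_array.copy()  # untransformed values, fixes removed
    x += 0.01                     # step in the optimizer's space
    m.optimizer_array = x         # only the values *inside* are set; the
                                  # transformed model parameters update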

num_params

Return the number of parameters of this parameter_handle. Param objects will always return 0.

Get Method:
num_params(self) - Return the number of parameters of this parameter_handle.

gradient_full

Note to users: This does not return the gradient in the right shape! Use self.gradient for the right gradient array.

To work on the gradient array in memory, use this as the gradient handle: it exposes the true, flat in-memory gradient array.

Get Method:
gradient_full(self) - Note to users: This does not return the gradient in the right shape! Use self.gradient for the right gradient array.
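
A sketch of the distinction, continuing from the `Model` example above:

    m.w.gradient.shape  # shaped like the parameter itself, e.g. (3,)
    m.w.gradient_full   # flat handle; use it only when you need the raw
                        # in-memory gradient array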