Metrics#

class t3w.AverageMetric(minibatch_metric: IMiniBatchMetric)#

The average of a mini-batch metric across the dataset.

eval() float#

Compute the target metric value from internal statistics.

reset() None#

Clear internal statistics.

update(mb: IMiniBatch) None#

Update internal statistics by evaluating minibatch_metric on the given mini-batch.

synchronize() None#

Synchronize local statistics across distributed processes.

minibatch_metric: IMiniBatchMetric#

The dataset-level metric follows a standard pattern of compositing a mini-batch metric instance; invocation of that metric is delegated to update().

higher_better: bool#

Specifies whether a higher value of the metric implies better performance. This is useful e.g. for metric-based best-model saving. Always specify this class variable explicitly in your subclass definition.

training: bool#
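The eval/update/reset cycle described above can be sketched with a minimal stand-in. This is an illustrative toy, not the actual t3w implementation: the real IMiniBatch and IMiniBatchMetric interfaces are replaced here with hypothetical plain-Python classes, and the mini-batch format (a list of prediction/target pairs) is invented for the example.

```python
class ToyMiniBatchMetric:
    """Stand-in for an IMiniBatchMetric: scores one mini-batch."""

    higher_better = True  # always declare this explicitly in subclasses

    def __call__(self, mb):
        # mb is a list of (prediction, target) pairs; return (mean accuracy, batch size).
        correct = sum(1 for pred, tgt in mb if pred == tgt)
        return correct / len(mb), len(mb)


class ToyAverageMetric:
    """Stand-in for AverageMetric: averages a mini-batch metric across data."""

    def __init__(self, minibatch_metric):
        self.minibatch_metric = minibatch_metric
        self.reset()

    def reset(self):
        # Clear internal statistics.
        self._weighted_sum = 0.0
        self._count = 0

    def update(self, mb):
        # Delegate to the composed mini-batch metric and accumulate.
        value, n = self.minibatch_metric(mb)
        self._weighted_sum += value * n
        self._count += n

    def eval(self):
        # Compute the aggregate metric value from internal statistics.
        return self._weighted_sum / max(self._count, 1)


metric = ToyAverageMetric(ToyMiniBatchMetric())
metric.update([(1, 1), (0, 1)])  # 1 of 2 correct
metric.update([(1, 1), (1, 1)])  # 2 of 2 correct
print(metric.eval())             # 0.75 over all 4 examples
```

Note that the average is weighted by batch size, so the last (possibly smaller) batch of an epoch does not skew the result; whether t3w weights by batch size in exactly this way is an assumption of this sketch.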

class t3w.LearningRate(param_group=0)#

This class reports the current learning rate through the standard metric interface.

This is not a typical metric, but it is a commonly used, task-agnostic one, so it is implemented here early on. It is also a good demonstration of how to use the exposed TopLevelModule as the IMiniBatch.model attribute: since metric computation runs after the user_model's forward(), the model attribute is guaranteed to be available in an IMiniBatchMetric.forward() method.

__init__(param_group=0) None#
Parameters:

param_group (int, optional) – index of the optimizer parameter group whose learning rate is reported. Defaults to 0.

forward(mb: IMiniBatch) MiniBatchFloats#
Parameters:

mb (IMiniBatch) – a mini-batch during training.

Returns:

the learning rate of self.param_group

Return type:

MiniBatchFloats

higher_better: bool#

Specifies whether a higher value of the metric implies better performance. This is useful e.g. for metric-based best-model saving. Always specify this class variable explicitly in your subclass definition.

training: bool#
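The forward() method above presumably reads the learning rate from the optimizer reachable through mb.model. The following sketch mimics that flow with invented stand-ins (ToyOptimizer, ToyModel, ToyMiniBatch are hypothetical; only the `param_groups[i]["lr"]` shape mirrors the real PyTorch optimizer API):

```python
class ToyOptimizer:
    """Stand-in for a torch optimizer exposing param_groups."""

    def __init__(self, lrs):
        self.param_groups = [{"lr": lr} for lr in lrs]


class ToyModel:
    """Stand-in for a TopLevelModule that holds the optimizer."""

    def __init__(self, optimizer):
        self.optimizer = optimizer


class ToyMiniBatch:
    """Stand-in for IMiniBatch; model is set before metrics run."""

    def __init__(self, model):
        self.model = model


class ToyLearningRateMetric:
    """Stand-in for t3w.LearningRate: reports the lr of one parameter group."""

    higher_better = False  # arbitrary for illustration; lr is reported, not optimized

    def __init__(self, param_group=0):
        self.param_group = param_group

    def forward(self, mb):
        # mb.model is available here because metrics run after the model's
        # forward(), as the documentation above notes.
        return mb.model.optimizer.param_groups[self.param_group]["lr"]


opt = ToyOptimizer([1e-3, 1e-4])
mb = ToyMiniBatch(ToyModel(opt))
print(ToyLearningRateMetric(param_group=1).forward(mb))  # 0.0001
```

This pattern (a "metric" that merely reads training state) is handy whenever you want per-step values such as learning rate or gradient norm logged alongside genuine quality metrics.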