The BetaML.Nn Module

BetaML.NnModule
BetaML.Nn module

Implement the functionality required to define an artificial Neural Network, train it with data, predict on new data and assess its performance.

Common types of layers and optimisation algorithms are already provided, but you can define your own by subtyping the Layer and OptimisationAlgorithm abstract types respectively.

The module provides the following types and functions. Use ?[type or function] to access their full signature and detailed documentation:

Model definition:

  • DenseLayer: Classical feed-forward layer with user-defined activation function
  • DenseNoBiasLayer: Classical layer without the bias parameter
  • VectorFunctionLayer: Parameterless layer whose activation function runs over the ensemble of its nodes rather than on each one individually
  • buildNetwork: Build the chained network and define a cost function
  • getParams(nn): Retrieve current weights
  • getGradient(nn): Retrieve the current gradient of the weights
  • setParams!(nn): Update the weights of the network
  • show(nn): Print a representation of the Neural Network

Each layer can use a default activation function, one of the functions provided in the Utils module (relu, tanh, softmax,...) or you can specify your own function. The derivative of the activation function can optionally be provided, in which case training will be quicker, although this difference tends to vanish with bigger datasets. You can alternatively implement your own layers by defining a new type as a subtype of the abstract type Layer. Each user-implemented layer must define the following methods (a minimal sketch follows the list):

  • A suitable constructor
  • forward(layer,x)
  • backward(layer,x,nextGradient)
  • getParams(layer)
  • getGradient(layer,x,nextGradient)
  • setParams!(layer,w)
  • size(layer)
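
For instance, a minimal sketch of a hypothetical element-wise scaling layer (output = w .* input) could look as follows. The constructor form and the assumption that the tuple wrapped by Learnable is accessible through a data field are illustrative only and should be checked against your BetaML version:

using BetaML.Nn
import Base.size
import BetaML.Nn: forward, backward, getParams, getGradient, setParams!

mutable struct ScaleLayer <: Layer
    w::Vector{Float64}                        # one learnable scaling factor per node
end
ScaleLayer(n::Int) = ScaleLayer(ones(n))      # suitable constructor

forward(layer::ScaleLayer,x)  = layer.w .* x                                     # layer output
backward(layer::ScaleLayer,x,nextGradient)    = layer.w .* nextGradient          # dLoss/dInput
getParams(layer::ScaleLayer)  = Learnable((layer.w,))                            # current parameters
getGradient(layer::ScaleLayer,x,nextGradient) = Learnable((x .* nextGradient,))  # dLoss/dw
setParams!(layer::ScaleLayer,w) = (layer.w = w.data[1])   # assumes Learnable stores its tuple in a data field
size(layer::ScaleLayer)       = (length(layer.w),length(layer.w))                # (n input, n output)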

Model training:

  • trainingInfo(nn): Default callback function during training
  • train!(nn): Training function
  • singleUpdate!(θ,▽;optAlg): The parameter update made by the specific optimisation algorithm
  • SGD: The default optimisation algorithm
  • ADAM: A faster moment-based optimisation algorithm (added in v0.2.2)

To define your own optimisation algorithm define a subtype of OptimisationAlgorithm and implement the function singleUpdate!(θ,▽;optAlg) and optionally initOptAlg!(⋅) specific to it.
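
As an illustration, a hypothetical fixed-step gradient descent optimiser could be sketched as below. The exact keyword arguments received by singleUpdate! and the expected return value (here a named tuple with the updated parameters and a stop flag, following the stop=true convention mentioned in the train! notes) are assumptions to verify against your BetaML version:

using BetaML.Nn
import BetaML.Nn: singleUpdate!

struct FixedSGD <: OptimisationAlgorithm
    η::Float64                                # fixed learning rate
end
FixedSGD(;η=0.01) = FixedSGD(η)

function singleUpdate!(θ,▽;optAlg::FixedSGD,kwargs...)
    # θ and ▽ are the current parameters and the average batch gradient (one Learnable
    # per layer); the update below assumes the standard mathematical operations
    # defined on Learnable
    newθ = [θᵢ - optAlg.η * ▽ᵢ for (θᵢ,▽ᵢ) in zip(θ,▽)]
    return (θ=newθ, stop=false)               # assumed return convention
end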

Model predictions and assessment:

  • predict(nn): Return the output given the data
  • loss(nn): Compute avg. network loss on a test set
  • Utils.accuracy(nn): Categorical output accuracy

While high-level functions operating on the dataset expect it to be in the standard format (nRecords × nDimensions matrices), it is customary to represent the chain of a neural network as a flow of column vectors, so all low-level operations (operating on a single datapoint) expect both the input and the output as a column vector.
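
Putting it together, a minimal end-to-end sketch could look like the following. The DenseLayer constructor form and the squaredCost and relu functions (expected from the Utils module) are assumptions of this sketch and may differ in your BetaML version:

using BetaML.Nn, BetaML.Utils

# Toy regression data in the standard (nRecords × nDimensions) format
x = rand(100,2)
y = reshape(2 .* x[:,1] .- x[:,2], 100, 1)

l1   = DenseLayer(2,5,f=relu)                 # assumed constructor form
l2   = DenseLayer(5,1,f=identity)
mynn = buildNetwork([l1,l2], squaredCost, name="Toy regression network")

res  = train!(mynn, x, y, epochs=100, batchSize=8, optAlg=ADAM())
ŷ    = predict(mynn, x)                       # (100 × 1) matrix of predictions
err  = loss(mynn, x, y)                       # average loss on the set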

source

Module Index

Detailed API

BetaML.Nn.ADAMType

ADAM(;η, λ, β₁, β₂, ϵ)

The ADAM algorithm (https://arxiv.org/pdf/1412.6980.pdf), an adaptive moment estimation optimiser.

Fields:

  • η: Learning rate (stepsize, α in the paper), as a function of the current epoch [def: t -> 0.001 (i.e. fixed)]
  • λ: Multiplicative constant to the learning rate [def: 1]
  • β₁: Exponential decay rate for the first moment estimate [range: ∈ [0,1], def: 0.9]
  • β₂: Exponential decay rate for the second moment estimate [range: ∈ [0,1], def: 0.999]
  • ϵ: Epsilon value to avoid division by zero [def: 10^-8]
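
For example, assuming a network mynn and data x, y are already defined, a decaying learning rate can be passed as a function of the epoch:

opt = ADAM(η = t -> 0.001/(1 + 0.01*t), β₁ = 0.9, β₂ = 0.999)
train!(mynn, x, y, epochs=50, optAlg = opt)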
source
BetaML.Nn.DenseLayerType

DenseLayer

Representation of a layer in the network

Fields:

  • w: Weights matrix with respect to the input from the previous layer or data (n × n of the previous layer)
  • wb: Biases (n)
  • f: Activation function
  • df: Derivative of the activation function
source
BetaML.Nn.DenseNoBiasLayerType

DenseNoBiasLayer

Representation of a layer without bias in the network

Fields:

  • w: Weights matrix with respect to the input from the previous layer or data (n × n of the previous layer)
  • f: Activation function
  • df: Derivative of the activation function
source
BetaML.Nn.LearnableType

Learnable(data)

Structure representing the learnable parameters of a layer or its gradient.

The learnable parameters of a layer are given in the form of an N-tuple of Array{Float64,N2} where N2 can change (e.g. we can have a layer with the first parameter being a matrix and the second one being a scalar). We wrap the tuple in its own structure partly for some efficiency gain, but above all to define standard mathematical operations on the gradients without committing "type piracy" with respect to Base tuples.

source
BetaML.Nn.NNType

NN

Representation of a Neural Network

Fields:

  • layers: Array of layers objects
  • cf: Cost function
  • dcf: Derivative of the cost function
  • trained: Control flag for trained networks
source
BetaML.Nn.SGDType

SGD(;η=t -> 1/(1+t), λ=2)

Stochastic Gradient Descent algorithm (default)

Fields:

  • η: Learning rate, as a function of the current epoch [def: t -> 1/(1+t)]
  • λ: Multiplicative constant to the learning rate [def: 2]
source
BetaML.Nn.VectorFunctionLayerType

VectorFunctionLayer

Representation of a (weightless) VectorFunction layer in the network. A vector function layer expects a vector activation function, i.e. a function taking the whole output of the previous layer as input rather than working on a single node as "normal" activation functions do. Useful for example for the SoftMax function.

Fields:

  • nₗ: Number of nodes of the previous layer
  • n: Number of nodes in output
  • f: Activation function (vector)
  • df: Derivative of the (vector) activation function
source
Base.sizeMethod
size(layer)

Get the dimensions of the layer in terms of (dimensions in input, dimensions in output)

Notes:

  • You need to use import Base.size before defining this function for your layer
source
BetaML.Nn.backwardMethod

backward(layer,x,nextGradient)

Compute backpropagation for this layer

Parameters:

  • layer: Worker layer
  • x: Input to the layer
  • nextGradient: Derivative of the overall loss with respect to the input of the next layer (output of this layer)

Return:

  • The evaluated gradient of the loss with respect to this layer's inputs
source
BetaML.Nn.buildNetworkMethod

buildNetwork(layers,cf;dcf,name)

Instantiate a new Feedforward Neural Network

Parameters:

  • layers: Array of layers objects
  • cf: Cost function
  • dcf: Derivative of the cost function [def: nothing]
  • name: Name of the network [def: "Neural Network"]

Notes:

  • Even if the network ends with a single output node, the cost function and its derivative should always expect y and ŷ as column vectors.
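
For example, a user-defined cost function and its derivative, both written to work on column vectors, could be passed as follows (layers l1, l2 defined elsewhere; the argument order ŷ, y is an assumption of this sketch, so check it against the cost functions provided in the Utils module):

mycost(ŷ,y)  = sum((y .- ŷ).^2) / 2           # cost on column vectors
dmycost(ŷ,y) = ŷ .- y                         # derivative with respect to ŷ
mynn = buildNetwork([l1,l2], mycost, dcf=dmycost, name="Custom-cost network")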
source
BetaML.Nn.forwardMethod

forward(layer,x)

Predict the output of the layer given the input

Parameters:

  • layer: Worker layer
  • x: Input to the layer

Return:

  • An Array{T,1} of the prediction (even for a scalar)
source
BetaML.Nn.getGradientMethod

getGradient(layer,x,nextGradient)

Compute backpropagation for this layer

Parameters:

  • layer: Worker layer
  • x: Input to the layer
  • nextGradient: Derivative of the overall loss with respect to the input of the next layer (output of this layer)

Return:

  • The evaluated gradient of the loss with respect to this layer's trainable parameters as a tuple of matrices. It is up to you to decide how to organise this tuple, as long as you are consistent with the getParams() and setParams() functions. Note that starting from BetaML 0.2.2 this tuple needs to be wrapped in its Learnable type.
source
BetaML.Nn.getGradientMethod

getGradient(nn,xbatch,ybatch)

Retrieve the current gradient of the weights (i.e. derivative of the cost with respect to the weights)

Parameters:

  • nn: Worker network
  • xbatch: Input to the network (n,d)
  • ybatch: Label input (n,d)

Notes:

  • The output is a vector of tuples of each layer's input weights and bias weights
source
BetaML.Nn.getGradientMethod

getGradient(nn,x,y)

Retrieve the current gradient of the weights (i.e. derivative of the cost with respect to the weights)

Parameters:

  • nn: Worker network
  • x: Input to the network (d,1)
  • y: Label input (d,1)

Notes:

  • The output is a vector of tuples of each layer's input weights and bias weights
source
BetaML.Nn.getNParamsMethod

getNParams(layer)

Return the number of parameters of a layer.

It doesn't need to be implemented by each layer type, as it uses getParams().

source
BetaML.Nn.getParamsMethod

getParams(layer)

Get the layer's current value of its trainable parameters

Parameters:

  • layer: Worker layer

Return:

  • The current value of the layer's trainable parameters as a tuple of matrices. It is up to you to decide how to organise this tuple, as long as you are consistent with the getGradient() and setParams() functions. Note that starting from BetaML 0.2.2 this tuple needs to be wrapped in its Learnable type.
source
BetaML.Nn.getParamsMethod

getParams(nn)

Retrieve the current weights

Parameters:

  • nn: Worker network

Notes:

  • The output is a vector of tuples of each layer's input weights and bias weights
source
BetaML.Nn.initOptAlg!Method

initOptAlg!(optAlg::ADAM;θ,batchSize,x,y)

Initialize the ADAM algorithm with the parameters m and v as zeros and check parameter bounds

source
BetaML.Nn.initOptAlg!Method

initOptAlg!(optAlg;θ,batchSize,x,y)

Initialize the optimisation algorithm

Parameters:

  • optAlg: The Optimisation algorithm to use
  • θ: Current parameters
  • batchSize: The size of the batch
  • x: The training (input) data
  • y: The training "labels" to match

Notes:

  • Only a few optimisers need this function and consequently override it. By default it does nothing, so if you write your own optimisation algorithm and it doesn't need initialisation, you don't have to override this method
source
BetaML.Nn.lossMethod

loss(fnn,x,y)

Compute avg. network loss on a test set (or a single (1 × d) data point)

Parameters:

  • fnn: Worker network
  • x: Input to the network (n) or (n x d)
  • y: Label input (n) or (n x d)
source
BetaML.Nn.setParams!Method

setParams!(layer,w)

Set the trainable parameters of the layer with the given values

Parameters:

  • layer: Worker layer
  • w: The new parameters to set (Learnable)

Notes:

  • The format of the tuple wrapped by Learnable must be consistent with those of the getParams() and getGradient() functions.
source
BetaML.Nn.setParams!Method

setParams!(nn,w)

Update the weights of the network

Parameters:

  • nn: Worker network
  • w: The new weights to set
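
For example, a hypothetical way to copy the weights of a trained network nn1 into a second network nn2 with an identical structure:

w = getParams(nn1)        # retrieve the current weights of nn1
setParams!(nn2, w)        # set them on nn2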
source
BetaML.Nn.showMethod

show(nn)

Print a representation of the Neural Network (layers, dimensions..)

Parameters:

  • nn: Worker network
source
BetaML.Nn.singleUpdate!Method

singleUpdate!(θ,▽;nEpoch,nBatch,batchSize,xbatch,ybatch,optAlg)

Perform the parameters update based on the average batch gradient.

Parameters:

  • θ: Current parameters
  • ▽: Average gradient of the batch
  • nEpoch: Count of current epoch
  • nBatch: Count of current batch
  • nBatches: Number of batches per epoch
  • xbatch: Data associated to the current batch
  • ybatch: Labels associated to the current batch
  • optAlg: The Optimisation algorithm to use for the update

Notes:

  • This function is overridden so that each optimisation algorithm implements its own version
  • Most parameters are not used by any optimisation algorithm. They are provided to support the largest possible class of optimisation algorithms
  • Some optimisation algorithms may change their internal structure in this function
source
BetaML.Nn.train!Method

train!(nn,x,y;epochs,batchSize,sequential,optAlg,verbosity,cb)

Train a neural network with the given x,y data

Parameters:

  • nn: Worker network
  • x: Training input to the network (records x dimensions)
  • y: Label input (records x dimensions)
  • epochs: Number of passages over the training set [def: 100]
  • batchSize: Size of each individual batch [def: min(size(x,1),32)]
  • sequential: Whether to run all the data sequentially instead of in random order [def: false]
  • optAlg: The optimisation algorithm to update the gradient at each batch [def: ADAM()]
  • verbosity: A verbosity parameter regulating the trade-off between information and efficiency [def: STD]
  • cb: A callback to provide information. [def: trainingInfo]

Return:

  • A named tuple with the following information
    • epochs: Number of epochs actually ran
    • ϵ_epochs: The average error on each epoch (if verbosity > LOW)
    • θ_epochs: The parameters at each epoch (if verbosity > STD)

Notes:

  • Currently supported algorithms:
    • SGD, the classical (Stochastic) Gradient Descent optimiser
    • ADAM, an adaptive moment estimation optimiser
  • Look at the individual optimisation algorithm (?[Name OF THE ALGORITHM]) for info on its parameters, e.g. ?SGD for the Stochastic Gradient Descent.
  • You can implement your own optimisation algorithm using a subtype of OptimisationAlgorithm and implementing its constructor and the update function singleUpdate!(⋅) (type ?singleUpdate! for details).
  • You can implement your own callback function, although the one provided by default is already pretty generic (its output depends on the verbosity parameter). See trainingInfo for information on the cb parameters.
  • Both the callback function and the singleUpdate! function of the optimisation algorithm can be used to stop the training algorithm, respectively returning true or stop=true.
  • The verbosity can be set to any of NONE,LOW,STD,HIGH,FULL.
  • The update is done computing the average gradient for each batch and then calling singleUpdate! to let the optimisation algorithm perform the parameters update
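
For example, a sketch of inspecting the returned named tuple, assuming data xtrain, ytrain and a network mynn are defined and that the verbosity constants are exported with the module:

res = train!(mynn, xtrain, ytrain, epochs=300, batchSize=16, optAlg=ADAM(), verbosity=HIGH)
println("Epochs actually run: ", res.epochs)
println("Average error on the last epoch: ", res.ϵ_epochs[end])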
source
BetaML.Nn.OptimisationAlgorithmType
OptimisationAlgorithm

Abstract type representing an Optimisation algorithm.

Currently supported algorithms:

  • SGD (Stochastic) Gradient Descent

See ?[Name OF THE ALGORITHM] for their details

You can implement your own optimisation algorithm using a subtype of OptimisationAlgorithm and implementing its constructor and the update function singleUpdate!(⋅) (type ?singleUpdate! for details).

source
BetaML.Nn.trainingInfoMethod

trainingInfo(nn,x,y;n,batchSize,epochs,verbosity,nEpoch,nBatch)

Default callback function to display information during training, depending on the verbosity level

Parameters:

  • nn: Worker network
  • x: Batch input to the network (batchSize,d)
  • y: Batch label input (batchSize,d)
  • n: Size of the full training set
  • nBatches: Number of batches per epoch
  • epochs: Number of epochs defined for the training
  • verbosity: Verbosity level defined for the training (NONE,LOW,STD,HIGH,FULL)
  • nEpoch: Counter of the current epoch
  • nBatch: Counter of the current batch

Notes:

  • Reporting of the error (loss of the network) is expensive. Use verbosity=NONE for better performance
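
A hypothetical custom callback with a compatible keyword interface could be sketched as below; per the train! notes, returning true stops the training. The exact set of keyword arguments passed by train! is an assumption here, hence the kwargs... catch-all:

function myCallBack(nn,x,y;nEpoch=0,nBatch=0,nBatches=1,kwargs...)
    batchLoss = loss(nn,x,y)                  # loss on the current batch
    (nBatch == nBatches) && println("Epoch $nEpoch - last batch loss: $batchLoss")
    return batchLoss < 1e-4                   # returning true stops the training
end
train!(mynn, x, y, cb=myCallBack)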
source