The BetaML.Nn Module
BetaML.Nn — Module
BetaML.Nn module
Implements the functionality required to define an artificial neural network, train it with data, forecast data and assess its performance.
Common types of layers and optimisation algorithms are already provided, but you can define your own by subtyping the AbstractLayer and OptimisationAlgorithm abstract types, respectively.
The module provides the following types and functions. Use ?[type or function] to access their full signature and detailed documentation:
Model definition:
- DenseLayer: Classical feed-forward layer with user-defined activation function
- DenseNoBiasLayer: Classical layer without the bias parameter
- VectorFunctionLayer: Parameterless layer whose activation function runs over the ensemble of its nodes rather than on each one individually
- buildNetwork: Build the chained network and define a cost function
- getParams(nn): Retrieve the current weights
- getGradient(nn): Retrieve the current gradient of the weights
- setParams!(nn): Update the weights of the network
- show(nn): Print a representation of the Neural Network
Each layer can use a default activation function, one of the functions provided in the Utils module (relu, tanh, softmax, ...), or you can specify your own function. The derivative of the activation function can optionally be provided, in which case training will be quicker, although this difference tends to vanish with bigger datasets. You can alternatively implement your own layer by defining a new type as a subtype of the abstract type AbstractLayer. Each user-implemented layer must define the following methods (a minimal sketch follows the list):
- A suitable constructor
- forward(layer,x)
- backward(layer,x,nextGradient)
- getParams(layer)
- getGradient(layer,x,nextGradient)
- setParams!(layer,w)
- size(layer)
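As an illustration, a minimal sketch of a custom layer implementing this interface is given below. It assumes that AbstractLayer and Learnable are accessible from BetaML.Nn, that the interface functions can be extended by importing them as shown, and that the tuple wrapped by Learnable can be reached through its data field; check ?Learnable and the method docstrings in the Detailed API for the exact contract.

    using BetaML.Nn
    import Base: size
    import BetaML.Nn: forward, backward, getParams, getGradient, setParams!

    # Hypothetical "scale" layer: multiplies each input node by a learnable coefficient
    mutable struct ScaleLayer <: AbstractLayer
        w::Vector{Float64}
    end
    ScaleLayer(n::Integer) = ScaleLayer(ones(n))

    forward(layer::ScaleLayer, x) = layer.w .* x
    # Gradient of the loss with respect to this layer's input
    backward(layer::ScaleLayer, x, nextGradient) = layer.w .* nextGradient
    getParams(layer::ScaleLayer) = Learnable((layer.w,))
    # Gradient of the loss with respect to the layer's own parameters, wrapped in Learnable
    getGradient(layer::ScaleLayer, x, nextGradient) = Learnable((x .* nextGradient,))
    # Assumes the tuple wrapped by Learnable is stored in a `data` field
    setParams!(layer::ScaleLayer, w) = (layer.w .= w.data[1])
    size(layer::ScaleLayer) = (length(layer.w), length(layer.w))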
Model training:
- trainingInfo(nn): Default callback function during training
- train!(nn): Training function
- singleUpdate!(θ,▽;optAlg): The parameter update made by the specific optimisation algorithm
- SGD: The default optimisation algorithm
- ADAM: A faster moment-based optimisation algorithm (added in v0.2.2)
To define your own optimisation algorithm, define a subtype of OptimisationAlgorithm and implement the function singleUpdate!(θ,▽;optAlg) and, if needed, initOptAlg!(⋅) specific to it, as in the sketch below.
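As an illustration, here is a hedged sketch of a custom optimiser. Note that Julia dispatches only on positional arguments, so the sketch assumes the optimiser object is received positionally; check ?singleUpdate! for the exact signature and return convention before relying on it.

    using BetaML.Nn
    import BetaML.Nn: singleUpdate!

    # Hypothetical optimiser: plain gradient descent with a fixed step size
    struct FixedStepGD <: OptimisationAlgorithm
        η::Float64
    end

    function singleUpdate!(θ, ▽, optAlg::FixedStepGD; nEpoch, nBatch, nBatches, xbatch, ybatch)
        for i in eachindex(θ)
            # θ and ▽ are per-layer Learnable structures, assumed to support this arithmetic
            θ[i] = θ[i] - optAlg.η * ▽[i]
        end
        return (θ=θ, stop=false)   # assumed return convention: stop=true halts the training
    end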
Model predictions and assessment:
- predict(nn): Return the output given the data
- loss(nn): Compute the average network loss on a test set
- Utils.accuracy(ŷ,y): Categorical output accuracy
While high-level functions operating on the dataset expect it to be in the standard format (nRecords × nDimensions matrices), it is customary to represent the chain of a neural network as a flow of column vectors, so all low-level operations (operating on a single data point) expect both the input and the output as column vectors.
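Putting the pieces together, a hedged end-to-end sketch (assuming the constructor keywords shown in the Detailed API below and squaredCost/relu from BetaML.Utils) could look like this:

    using BetaML.Nn, BetaML.Utils
    using Random

    # Toy regression data in the standard (nRecords × nDimensions) format
    x = rand(MersenneTwister(123), 100, 2)
    y = reshape(2 .* x[:,1] .- x[:,2], 100, 1)

    l1 = DenseLayer(2, 3, f=relu)                 # 2 inputs → 3 hidden nodes
    l2 = DenseLayer(3, 1, f=identity)             # 3 hidden nodes → 1 output
    nn = buildNetwork([l1, l2], squaredCost, name="Toy regressor")

    train!(nn, x, y, epochs=100, batchSize=8, optAlg=ADAM())

    ŷ = predict(nn, x)                            # (100 × 1) matrix of predictions
    println(loss(nn, x, y))                       # average loss on the data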
Module Index
BetaML.Nn.ADAM
BetaML.Nn.DenseLayer
BetaML.Nn.DenseNoBiasLayer
BetaML.Nn.Learnable
BetaML.Nn.NN
BetaML.Nn.OptimisationAlgorithm
BetaML.Nn.RNNLayer
BetaML.Nn.SGD
BetaML.Nn.ScalarFunctionLayer
BetaML.Nn.VectorFunctionLayer
BetaML.Nn.backward
BetaML.Nn.buildNetwork
BetaML.Nn.forward
BetaML.Nn.getGradient
BetaML.Nn.getGradient
BetaML.Nn.getGradient
BetaML.Nn.getNParams
BetaML.Nn.getNParams
BetaML.Nn.getParams
BetaML.Nn.getParams
BetaML.Nn.initOptAlg!
BetaML.Nn.initOptAlg!
BetaML.Nn.loss
BetaML.Nn.setParams!
BetaML.Nn.setParams!
BetaML.Nn.show
BetaML.Nn.singleUpdate!
BetaML.Nn.train!
BetaML.Nn.trainingInfo
Detailed API
BetaML.Nn.ADAM — Type
ADAM(;η, λ, β₁, β₂, ϵ)
The ADAM algorithm, an adaptive moment estimation optimiser.
Fields:
- η: Learning rate (stepsize, α in the paper), as a function of the current epoch [def: t -> 0.001 (i.e. fixed)]
- λ: Multiplicative constant to the learning rate [def: 1]
- β₁: Exponential decay rate for the first moment estimate [range: ∈ [0,1], def: 0.9]
- β₂: Exponential decay rate for the second moment estimate [range: ∈ [0,1], def: 0.999]
- ϵ: Epsilon value to avoid division by zero [def: 10^-8]
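For example, since the constructor accepts these fields as keyword arguments, the fixed default stepsize can be replaced with a decaying one:

    # ADAM with a learning rate that decays with the epoch count
    opt = ADAM(η = t -> 0.001 / (1 + 0.01 * t), β₁ = 0.9, β₂ = 0.999)
    # train!(nn, x, y, optAlg = opt)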
BetaML.Nn.DenseLayer — Type
DenseLayer
Representation of a layer in the network
Fields:
- w: Weights matrix with respect to the input from the previous layer or data (n × n of the previous layer)
- wb: Biases (n)
- f: Activation function
- df: Derivative of the activation function
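A hedged construction sketch, assuming the constructor takes the input and output sizes followed by the f/df keywords listed above, and that drelu is available from BetaML.Utils:

    using BetaML.Nn, BetaML.Utils
    # Dense layer mapping 4 inputs to 2 outputs, with an explicit activation derivative
    l = DenseLayer(4, 2, f=relu, df=drelu)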
BetaML.Nn.DenseNoBiasLayer — Type
DenseNoBiasLayer
Representation of a layer without bias in the network
Fields:
- w: Weights matrix with respect to the input from the previous layer or data (n × n of the previous layer)
- f: Activation function
- df: Derivative of the activation function
BetaML.Nn.Learnable — Type
Learnable(data)
Structure representing the learnable parameters of a layer or its gradient.
The learnable parameters of a layer are given in the form of an N-tuple of Array{Float64,N2} where N2 can change (e.g. we can have a layer with the first parameter being a matrix and the second one being a scalar). We wrap the tuple in its own structure partly for some efficiency gain, but above all to define standard mathematical operations on the gradients without committing "type piracy" with respect to Base tuples.
BetaML.Nn.NN — Type
NN
Representation of a Neural Network
Fields:
- layers: Array of layer objects
- cf: Cost function
- dcf: Derivative of the cost function
- trained: Control flag for trained networks
BetaML.Nn.OptimisationAlgorithm — Type
OptimisationAlgorithm
Abstract type representing an Optimisation algorithm.
Currently supported algorithms:
- SGD: (Stochastic) Gradient Descent
- ADAM: The ADAM algorithm, an adaptive moment estimation optimiser
See ?[name of the algorithm] for their details.
You can implement your own optimisation algorithm using a subtype of OptimisationAlgorithm and implementing its constructor and the update function singleUpdate!(⋅) (type ?singleUpdate! for details).
BetaML.Nn.RNNLayer — Type
RNNLayer
Representation of a recurrent layer in the network
Fields:
- wx: Weights matrix with respect to the input from data (n by n_input)
- ws: Weights matrix with respect to the layer state (n × n)
- wb: Biases (n)
- f: Activation function
- df: Derivative of the activation function
- s: State
BetaML.Nn.SGD — Type
SGD(;η=t -> 1/(1+t), λ=2)
Stochastic Gradient Descent algorithm (default)
Fields:
- η: Learning rate, as a function of the current epoch [def: t -> 1/(1+t)]
- λ: Multiplicative constant to the learning rate [def: 2]
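For example, using the keyword arguments of the constructor shown above, the default decay can be made slower:

    # SGD with a slower learning-rate decay and no extra multiplier
    opt = SGD(η = t -> 1 / (1 + 0.01 * t), λ = 1)
    # train!(nn, x, y, optAlg = opt)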
BetaML.Nn.ScalarFunctionLayer — Type
ScalarFunctionLayer
Representation of a ScalarFunction layer in the network. ScalarFunctionLayer applies the activation function directly to the output of the previous layer (i.e., without passing through a weight matrix), but using an optional learnable parameter (an array) as its second argument, similarly to VectorFunctionLayer. Differently from VectorFunctionLayer, the function is applied scalarwise to each node.
The number of nodes in input must be set to the same as in the previous layer.
Fields:
- w: Weights (parameter) array passed as the second argument to the activation function (if not empty)
- n: Number of nodes in output (≡ number of nodes in input)
- f: Activation function (vector)
- dfx: Derivative of the (vector) activation function with respect to the layer inputs (x)
- dfw: Derivative of the (vector) activation function with respect to the optional learnable weights (w)
Notes:
- The output size of this layer is the same as that of the previous layer.
BetaML.Nn.VectorFunctionLayer — Type
VectorFunctionLayer
Representation of a VectorFunction layer in the network. A vector function layer expects a vector activation function, i.e. a function taking the whole output of the previous layer as input rather than working on a single node as "normal" activation functions do. Useful for example with the SoftMax function in classification or with the pool1D function to implement a "pool" layer in 1 dimension. By default it is weightless, i.e. it doesn't apply any transformation to the output coming from the previous layer except the activation function. However, by passing the parameter wsize (a tuple or array - tested only in 1D) you can pass a learnable parameter to the activation function too. It is your responsibility to make sure the activation function accepts only X or also this learnable array (as a second argument). The number of nodes in input must be set to the same as in the previous layer (and, if you are using this layer for classification, to the number of classes, i.e. the previous layer must be set equal to the number of classes in the predictions).
Fields:
- w: Weights (parameter) array passed as the second argument to the activation function (if not empty)
- nₗ: Number of nodes in input (i.e. length of the previous layer)
- n: Number of nodes in output (automatically inferred in the constructor)
- f: Activation function (vector)
- dfx: Derivative of the (vector) activation function with respect to the layer inputs (x)
- dfw: Derivative of the (vector) activation function with respect to the optional learnable weights (w)
Notes:
- The output size of this layer is given by the size of the output of the function, which is not necessarily the same as that of the previous layer.
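A hedged classification sketch, assuming the constructor takes the number of input nodes plus the f keyword and that softmax and crossEntropy are available from BetaML.Utils:

    using BetaML.Nn, BetaML.Utils
    # 3-class classifier: the layer feeding the softmax has one node per class
    l1 = DenseLayer(4, 3, f=relu)
    l2 = VectorFunctionLayer(3, f=softmax)     # output size inferred in the constructor
    nn = buildNetwork([l1, l2], crossEntropy)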
Base.size — Method
size(layer)
Get the dimensions of the layer in terms of (dimensions in input, dimensions in output)
Notes:
- You need to use import Base.size before defining this function for your layer
BetaML.Nn.backward — Method
backward(layer,x,nextGradient)
Compute backpropagation for this layer
Parameters:
- layer: Worker layer
- x: Input to the layer
- nextGradient: Derivative of the overall loss with respect to the input of the next layer (output of this layer)
Return:
- The evaluated gradient of the loss with respect to this layer's inputs
BetaML.Nn.buildNetwork — Method
buildNetwork(layers,cf;dcf,name)
Instantiate a new Feedforward Neural Network
Parameters:
- layers: Array of layer objects
- cf: Cost function
- dcf: Derivative of the cost function [def: nothing]
- name: Name of the network [def: "Neural Network"]
Notes:
- Even if the network ends with a single output node, the cost function and its derivative should always expect y and ŷ as column vectors.
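For instance, a hedged sketch of a network built with a user-supplied cost function and derivative, both written to receive ŷ and y as column vectors (the ŷ-then-y argument order is an assumption to verify against ?squaredCost):

    using BetaML.Nn
    mycost(ŷ, y)  = sum((ŷ .- y) .^ 2) / 2    # expects column vectors, even with a single output node
    dmycost(ŷ, y) = ŷ .- y
    nn = buildNetwork([DenseLayer(2, 1)], mycost, dcf=dmycost, name="Tiny regressor")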
BetaML.Nn.forward — Method
forward(layer,x)
Predict the output of the layer given the input
Parameters:
- layer: Worker layer
- x: Input to the layer
Return:
- An Array{T,1} of the prediction (even for a scalar)
BetaML.Nn.getGradient — Method
getGradient(layer,x,nextGradient)
Compute backpropagation for this layer
Parameters:
- layer: Worker layer
- x: Input to the layer
- nextGradient: Derivative of the overall loss with respect to the input of the next layer (output of this layer)
Return:
- The evaluated gradient of the loss with respect to this layer's trainable parameters, as a tuple of matrices. It is up to you to decide how to organise this tuple, as long as you are consistent with the getParams() and setParams() functions. Note that starting from BetaML 0.2.2 this tuple needs to be wrapped in its Learnable type.
BetaML.Nn.getGradient — Method
getGradient(nn,xbatch,ybatch)
Retrieve the current gradient of the weights (i.e. the derivative of the cost with respect to the weights)
Parameters:
- nn: Worker network
- xbatch: Input to the network (n,d)
- ybatch: Label input (n,d)
Notes:
- The output is a vector of tuples of each layer's input weights and bias weights
BetaML.Nn.getGradient — Method
getGradient(nn,x,y)
Retrieve the current gradient of the weights (i.e. the derivative of the cost with respect to the weights)
Parameters:
- nn: Worker network
- x: Input to the network (d,1)
- y: Label input (d,1)
Notes:
- The output is a vector of tuples of each layer's input weights and bias weights
BetaML.Nn.getNParams — Method
getNParams(layer)
Return the number of parameters of a layer.
It doesn't need to be implemented by each layer type, as it uses getParams().
BetaML.Nn.getNParams — Method
getNParams(nn)
Return the number of trainable parameters of the neural network.
BetaML.Nn.getParams — Method
getParams(layer)
Get the layer's current value of its trainable parameters
Parameters:
- layer: Worker layer
Return:
- The current value of the layer's trainable parameters, as a tuple of matrices. It is up to you to decide how to organise this tuple, as long as you are consistent with the getGradient() and setParams() functions. Note that starting from BetaML 0.2.2 this tuple needs to be wrapped in its Learnable type.
BetaML.Nn.getParams — Method
getParams(nn)
Retrieve the current weights
Parameters:
- nn: Worker network
Notes:
- The output is a vector of tuples of each layer's input weights and bias weights
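A hedged usage sketch pairing getParams with getGradient and setParams! (documented further below); it assumes Learnable supports the elementwise arithmetic described in its docstring:

    θ = getParams(nn)               # vector of Learnable, one per layer
    ▽ = getGradient(nn, x, y)       # same structure as θ
    # a manual gradient-descent step (train! normally does this for you)
    setParams!(nn, θ .- 0.01 .* ▽)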
BetaML.Nn.initOptAlg! — Method
initOptAlg!(optAlg::ADAM;θ,batchSize,x,y,rng)
Initialize the ADAM algorithm with the parameters m and v as zeros and check parameter bounds
BetaML.Nn.initOptAlg! — Method
initOptAlg!(optAlg;θ,batchSize,x,y)
Initialize the optimisation algorithm
Parameters:
- optAlg: The optimisation algorithm to use
- θ: Current parameters
- batchSize: The size of the batch
- x: The training (input) data
- y: The training "labels" to match
- rng: Random Number Generator (see FIXEDSEED) [default: Random.GLOBAL_RNG]
Notes:
- Only a few optimisers need this function and consequently override it. By default it does nothing, so if you write your own optimiser and don't need to initialise it, you don't have to override this method.
BetaML.Nn.loss — Method
loss(fnn,x,y)
Compute avg. network loss on a test set (or a single (1 × d) data point)
Parameters:
- fnn: Worker network
- x: Input to the network (n) or (n x d)
- y: Label input (n) or (n x d)
BetaML.Nn.setParams! — Method
setParams!(layer,w)
Set the trainable parameters of the layer with the given values
Parameters:
- layer: Worker layer
- w: The new parameters to set (Learnable)
Notes:
- The format of the tuple wrapped by Learnable must be consistent with that of the getParams() and getGradient() functions.
BetaML.Nn.setParams! — Method
setParams!(nn,w)
Update the weights of the network
Parameters:
- nn: Worker network
- w: The new weights to set
BetaML.Nn.show — Method
show(nn)
Print a representation of the Neural Network (layers, dimensions, ...)
Parameters:
- nn: Worker network
BetaML.Nn.singleUpdate! — Method
singleUpdate!(θ,▽;nEpoch,nBatch,batchSize,xbatch,ybatch,optAlg)
Perform the parameters update based on the average batch gradient.
Parameters:
- θ: Current parameters
- ▽: Average gradient of the batch
- nEpoch: Count of the current epoch
- nBatch: Count of the current batch
- nBatches: Number of batches per epoch
- xbatch: Data associated to the current batch
- ybatch: Labels associated to the current batch
- optAlg: The optimisation algorithm to use for the update
Notes:
- This function is overridden so that each optimisation algorithm implements its own version
- Most parameters are not used by any optimisation algorithm. They are provided to support the largest possible class of optimisation algorithms
- Some optimisation algorithms may change their internal structure in this function
BetaML.Nn.train! — Method
train!(nn,x,y;epochs,batchSize,sequential,optAlg,verbosity,cb)
Train a neural network with the given x,y data
Parameters:
- nn: Worker network
- x: Training input to the network (records × dimensions)
- y: Label input (records × dimensions)
- epochs: Number of passages over the training set [def: 100]
- batchSize: Size of each individual batch [def: min(size(x,1),32)]
- sequential: Whether to run all data sequentially instead of in random order [def: false]
- optAlg: The optimisation algorithm to update the gradient at each batch [def: ADAM()]
- verbosity: A verbosity parameter for the trade-off between information and efficiency [def: STD]
- cb: A callback to provide information [def: trainingInfo]
- rng: Random Number Generator (see FIXEDSEED) [default: Random.GLOBAL_RNG]
Return:
- A named tuple with the following information:
  - epochs: Number of epochs actually run
  - ϵ_epochs: The average error on each epoch (if verbosity > LOW)
  - θ_epochs: The parameters at each epoch (if verbosity > STD)
Notes:
- Currently supported algorithms:
  - SGD, the classical (Stochastic) Gradient Descent optimiser
  - ADAM, an adaptive moment estimation optimiser
- Look at the individual optimisation algorithm (?[Name OF THE ALGORITHM]) for info on its parameters, e.g. ?SGD for the Stochastic Gradient Descent.
- You can implement your own optimisation algorithm using a subtype of OptimisationAlgorithm and implementing its constructor and the update function singleUpdate!(⋅) (type ?singleUpdate! for details).
- You can implement your own callback function, although the one provided by default is already pretty generic (its output depends on the verbosity parameter). See trainingInfo for information on the cb parameters. A minimal custom callback sketch is given after these notes.
- Both the callback function and the singleUpdate! function of the optimisation algorithm can be used to stop the training algorithm, by respectively returning true or stop=true.
- The verbosity can be set to any of NONE, LOW, STD, HIGH, FULL.
- The update is done by computing the average gradient for each batch and then calling singleUpdate! to let the optimisation algorithm perform the parameters update.
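A hedged sketch of such a custom callback, stopping the training after a fixed number of epochs (extra keywords are absorbed with kwargs... to stay agnostic about the exact callback signature; see trainingInfo below):

    # Returning true from the callback stops the training
    function earlyStop(nn, x, y; nEpoch=1, nBatch=1, nBatches=1, kwargs...)
        return nEpoch >= 20 && nBatch == nBatches
    end
    # train!(nn, x, y, cb=earlyStop)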
BetaML.Nn.trainingInfo — Method
trainingInfo(nn,x,y;n,batchSize,epochs,verbosity,nEpoch,nBatch)
Default callback function to display information during training, depending on the verbosity level
Parameters:
- nn: Worker network
- x: Batch input to the network (batchSize,d)
- y: Batch label input (batchSize,d)
- n: Size of the full training set
- nBatches: Number of batches per epoch
- epochs: Number of epochs defined for the training
- verbosity: Verbosity level defined for the training (NONE, LOW, STD, HIGH, FULL)
- nEpoch: Counter of the current epoch
- nBatch: Counter of the current batch
Notes:
- Reporting the error (loss of the network) is expensive. Use verbosity=NONE for better performance.