The BetaML.Perceptron Module
BetaML.Perceptron — Module
Perceptron module
Provides linear and kernel classifiers.
It provides the following supervised models:
- PerceptronClassifier: trains a classifier using the classical perceptron algorithm
- KernelPerceptronClassifier: trains a classifier using the kernel perceptron algorithm
- PegasosClassifier: trains a classifier using the Pegasos algorithm
All algorithms are multiclass: PerceptronClassifier and PegasosClassifier employ a one-vs-all strategy, while KernelPerceptronClassifier employs a one-vs-one approach. All of them return, for each record, a "probability" for each class in the form of a dictionary. Use mode(ŷ) to obtain a single class prediction per record, as sketched below.
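For example, a minimal sketch of this two-step workflow (the data here is illustrative):
julia> using BetaML
julia> X = [1.8 2.5; 0.5 20.5; 0.6 18; 1.7 3.7];
julia> y = ["a","b","b","a"];
julia> mod = PerceptronClassifier();
julia> ŷprob = fit!(mod,X,y);   # one Dict(class => "probability") per record
julia> ŷ = mode(ŷprob)          # one class label per record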
These models are available in the MLJ framework as PerceptronClassifier, KernelPerceptronClassifier and PegasosClassifier respectively.
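For instance, a minimal sketch of loading one of them through the MLJ interface (this assumes the MLJ package is installed; in MLJ, X must be a table and y a categorical vector):
julia> using MLJ
julia> Model = @load PerceptronClassifier pkg=BetaML
julia> mach = machine(Model(), X, y)
julia> fit!(mach)
julia> ŷ = predict(mach, X)   # probabilistic per-class predictions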
Module Index
BetaML.Perceptron.KernelPerceptronC_hp
BetaML.Perceptron.KernelPerceptronClassifier
BetaML.Perceptron.PegasosC_hp
BetaML.Perceptron.PegasosClassifier
BetaML.Perceptron.PerceptronC_hp
BetaML.Perceptron.PerceptronClassifier
Detailed API
BetaML.Perceptron.KernelPerceptronC_hp — Type
mutable struct KernelPerceptronC_hp <: BetaMLHyperParametersSet
Hyperparameters for the KernelPerceptronClassifier model.
Parameters:
- kernel: Kernel function to employ. See ?radial_kernel or ?polynomial_kernel for details, or check ?BetaML.Utils to verify whether other kernels are defined (you can always define your own kernel) [def: radial_kernel]
- initial_errors: Initial distribution of the number of errors [def: nothing, i.e. zeros]. If provided, this should be a vector of nModels vectors of nRecords integer values each, where nModels is computed as (n_classes * (n_classes - 1)) / 2
- epochs: Maximum number of epochs, i.e. passages through the whole training sample [def: 100]
- shuffle: Whether to randomly shuffle the data at each iteration (epoch) [def: true]
- tunemethod: The method, and its parameters, to employ for hyperparameter autotuning. See SuccessiveHalvingSearch for the default method. To implement automatic hyperparameter tuning during the (first) fit! call simply set autotune=true and optionally change the default tunemethod options (including the parameter ranges, the resources to employ and the loss function to adopt).
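Note that any function taking two vectors and returning a scalar can serve as a kernel. A minimal sketch with a user-defined kernel (the exponential kernel below is an illustration, not a BetaML built-in):
julia> using BetaML, LinearAlgebra
julia> my_kernel(x,y) = exp(-norm(x-y))   # illustrative user-defined kernel
julia> mod = KernelPerceptronClassifier(kernel=my_kernel, epochs=50)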
BetaML.Perceptron.KernelPerceptronClassifier — Type
mutable struct KernelPerceptronClassifier <: BetaMLSupervisedModel
A "kernel" version of the Perceptron
model (supervised) with user configurable kernel function.
For the parameters see ? KernelPerceptronC_hp
and ?BML_options
Limitations:
- data must be numerical
- online training (retraining) is not supported
Example:
julia> using BetaML
julia> X = [1.8 2.5; 0.5 20.5; 0.6 18; 0.7 22.8; 0.4 31; 1.7 3.7];
julia> y = ["a","b","b","b","b","a"];
julia> quadratic_kernel(x,y) = polynomial_kernel(x,y;degree=2)
quadratic_kernel (generic function with 1 method)
julia> mod = KernelPerceptronClassifier(epochs=100, kernel= quadratic_kernel)
KernelPerceptronClassifier - A "kernelised" version of the perceptron classifier (unfitted)
julia> ŷ = fit!(mod,X,y) |> mode
Running function BetaML.Perceptron.#KernelPerceptronClassifierBinary#17 at /home/lobianco/.julia/dev/BetaML/src/Perceptron/Perceptron_kernel.jl:133
Type `]dev BetaML` to modify the source code (this would change its location on disk)
***
*** Training kernel perceptron for maximum 100 iterations. Random shuffle: true
Avg. error after iteration 1 : 0.5
Avg. error after iteration 10 : 0.16666666666666666
*** Avg. error after epoch 13 : 0.0 (all elements of the set has been correctly classified)
6-element Vector{String}:
"a"
"b"
"b"
"b"
"b"
BetaML.Perceptron.PegasosC_hp — Type
mutable struct PegasosC_hp <: BetaMLHyperParametersSet
Hyperparameters for the PegasosClassifier model.
Parameters:
- learning_rate::Function: Learning rate [def: (epoch -> 1/sqrt(epoch))]
- learning_rate_multiplicative::Float64: Multiplicative term of the learning rate [def: 0.5]
- initial_parameters::Union{Nothing, Matrix{Float64}}: Initial parameters. If given, this should be a matrix of size n_classes by n_features + 1, with the constant term as the first element of each row [def: nothing, i.e. zeros]
- epochs::Int64: Maximum number of epochs, i.e. passages through the whole training sample [def: 1000]
- shuffle::Bool: Whether to randomly shuffle the data at each iteration (epoch) [def: true]
- force_origin::Bool: Whether to force the parameter associated with the constant term to remain zero [def: false]
- return_mean_hyperplane::Bool: Whether to return the average hyperplane coefficients instead of the final ones [def: false]
- tunemethod::AutoTuneMethod: The method, and its parameters, to employ for hyperparameter autotuning. See SuccessiveHalvingSearch for the default method. To implement automatic hyperparameter tuning during the (first) fit! call simply set autotune=true and optionally change the default tunemethod options (including the parameter ranges, the resources to employ and the loss function to adopt).
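As an illustration of the first three hyperparameters, a decaying learning-rate schedule and explicit zero initial parameters could be passed as follows (a sketch: the 2×3 shape assumes 2 classes and 2 features, with the constant term first):
julia> using BetaML
julia> lr(epoch) = 1/(1+epoch)   # illustrative decaying schedule
julia> mod = PegasosClassifier(learning_rate=lr, initial_parameters=zeros(2,3), epochs=200)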
BetaML.Perceptron.PegasosClassifier — Type
mutable struct PegasosClassifier <: BetaMLSupervisedModel
The PegasosClassifier model, a linear, gradient-based classifier. Multiclass is supported using a one-vs-all approach.
See ?PegasosC_hp and ?BML_options for applicable hyperparameters and options.
Example:
julia> using BetaML
julia> X = [1.8 2.5; 0.5 20.5; 0.6 18; 0.7 22.8; 0.4 31; 1.7 3.7];
julia> y = ["a","b","b","b","b","a"];
julia> mod = PegasosClassifier(epochs=100,learning_rate = (epoch -> 0.05) )
PegasosClassifier - a loss-based linear classifier without regularisation term (unfitted)
julia> ŷ = fit!(mod,X,y) |> mode
***
*** Training pegasos for maximum 100 iterations. Random shuffle: true
Avg. error after iteration 1 : 0.5
*** Avg. error after epoch 3 : 0.0 (all elements of the set has been correctly classified)
6-element Vector{String}:
"a"
"b"
"b"
"b"
"b"
"a"
BetaML.Perceptron.PerceptronC_hp — Type
mutable struct PerceptronC_hp <: BetaMLHyperParametersSet
Hyperparameters for the PerceptronClassifier model.
Parameters:
- initial_parameters::Union{Nothing, Matrix{Float64}}: Initial parameters. If given, this should be a matrix of size n_classes by n_features + 1, with the constant term as the first element of each row [def: nothing, i.e. zeros]
- epochs::Int64: Maximum number of epochs, i.e. passages through the whole training sample [def: 1000]
- shuffle::Bool: Whether to randomly shuffle the data at each iteration (epoch) [def: true]
- force_origin::Bool: Whether to force the parameter associated with the constant term to remain zero [def: false]
- return_mean_hyperplane::Bool: Whether to return the average hyperplane coefficients instead of the final ones [def: false]
- tunemethod::AutoTuneMethod: The method, and its parameters, to employ for hyperparameter autotuning. See SuccessiveHalvingSearch for the default method. To implement automatic hyperparameter tuning during the (first) fit! call simply set autotune=true and optionally change the default tunemethod options (including the parameter ranges, the resources to employ and the loss function to adopt).
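For example, autotuning with a customised search could look like the following (a sketch: the hpranges contents are illustrative choices, not tested defaults):
julia> mod = PerceptronClassifier(autotune=true, tunemethod=SuccessiveHalvingSearch(hpranges=Dict("epochs"=>[100,500,1000])))
julia> ŷ = fit!(mod,X,y) |> mode   # tuning happens on this first fit! call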
BetaML.Perceptron.PerceptronClassifier — Type
mutable struct PerceptronClassifier <: BetaMLSupervisedModel
The classical "perceptron" linear classifier (supervised).
For the parameters see ?PerceptronC_hp and ?BML_options.
Notes:
- data must be numerical
- online fitting (re-fitting with new data) is not supported
Example:
julia> using BetaML
julia> X = [1.8 2.5; 0.5 20.5; 0.6 18; 0.7 22.8; 0.4 31; 1.7 3.7];
julia> y = ["a","b","b","b","b","a"];
julia> mod = PerceptronClassifier(epochs=100,return_mean_hyperplane=false)
PerceptronClassifier - The classic linear perceptron classifier (unfitted)
julia> ŷ = fit!(mod,X,y) |> mode
Running function BetaML.Perceptron.#perceptronBinary#84 at /home/lobianco/.julia/dev/BetaML/src/Perceptron/Perceptron_classic.jl:150
Type `]dev BetaML` to modify the source code (this would change its location on disk)
***
*** Training perceptron for maximum 100 iterations. Random shuffle: true
Avg. error after iteration 1 : 0.5
*** Avg. error after epoch 5 : 0.0 (all elements of the set has been correctly classified)
6-element Vector{String}:
"a"
"b"
"b"
"b"
"b"
"a"