The BetaML.Perceptron Module
BetaML.Perceptron — Module

Perceptron module

Provides linear and kernel classifiers.
See a runnable example on myBinder
- perceptron: Train data using the classical perceptron
- kernelPerceptron: Train data using the kernel perceptron
- pegasus: Train data using the pegasus algorithm
- predict: Predict data using the parameters from one of the above algorithms
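A minimal end-to-end sketch of the typical workflow (the data here is illustrative, and loading the module with using BetaML.Perceptron is assumed):

julia> using BetaML.Perceptron
julia> X = [1.1 2.1; 5.3 4.2; 1.8 1.7];   # n × d feature matrix
julia> y = [-1, 1, -1];                   # labels in ± 1
julia> out = perceptron(X, y);            # named tuple with θ, θ₀, ...
julia> ŷ = predict(X, out.θ, out.θ₀)      # predicted labels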
Module Index
- BetaML.Perceptron.kernelPerceptron
- BetaML.Perceptron.pegasus
- BetaML.Perceptron.perceptron
- BetaML.Perceptron.predict
- BetaML.Perceptron.predict
Detailed API
BetaML.Perceptron.kernelPerceptron — Method

kernelPerceptron(x,y;K,T,α,nMsgs,rShuffle)
Train a Kernel Perceptron algorithm based on x and y
Parameters:
- x: Feature matrix of the training data (n × d)
- y: Associated labels of the training data, in the format of ± 1
- K: Kernel function to employ. See ?radialKernel or ?polynomialKernel for details, or check ?BetaML.Utils to verify whether other kernels are defined (you can always define your own kernel) [def: radialKernel]
- T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
- α: Initial distribution of the errors [def: zeros(length(y))]
- nMsgs: Maximum number of messages to show if all iterations are done
- rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
Return a named tuple with:
- x: the x data (possibly shuffled, if rShuffle=true)
- y: the labels
- α: the errors associated to each record
- errors: the number of errors in the last iteration
- besterrors: the minimum number of errors in classifying the data ever reached
- iterations: the actual number of iterations performed
- separated: a flag indicating whether the data has been successfully separated
Notes:
- The trained data can then be used to make predictions using the function predict(). If the option rShuffle has been used, it is important to pass to predict() the returned (x, y, α), as these may have been shuffled compared with the original (x, y); see the sketch after the example below.
Example:
julia> kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])
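A fuller sketch of the workflow described in the notes above, passing the returned (x, y, α) on to predict (illustrative data; the module is assumed to be loaded):

julia> xtrain = [1.1 2.1; 5.3 4.2; 1.8 1.7];
julia> ytrain = [-1, 1, -1];
julia> out = kernelPerceptron(xtrain, ytrain);   # named tuple with x, y, α, ...
julia> ŷ = predict([2.0 2.0; 6.0 5.0], out.x, out.y, out.α)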
BetaML.Perceptron.pegasus — Method

pegasus(x,y;θ,θ₀,λ,η,T,nMsgs,rShuffle,forceOrigin)

Train the pegasus algorithm based on x and y (labels)
Parameters:
- x: Feature matrix of the training data (n × d)
- y: Associated labels of the training data, in the format of ± 1
- θ: Initial value of the weights (parameter) [def: zeros(d)]
- θ₀: Initial value of the weight (parameter) associated to the constant term [def: 0]
- λ: Multiplicative term of the learning rate
- η: Learning rate [def: (t -> 1/sqrt(t))]
- T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
- nMsgs: Maximum number of messages to show if all iterations are done
- rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
- forceOrigin: Whether to force θ₀ to remain zero [def: false]
Return a named tuple with:
- θ: The final weights of the classifier
- θ₀: The final weight of the classifier associated to the constant term
- avgθ: The average weights of the classifier
- avgθ₀: The average weight of the classifier associated to the constant term
- errors: The number of errors in the last iteration
- besterrors: The minimum number of errors in classifying the data ever reached
- iterations: The actual number of iterations performed
- separated: Whether the data has been successfully separated
Notes:
- The trained parameters can then be used to make predictions using the function predict().
Example:
julia> pegasus([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])
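A sketch of using the returned parameters for prediction (illustrative data; the module is assumed to be loaded). The averaged parameters avgθ and avgθ₀ are also returned and can be used in their place:

julia> out = pegasus([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1]);
julia> ŷ = predict([2.0 2.0; 6.0 5.0], out.θ, out.θ₀)   # or out.avgθ, out.avgθ₀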
BetaML.Perceptron.perceptron — Method

perceptron(x,y;θ,θ₀,T,nMsgs,rShuffle,forceOrigin)

Train a perceptron algorithm based on x and y (labels)
Parameters:
- x: Feature matrix of the training data (n × d)
- y: Associated labels of the training data, in the format of ± 1
- θ: Initial value of the weights (parameter) [def: zeros(d)]
- θ₀: Initial value of the weight (parameter) associated to the constant term [def: 0]
- T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
- nMsgs: Maximum number of messages to show if all iterations are done
- rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
- forceOrigin: Whether to force θ₀ to remain zero [def: false]
Return a named tuple with:
- θ: The final weights of the classifier
- θ₀: The final weight of the classifier associated to the constant term
- avgθ: The average weights of the classifier
- avgθ₀: The average weight of the classifier associated to the constant term
- errors: The number of errors in the last iteration
- besterrors: The minimum number of errors in classifying the data ever reached
- iterations: The actual number of iterations performed
- separated: Whether the data has been successfully separated
Notes:
- The trained parameters can then be used to make predictions using the function predict().
Example:
julia> perceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])
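A sketch showing the keyword options and inspecting the returned tuple (illustrative data; the module is assumed to be loaded):

julia> out = perceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1], rShuffle=true, T=500);
julia> out.separated                          # whether a separating hyperplane was found within T iterations
julia> ŷ = predict([2.0 2.0], out.θ, out.θ₀)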
BetaML.Perceptron.predict — Function

predict(x,θ,θ₀)

Predict binary labels {-1,1} given the feature matrix and the linear coefficients
Parameters:
- x: Feature matrix of the data to predict (n × d)
- θ: The trained parameters
- θ₀: The trained bias parameter [def: 0]
Return:
y: Vector of the predicted labels
Example:
julia> predict([1.1 2.1; 5.3 4.2; 1.8 1.7], [3.2,1.2])
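For reference, a hand-rolled sketch of the linear rule this function computes, ŷ = sign(xθ + θ₀); the library's handling of points lying exactly on the boundary may differ:

julia> X = [1.1 2.1; 5.3 4.2; 1.8 1.7]; θ = [3.2, 1.2]; θ₀ = 0;
julia> ŷ = sign.(X * θ .+ θ₀)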
BetaML.Perceptron.predict — Method

predict(x,xtrain,ytrain,α;K)

Predict binary labels {-1,1} given the feature matrix and the training data together with their errors (as trained by a kernel perceptron algorithm)
Parameters:
- x: Feature matrix of the data to predict (n × d)
- xtrain: The feature vectors used for the training
- ytrain: The labels of the training set
- α: The errors associated to each record
- K: The kernel function used for the training and to be used for the prediction [def: radialKernel]
Return:
y: Vector of the predicted labels
Example:
julia> out = kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1]);
julia> predict([2.1 3.1; 7.3 5.2], out.x, out.y, out.α)
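For intuition, a hand-rolled sketch of the standard kernel perceptron decision rule, ŷ = sign(Σⱼ αⱼ yⱼ K(xⱼ, x)), with an illustrative RBF kernel defined inline (the library's own kernels and boundary handling may differ):

julia> K(u, v) = exp(-sum((u .- v) .^ 2))   # illustrative RBF kernel, not the library's radialKernel
julia> decision(x, xt, yt, α) = sign(sum(α[j] * yt[j] * K(xt[j, :], x) for j in eachindex(yt)))
julia> decision([2.0, 2.0], [1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1], [1, 0, 1])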