The BetaML.Perceptron Module
BetaML.Perceptron — Module

The Perceptron module provides linear and kernel classifiers.
See a runnable example on myBinder
- perceptron: Train data using the classical perceptron
- kernelPerceptron: Train data using the kernel perceptron
- pegasus: Train data using the pegasus algorithm
- predict: Predict data using parameters from one of the above algorithms
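To make the train/predict cycle concrete, here is a minimal, self-contained sketch of the classical perceptron in plain Julia. It illustrates the algorithm only, not the BetaML API: `toy_perceptron` and `toy_predict` are hypothetical names, and BetaML's own functions accept more options.

```julia
using LinearAlgebra

# Toy classical perceptron: sweep the data repeatedly and update (θ, θ₀)
# whenever a record is misclassified, i.e. when y*(θ⋅x + θ₀) ≤ 0.
function toy_perceptron(x, y; T=1000)
    n, d = size(x)
    θ, θ₀ = zeros(d), 0.0
    for _ in 1:T
        errors = 0
        for i in 1:n
            if y[i] * (dot(θ, x[i, :]) + θ₀) <= 0
                θ  .+= y[i] .* x[i, :]   # move the boundary toward the record
                θ₀  += y[i]
                errors += 1
            end
        end
        errors == 0 && return (θ=θ, θ₀=θ₀, separated=true)
    end
    return (θ=θ, θ₀=θ₀, separated=false)
end

# Predict binary labels in {-1, 1} given the linear coefficients
toy_predict(x, θ, θ₀) = Int.(sign.(x * θ .+ θ₀))

out = toy_perceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1])
toy_predict([1.1 2.1; 5.3 4.2; 1.8 1.7], out.θ, out.θ₀)  # → [-1, 1, -1]
```

Since this toy data is linearly separable, the loop terminates with `separated=true` and the training records are classified exactly.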
Module Index
- BetaML.Perceptron.kernelPerceptron
- BetaML.Perceptron.pegasus
- BetaML.Perceptron.perceptron
- BetaML.Perceptron.predict
- BetaML.Perceptron.predict
Detailed API
BetaML.Perceptron.kernelPerceptron — Method

kernelPerceptron(x,y;K,T,α,nMsgs,rShuffle)
Train a Kernel Perceptron algorithm based on x and y
Parameters:
- x: Feature matrix of the training data (n × d)
- y: Associated labels of the training data, in the format of ±1
- K: Kernel function to employ. See ?radialKernel or ?polynomialKernel for details, or check ?BetaML.Utils to verify if other kernels are defined (you can always define your own kernel) [def: radialKernel]
- T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
- α: Initial distribution of the errors [def: zeros(length(y))]
- nMsgs: Maximum number of messages to show if all iterations are done
- rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
Return a named tuple with:
- x: the x data (possibly shuffled, if rShuffle=true)
- y: the labels (in the same, possibly shuffled, order as x)
- α: the errors associated to each record
- errors: the number of errors in the last iteration
- besterrors: the minimum number of errors in classifying the data ever reached
- iterations: the actual number of iterations performed
- separated: whether the data has been successfully separated
Notes:
- The trained data can then be used to make predictions using the function predict(). If the option rShuffle has been used, it is important to pass to predict() the returned (x,y,α), as these will have been shuffled compared with the original (x,y).
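The returned α feed the standard kernel-perceptron decision rule: a new point is classified by the sign of the α-weighted kernel similarities with the training records. A plain-Julia sketch (rbf and kernel_decision are hypothetical stand-ins for BetaML's radialKernel and predict):

```julia
# Decision rule: ŷ(x) = sign( Σᵢ αᵢ yᵢ K(xᵢ, x) )
rbf(x1, x2; γ=1.0) = exp(-γ * sum(abs2, x1 .- x2))   # stand-in radial (RBF) kernel

function kernel_decision(x, xtrain, ytrain, α; K=rbf)
    n = size(xtrain, 1)
    s = sum(α[i] * ytrain[i] * K(xtrain[i, :], x) for i in 1:n)
    return s >= 0 ? 1 : -1
end

xtrain = [1.1 2.1; 5.3 4.2; 1.8 1.7]
ytrain = [-1, 1, -1]
α      = [1.0, 1.0, 1.0]   # e.g. one counted error per record
kernel_decision([5.0, 4.0], xtrain, ytrain, α)  # → 1 (closest to the +1 record)
```

Records with α = 0 never contributed an error during training and drop out of the sum entirely; only the "support" records influence the prediction.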
Example:
julia> kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])

BetaML.Perceptron.pegasus — Method

pegasus(x,y;θ,θ₀,λ,η,T,nMsgs,rShuffle,forceOrigin)
Train the pegasus algorithm based on x and y (labels)
Parameters:
- x: Feature matrix of the training data (n × d)
- y: Associated labels of the training data, in the format of ±1
- θ: Initial value of the weights (parameter) [def: zeros(d)]
- θ₀: Initial value of the weight (parameter) associated to the constant term [def: 0]
- λ: Multiplicative term of the learning rate
- η: Learning rate [def: (t -> 1/sqrt(t))]
- T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
- nMsgs: Maximum number of messages to show if all iterations are done
- rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
- forceOrigin: Whether to force θ₀ to remain zero [def: false]
Return a named tuple with:
- θ: The final weights of the classifier
- θ₀: The final weight of the classifier associated to the constant term
- avgθ: The average weights of the classifier
- avgθ₀: The average weight of the classifier associated to the constant term
- errors: The number of errors in the last iteration
- besterrors: The minimum number of errors in classifying the data ever reached
- iterations: The actual number of iterations performed
- separated: Whether the data has been successfully separated
Notes:
- The trained parameters can then be used to make predictions using the function predict().
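The update rule at the heart of pegasus can be sketched as follows, assuming the default learning rate η(t) = 1/√t and a simple sequential sweep (pegasus_sweep! is a hypothetical name; the real implementation differs in details such as shuffling and weight averaging):

```julia
using LinearAlgebra

# One pegasus-style pass: on a margin violation (y⋅(θ⋅x + θ₀) ≤ 0) take a
# regularised gradient step; otherwise only shrink θ by the regularisation term.
function pegasus_sweep!(θ, θ₀, x, y; λ=0.5, η=t -> 1/sqrt(t), t0=1)
    t = t0
    for i in 1:size(x, 1)
        ηₜ = η(t)
        if y[i] * (dot(θ, x[i, :]) + θ₀) <= 0
            θ .= (1 - ηₜ * λ) .* θ .+ ηₜ * y[i] .* x[i, :]
            θ₀ += ηₜ * y[i]
        else
            θ .= (1 - ηₜ * λ) .* θ   # shrink only: no data term without a violation
        end
        t += 1
    end
    return θ, θ₀, t
end

θ, θ₀, _ = pegasus_sweep!(zeros(2), 0.0, [1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1])
```

Compared with the plain perceptron, the (1 − ηₜλ) shrinkage plays the role of the L2 regularisation term in the pegasus objective, and the step size decays with t.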
Example:
julia> pegasus([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])

BetaML.Perceptron.perceptron — Method

perceptron(x,y;θ,θ₀,T,nMsgs,rShuffle,forceOrigin)
Train a perceptron algorithm based on x and y (labels)
Parameters:
- x: Feature matrix of the training data (n × d)
- y: Associated labels of the training data, in the format of ±1
- θ: Initial value of the weights (parameter) [def: zeros(d)]
- θ₀: Initial value of the weight (parameter) associated to the constant term [def: 0]
- T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
- nMsgs: Maximum number of messages to show if all iterations are done
- rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
- forceOrigin: Whether to force θ₀ to remain zero [def: false]
Return a named tuple with:
- θ: The final weights of the classifier
- θ₀: The final weight of the classifier associated to the constant term
- avgθ: The average weights of the classifier
- avgθ₀: The average weight of the classifier associated to the constant term
- errors: The number of errors in the last iteration
- besterrors: The minimum number of errors in classifying the data ever reached
- iterations: The actual number of iterations performed
- separated: Whether the data has been successfully separated
Notes:
- The trained parameters can then be used to make predictions using the function predict().
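The avgθ/avgθ₀ outputs correspond to the averaged-perceptron idea: keep a running sum of the weights after every record and return its mean alongside the final weights, which tends to be more robust when the data are not linearly separable. A hypothetical sketch (perceptron_avg is not the BetaML function itself):

```julia
using LinearAlgebra

# Averaged perceptron sketch: accumulate θ and θ₀ after every record so that
# weights that survived many steps weigh more in the returned average.
function perceptron_avg(x, y; T=1000)
    n, d = size(x)
    θ, θ₀ = zeros(d), 0.0
    sumθ, sumθ₀, steps = zeros(d), 0.0, 0
    for _ in 1:T
        errors = 0
        for i in 1:n
            if y[i] * (dot(θ, x[i, :]) + θ₀) <= 0
                θ .+= y[i] .* x[i, :]
                θ₀ += y[i]
                errors += 1
            end
            sumθ .+= θ; sumθ₀ += θ₀; steps += 1
        end
        errors == 0 && break
    end
    return (θ=θ, θ₀=θ₀, avgθ=sumθ ./ steps, avgθ₀=sumθ₀ / steps)
end

res = perceptron_avg([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1, 1, -1])
```

Either (θ, θ₀) or (avgθ, avgθ₀) can then be fed to a linear prediction rule such as predict().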
Example:
julia> perceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])

BetaML.Perceptron.predict — Function

predict(x,θ,θ₀)
Predict a binary label {-1,1} given the feature vector and the linear coefficients
Parameters:
- x: Feature matrix of the data to predict (n × d)
- θ: The trained parameters
- θ₀: The trained bias parameter [def: 0]
Return :
- y: Vector of the predicted labels
Example:
julia> predict([1.1 2.1; 5.3 4.2; 1.8 1.7], [3.2,1.2])

BetaML.Perceptron.predict — Method

predict(x,xtrain,ytrain,α;K)
Predict a binary label {-1,1} given the feature vector and the training data together with their errors (as trained by a kernel perceptron algorithm)
Parameters:
- x: Feature matrix of the data to predict (n × d)
- xtrain: The feature vectors used for the training
- ytrain: The labels of the training set
- α: The errors associated to each record
- K: The kernel function used for the training and to be used for the prediction [def: radialKernel]
Return :
- y: Vector of the predicted labels
Example:
julia> model = kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1]);
julia> predict([1.1 2.1; 5.3 4.2; 1.8 1.7], model.x, model.y, model.α)