The BetaML.Perceptron Module

Module Index

Detailed API

BetaML.Perceptron.kernelPerceptron - Method

kernelPerceptron(x,y;K,T,α,nMsgs,rShuffle)

Train a kernel perceptron algorithm based on x and y (labels)

Parameters:

  • x: Feature matrix of the training data (n × d)
  • y: Associated labels of the training data, in the format of ±1
  • K: Kernel function to employ. See ?radialKernel or ?polynomialKernel for details, or check ?BetaML.Utils to verify whether other kernels are defined (you can always define your own kernel) [def: radialKernel]
  • T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
  • α: Initial distribution of the errors [def: zeros(length(y))]
  • nMsgs: Maximum number of messages to show if all iterations are done
  • rShuffle: Whether to randomly shuffle the data at each iteration [def: false]

Return a named tuple with:

  • x: the x data (possibly shuffled, if rShuffle=true)
  • y: the labels
  • α: the errors associated with each record
  • errors: the number of errors in the last iteration
  • besterrors: the minimum number of errors in classifying the data ever reached
  • iterations: the actual number of iterations performed
  • separated: a flag indicating whether the data was successfully separated

Notes:

  • The trained data can then be used to make predictions using the function predict(), as sketched after the example below. If the option rShuffle has been used, it is important to pass predict() the returned (x,y,α), as these will have been shuffled compared with the original (x,y).

Example:

julia> kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])
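
A slightly fuller sketch of the train-then-predict workflow described in the Notes; the query point [2.0 2.0] and the keyword choice K=polynomialKernel are purely illustrative:

julia> res = kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1], K=polynomialKernel);
julia> ŷ = predict([2.0 2.0], res.x, res.y, res.α)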
source
BetaML.Perceptron.pegasus - Method

pegasus(x,y;θ,θ₀,λ,η,T,nMsgs,rShuffle,forceOrigin)

Train the pegasus algorithm based on x and y (labels)

Parameters:

  • x: Feature matrix of the training data (n × d)
  • y: Associated labels of the training data, in the format of ±1
  • θ: Initial value of the weights (parameter) [def: zeros(d)]
  • θ₀: Initial value of the weight (parameter) associated with the constant term [def: 0]
  • λ: Multiplicative term of the learning rate (see the update rule sketched after this list)
  • η: Learning rate [def: (t -> 1/sqrt(t))]
  • T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
  • nMsgs: Maximum number of messages to show if all iterations are done
  • rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
  • forceOrigin: Whether to force θ₀ to remain zero [def: false]
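
For orientation, λ and η(t) parameterize a step of the following form; this is the textbook Pegasus update (Shalev-Shwartz et al.), not necessarily the exact BetaML implementation:

    θ ← (1 - η(t)⋅λ)⋅θ + η(t)⋅yᵢ⋅xᵢ    if yᵢ⋅(θ⋅xᵢ + θ₀) < 1
    θ ← (1 - η(t)⋅λ)⋅θ                 otherwise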

Return a named tuple with:

  • θ: The final weights of the classifier
  • θ₀: The final weight of the classifier associated to the constant term
  • avgθ: The average weights of the classifier
  • avgθ₀: The average weight of the classifier associated to the constant term
  • errors: The number of errors in the last iteration
  • besterrors: The minimum number of errors in classifying the data ever reached
  • iterations: The actual number of iterations performed
  • separated: Whether the data has been successfully separated

Notes:

  • The trained parameters can then be used to make predictions using the function predict(), as sketched after the example below.

Example:

julia> pegasus([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])
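
A minimal sketch of calling predict() on the returned parameters (res is just an illustrative variable name; here the predictions are computed on the training data itself):

julia> res = pegasus([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1]);
julia> ŷ = predict([1.1 2.1; 5.3 4.2; 1.8 1.7], res.θ, res.θ₀)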
source
BetaML.Perceptron.perceptron - Method

perceptron(x,y;θ,θ₀,T,nMsgs,rShuffle,forceOrigin)

Train a perceptron algorithm based on x and y (labels)

Parameters:

  • x: Feature matrix of the training data (n × d)
  • y: Associated labels of the training data, in the format of ±1
  • θ: Initial value of the weights (parameter) [def: zeros(d)]
  • θ₀: Initial value of the weight (parameter) associated with the constant term [def: 0]
  • T: Maximum number of iterations across the whole set (if the set is not fully classified earlier) [def: 1000]
  • nMsgs: Maximum number of messages to show if all iterations are done
  • rShuffle: Whether to randomly shuffle the data at each iteration [def: false]
  • forceOrigin: Whether to force θ₀ to remain zero [def: false]

Return a named tuple with:

  • θ: The final weights of the classifier
  • θ₀: The final weight of the classifier associated to the constant term
  • avgθ: The average weights of the classifier
  • avgθ₀: The average weight of the classifier associated to the constant term
  • errors: The number of errors in the last iteration
  • besterrors: The minimum number of errors in classifying the data ever reached
  • iterations: The actual number of iterations performed
  • separated: Whether the data has been successfully separated

Notes:

  • The trained parameters can then be used to make predictions using the function predict(), as sketched after the example below.

Example:

julia> perceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1])
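
A minimal sketch of calling predict() on the returned parameters; res is an illustrative name, and the averaged weights (avgθ, avgθ₀) can be swapped in for (θ, θ₀):

julia> res = perceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1]);
julia> ŷ = predict([1.1 2.1; 5.3 4.2; 1.8 1.7], res.θ, res.θ₀)
julia> ŷavg = predict([1.1 2.1; 5.3 4.2; 1.8 1.7], res.avgθ, res.avgθ₀)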
source
BetaML.Perceptron.predict - Function

predict(x,θ,θ₀)

Predict a binary label {-1,1} given the feature vector and the linear coefficients
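
That is, assuming the usual linear classification rule, each prediction is computed as

    yᵢ = sign(θ⋅xᵢ + θ₀)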

Parameters:

  • x: Feature matrix of the data to predict (n × d)
  • θ: The trained parameters
  • θ₀: The trained bias parameter [def: 0]

Return :

  • y: Vector of the predicted labels

Example:

julia> predict([1.1 2.1; 5.3 4.2; 1.8 1.7], [3.2,1.2])
source
BetaML.Perceptron.predict - Method

predict(x,xtrain,ytrain,α;K)

Predict a binary label {-1,1} given the feature vector and the training data together with their errors (as trained by the kernel perceptron algorithm)

Parameters:

  • x: Feature matrix of the data to predict (n × d)
  • xtrain: The feature vectors used for the training
  • ytrain: The labels of the training set
  • α: The errors associated with each record (see the decision rule sketched after this list)
  • K: The kernel function used for the training and to be used for the prediction [def: radialKernel]
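
For reference, assuming the usual kernel perceptron formulation, each prediction is computed as

    ŷ(x) = sign( Σⱼ α[j] ⋅ ytrain[j] ⋅ K(xtrain[j,:], x) )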

Return :

  • y: Vector of the predicted labels

Example:

julia> res = kernelPerceptron([1.1 2.1; 5.3 4.2; 1.8 1.7], [-1,1,-1]);
julia> ŷ = predict(res.x, res.x, res.y, res.α)
source