LossFunction

Normal LossFunctions

reduction specifies the reduction to apply to the output:

  • none : do nothing and return the vector of losses.
  • sum : return the sum of the losses.
  • mean : return the mean of the losses.

These functions take predicted values and teacher data as arguments, and support only the following type combinations (see the sketch after this list):

  • Vector and Vector
  • Vector and Number
  • Matrix (a Vector with an added axis) and Vector
  • Matrix (a Vector with an added axis) and Number
  • Matrix and Matrix
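
Example (a minimal sketch of the reduction keyword, using the mse function documented below; assumes the functions are in scope, e.g. via using HorseML.LossFunction, and the values are made up):

julia> y = [1.0, 2.0, 3.0];

julia> t = [1.5, 2.0, 2.5];

julia> mse(y, t; reduction="none");   # vector of per-element losses

julia> mse(y, t; reduction="sum");    # sum of the losses

julia> mse(y, t; reduction="mean");   # mean of the losses (the default)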
HorseML.LossFunction.mse
mse(y, t; reduction="mean")

Mean Square Error. This is the expression:

\[MSE(y, t) = \frac{\sum_{i=1}^{n} (t_{i}-y_{i})^{2}}{n}\]

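Example (a minimal sketch; the value follows the expression above):

julia> mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])   # (0.5^2 + 0^2 + (-0.5)^2)/3 ≈ 0.1667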
HorseML.LossFunction.cee
cee(y, t; reduction="mean")

Cross Entropy Error. This is the expression:

\[CEE(y, t) = -\frac{\sum_{i=1}^{n} t_{i}\ln y_{i}}{n}\]

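Example (a minimal sketch; y holds predicted probabilities and t one-hot teacher data):

julia> y = [0.7, 0.2, 0.1];

julia> t = [1.0, 0.0, 0.0];

julia> cee(y, t)   # -(1.0 * ln(0.7))/3 ≈ 0.1189, per the expression above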
HorseML.LossFunction.mae
mae(y, t)

Mean Absolute Error. This is the expression:

\[MAE(y, t) = \frac{\sum_{i=1}^{n} |t_{i}-y_{i}|}{n}\]

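Example (a minimal sketch):

julia> mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])   # (0.5 + 0.0 + 0.5)/3 ≈ 0.3333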
HorseML.LossFunction.huber
huber(y, t; δ=1, reduction="mean")

Huber Loss. When δ is large it behaves like mse, and when δ is small it behaves like mae. This is the expression:

\[a = |t_{i}-y_{i}| \\ Huber(y, t) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \begin{array}{ll} \frac{1}{2}a^{2} & (a \leq \delta) \\ \delta(a-\frac{1}{2}\delta) & (a \gt \delta) \end{array} \right.\]

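Example (a minimal sketch; with the default δ=1, the third error (2.5) falls in the linear branch):

julia> y = [1.0, 2.0, 5.0];

julia> t = [1.5, 2.0, 2.5];

julia> huber(y, t)        # (0.5*0.5^2 + 0 + 1*(2.5 - 0.5))/3 ≈ 0.7083

julia> huber(y, t; δ=3)   # with a larger δ, every error falls in the quadratic, mse-like branch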
HorseML.LossFunction.logcosh_loss
logcosh_loss(y, t; reduction="mean")

Log-Cosh Loss. It behaves like mae for large errors, but is close to mse when the error is small. This is the expression:

\[Logcosh(y, t) = \frac{\sum_{i=1}^{n} \log(\cosh(t_{i}-y_{i}))}{n}\]

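Example (a minimal sketch):

julia> logcosh_loss([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])   # (2*log(cosh(0.5)) + log(cosh(0.0)))/3 ≈ 0.08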
HorseML.LossFunction.poisson
poisson(y, t; reduction="mean")

Poisson Loss, the discrepancy between the predicted values and the teacher data under a Poisson distribution. This is the expression:

\[Poisson(y, t) = \frac{\sum_{i=1}^{n} (y_{i}-t_{i} \ln y_{i})}{n}\]

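Example (a minimal sketch; y holds predicted rates, which must be positive, and t observed counts):

julia> y = [1.2, 2.5, 0.8];

julia> t = [1.0, 2.0, 1.0];

julia> poisson(y, t)   # mean of y_i - t_i*ln(y_i) ≈ 0.9027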
HorseML.LossFunction.hinge
hinge(y, t; reduction="mean")

Hinge Loss, used for SVMs. This is the expression:

\[Hinge(y, t) = \frac{\sum_{i=1}^{n} \max(1-y_{i}t_{i}, 0)}{n}\]

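Example (a minimal sketch; t holds labels in {-1, 1} and y raw classifier outputs):

julia> y = [0.8, -0.5, 2.0];

julia> t = [1.0, -1.0, 1.0];

julia> hinge(y, t)   # (max(1-0.8, 0) + max(1-0.5, 0) + max(1-2.0, 0))/3 ≈ 0.2333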
HorseML.LossFunction.smooth_hinge
smooth_hinge(y, t; reduction="mean")

Smoothed Hinge Loss. This is the expression:

\[smoothHinge(y, t) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \begin{array}{ll} 0 & (t_{i}y_{i} \geq 1) \\ \frac{1}{2}(1-t_{i}y_{i})^{2} & (0 \lt t_{i}y_{i} \lt 1) \\ \frac{1}{2} - t_{i}y_{i} & (t_{i}y_{i} \leq 0) \end{array} \right.\]

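Example (a minimal sketch with the same labels as above; each product t_i*y_i selects one branch of the expression):

julia> y = [0.8, -0.5, 2.0];

julia> t = [1.0, -1.0, 1.0];

julia> smooth_hinge(y, t)   # (0.5*(1-0.8)^2 + 0.5*(1-0.5)^2 + 0)/3 ≈ 0.0483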

LossFunctions for Clustering

HorseML.LossFunction.dm
dm(x, y, μ; reduction="mean")

Distortion Measure. This is used as an evaluation function for the Kmeans model. This is the expression:

\[DM(x, y, μ) = \sum^{N-1}_{n=0} \sum^{K-1}_{k=0} y_{nk}|| x_{n} - \mu_{k} ||^2\]

Example

julia> model = Kmeans(3);

julia> dm(x, model(x), model.μ);   # x is the data; model(x) gives the cluster assignments y
HorseML.LossFunction.nlh
nlh(x, π, μ, σ)

Negative log-likelihood. This is used as an evaluation function for the GMM model. This is the expression:

\[E(x, \pi, \mu, \sigma) = - \sum^{N-1}_{n=0} \left\{ \log \sum^{K-1}_{k=0} \pi_{k} \mathcal{N}(x_{n}|\mu_{k}, \sigma_{k}) \right\}\]

Example

julia> model = GMM(3);

julia> π, μ, σ = model.π, model.μ, model.σ;

julia> nlh(x, π, μ, σ);