LossFunction

reduction specifies the reduction to apply to the output:

  • none : apply no reduction and return the vector of element-wise losses.
  • sum : return the sum of the losses.
  • mean : return the mean of the losses.

These functions expect both the prediction y and the target t to be vectors; the sketch below illustrates the reduction semantics.
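
For concreteness, here is a minimal Julia sketch of these reduction semantics (the helper name apply_reduction is hypothetical, not part of the package):

```julia
# Illustrative helper showing how a reduction keyword is typically applied.
# `apply_reduction` is a hypothetical name, not LearningHorse's internal code.
function apply_reduction(losses::AbstractVector; reduction="mean")
    reduction == "none" && return losses                        # vector of losses
    reduction == "sum"  && return sum(losses)                   # scalar sum
    reduction == "mean" && return sum(losses) / length(losses)  # scalar mean
    throw(ArgumentError("unknown reduction: $reduction"))
end
```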

LearningHorse.LossFunction.huber — Function
huber(y, t; δ=1, reduction="mean")

Huber loss. For large δ it behaves like mse; for small δ it behaves like mae. This is the expression:

\[a = |t_{i}-y_{i}| \\ Huber(y, t) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \begin{array}{ll} \frac{1}{2}a^{2} & (a \leq \delta) \\ \delta(a-\frac{1}{2}\delta) & (a \gt \delta) \end{array} \right.\]
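
A minimal Julia sketch of this formula (huber_sketch is a hypothetical name; this is not the package's implementation):

```julia
# Sketch of the Huber formula above; not LearningHorse's source.
function huber_sketch(y::AbstractVector, t::AbstractVector; δ=1, reduction="mean")
    a = abs.(t .- y)                                  # element-wise absolute error
    losses = ifelse.(a .<= δ, 0.5 .* a .^ 2, δ .* (a .- 0.5δ))
    reduction == "none" ? losses :
        reduction == "sum" ? sum(losses) : sum(losses) / length(losses)
end
```

For example, huber_sketch([1.0, 2.0], [1.5, 0.0]) gives (0.125 + 1.5) / 2 = 0.8125 with the default δ = 1.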

LearningHorse.LossFunction.logcosh_loss — Function
logcosh_loss(y, t; reduction="mean")

Log-cosh loss. It behaves like mae for large errors, but for small errors it is close to mse. This is the expression:

\[Logcosh(y, t) = \frac{\sum_{i=1}^{n} \log(\cosh(t_{i}-y_{i}))}{n}\]
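
The formula translates directly into a one-line Julia sketch (logcosh_sketch is a hypothetical name, not the package's code):

```julia
# Sketch of the log-cosh formula above; note cosh can overflow for very
# large errors, which this sketch does not guard against.
logcosh_sketch(y, t) = sum(log.(cosh.(t .- y))) / length(y)
```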

LearningHorse.LossFunction.poisson — Function
Poisson(y, t; reduction="mean")

Poisson loss, which measures how well the predicted values fit the targets under a Poisson distribution. This is the expression:

\[Poisson(y, t) = \frac{\sum_{i=1}^{n} \left( y_{i}-t_{i} \ln y_{i} \right)}{n}\]
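
A direct Julia sketch of the formula (poisson_sketch is a hypothetical name; it assumes strictly positive predictions so the logarithm is defined):

```julia
# Sketch of the Poisson formula above; assumes all(y .> 0).
poisson_sketch(y, t) = sum(y .- t .* log.(y)) / length(y)
```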

LearningHorse.LossFunction.smooth_hinge — Function
smooth_hinge(y, t; reduction="mean")

Smoothed hinge loss, a differentiable variant of the hinge loss. This is the expression:

\[smoothHinge(y, t) = \frac{1}{n} \sum_{i=1}^{n} \left\{ \begin{array}{ll} 0 & (t_{i}y_{i} \geq 1) \\ \frac{1}{2}(1-t_{i}y_{i})^{2} & (0 \lt t_{i}y_{i} \lt 1) \\ \frac{1}{2} - t_{i}y_{i} & (t_{i}y_{i} \leq 0) \end{array} \right.\]
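
A minimal Julia sketch of the three cases (smooth_hinge_sketch is a hypothetical name; it assumes labels t ∈ {−1, 1} and raw scores y):

```julia
# Sketch of the smoothed hinge formula above; not LearningHorse's source.
function smooth_hinge_sketch(y::AbstractVector, t::AbstractVector)
    ty = t .* y                              # element-wise margins t_i * y_i
    ℓ(v) = v >= 1 ? zero(v) :                # correct with margin: no loss
           v > 0  ? 0.5 * (1 - v)^2 :        # quadratic (smoothed) region
                    0.5 - v                  # linear region
    sum(ℓ.(ty)) / length(ty)                 # mean reduction
end
```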
