NeuralNetwork

Basics

To build a neural network with LearningHorse, use the NetWork type.

LearningHorse.NeuralNetwork.NetWorkType
NetWork(layers...)

Connect multiple layers to build a neural network. NetWork also supports indexing. You can also add layers later using the add_layer!() function (a sketch follows the example below).

Example

julia> N = NetWork(Dense(10=>5, relu), Dense(5=>1, relu))

julia> N[1]
Dense(IO:10=>5, σ:relu)
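
You can also extend a network after it has been built with add_layer!(). The call below is only a sketch: it assumes add_layer! takes the network and the new layer as its arguments, which is not confirmed by this page.

julia> add_layer!(N, Dense(1=>1, relu))  # assumed call form: add_layer!(network, layer)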
source
LearningHorse.NeuralNetwork.@epochsMacro
@epochs n ex

This macro runs ex n times. It is mainly useful for training a NetWork over many epochs.

Example

julia> a = 1
1

julia> @epochs 1000 a += 1
progress:1000/1000

julia> a
1001

source

Layers

LearningHorse.NeuralNetwork.ConvType
Conv(kernel, in=>out, σ; stride = 1, padding = 0, set_w = "Xavier")

This is the traditional convolution layer. kernel is a tuple of integers specifying the kernel size; it must have one or two elements. in and out specify the number of input and output channels.

The input data must have dimensions WHCB (width, height, channel, batch). If your data only has dimensions WHC, you must add a batch dimension B.

stride and padding are single integers or tuples of two elements. If you specify KeepSize as padding, the input is padded so that the output has the same size as the input. set_w is "Xavier" or "He" and decides the method used to initialize the weights; this keyword works the same way as in Dense().

Example

julia> C = Conv((2, 2), 2=>2, relu)
Convolution(k:(2, 2), IO:2 => 2, σ:relu)

julia> C(rand(10, 10, 2, 5)) |> size
(9, 9, 2, 5)
Warning

When you specify KeepSize as padding, in some cases the output will be one size smaller than the input, because of the way the padding amount is computed.

julia> C = Conv((2, 2), 2=>2, relu, padding = KeepSize)
Convolution(k:(2, 2), IO:2 => 2, σ:relu)

julia> C(rand(10, 10, 2, 5)) |> size
(9, 9, 2, 5)
source
LearningHorse.NeuralNetwork.DenseType
Dense(in=>out, σ; set_w = "Xavier", set_b = zeros)

Create a traditional Dense layer, whose forward propagation is given by: y = σ.(W * x .+ b). The input x should be a vector of length in. (Sorry, training with batches is not supported yet; it will be implemented.)

Example

julia> D = Dense(5=>2, relu)
Dense(IO:5=>2, σ:relu)

julia> D(rand(Float64, 5)) |> size
(2,)
source
LearningHorse.NeuralNetwork.DropoutType
Dropout(p)

This layer applies dropout to the input data, setting each element to zero with probability p.

Example

julia> D = Dropout(0.25)
Dropout(0.25)

julia> D(rand(10))
10-element Array{Float64,1}:
 0.0
 0.3955865029078952
 0.8157710047424143
 1.0129613533211907
 0.8060508293474877
 1.1067504108970596
 0.1461289547292684
 0.0
 0.04581776023870532
 1.2794087133638332
source

Activations

LearningHorse.NeuralNetwork.σFunction
σ(x)

Standard sigmoid activation function. This function can also be called as σ. This is the expression:

\[\sigma(x) = \frac{1}{1+e^{-x}}\]
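
As a quick check of the definition above (the values follow directly from the formula; large negative and positive inputs saturate to 0 and 1):

julia> σ.([-1000.0, 0.0, 1000.0])  # applied elementwise via broadcasting
3-element Array{Float64,1}:
 0.0
 0.5
 1.0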

source
LearningHorse.NeuralNetwork.hardσFunction
hardsigmoid(x) = max(0, min(1, (x + 2.5) / 6))

Piecewise linear approximation of the sigmoid. This function can also be called as hardσ. This is the expression:

\[hardsigmoid(x) = \left\{ \begin{array}{ll} 1 & (x \geq 3.5) \\ \frac{x + 2.5}{6} & (-2.5 \lt x \lt 3.5) \\ 0 & (x \leq -2.5) \end{array} \right.\]

source
LearningHorse.NeuralNetwork.hardtanhFunction
hardtanh(x)

Piecewise linear approximation of tanh. This is the expression:

\[hardtanh(x) = \left\{ \begin{array}{ll} 1 & (x \geq 1) \\ x & (-1 \lt x \lt 1) \\ -1 & (x \leq -1) \end{array} \right.\]

source
LearningHorse.NeuralNetwork.reluFunction
relu(x) = max(0, x)

relu is the Rectified Linear Unit. This is the expression:

\[relu(x) = \left\{ \begin{array}{ll} x & (x \geq 0) \\ 0 & (x \lt 0) \end{array} \right.\]
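
For example, applied elementwise over a vector (a small sketch; the outputs follow from max(0, x)):

julia> relu.([-2.0, 0.0, 3.0])
3-element Array{Float64,1}:
 0.0
 0.0
 3.0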

source
LearningHorse.NeuralNetwork.leakyreluFunction
leakyrelu(x; α=0.01) = (x>0) ? x : α*x

Leaky Rectified Linear Unit. This is the expression:

\[leakyrelu(x) = \left\{ \begin{array}{ll} \alpha x & (x \lt 0) \\ x & (x \geq 0) \end{array} \right.\]

source
LearningHorse.NeuralNetwork.relu6Function
relu6(x)

Relu function with an upper limit of 6. This is the expression:

\[relu6(x) = \left\{ \begin{array}{ll} 6 & (x \gt 6) \\ x & (0 \leq x \leq 6) \\ 0 & (x \lt 0) \end{array} \right.\]

source
LearningHorse.NeuralNetwork.rreluType
rrelu(min, max)

Randomized Rectified Linear Unit. The expression is the same as leakyrelu, but α is a random number between min and max. Also, since this function is defined as a structure, use it as follows:

Dense(10=>5, rrelu(0.001, 0.1))
source
LearningHorse.NeuralNetwork.eluFunction
elu(x, α=1)

Exponential Linear Unit activation function. You can also specify the coefficient explicitly, e.g. elu(x, 1). This is the expression:

\[elu(x, α) = \left\{ \begin{array}{ll} x & (x \geq 0) \\ \alpha(e^x-1) & (x \lt 0) \end{array} \right.\]
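
A small numeric sketch based on the definition above (with the default α = 1, negative inputs give α(e^x - 1)):

julia> elu(2.0)
2.0

julia> elu(-1.0) ≈ exp(-1.0) - 1
true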

source
LearningHorse.NeuralNetwork.geluFunction
gelu(x)

Gaussian Error Linear Unit. This is the expression ($\Phi$ is the cumulative distribution function of the standard normal distribution):

\[gelu(x) = x\Phi(x)\]

However, in the implementation, it is calculated with the following expression.

\[\sigma(x) = \frac{1}{1+e^{-x}} \\ gelu(x) = x\sigma(1.702x)\]
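
The relation between the exact form and the approximation can be checked directly (a sketch; it assumes gelu uses the σ-based approximation exactly as written above):

julia> x = 1.0;

julia> gelu(x) ≈ x * σ(1.702x)
true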

source
LearningHorse.NeuralNetwork.seluFunction
selu(x)

Scaled Exponential Linear Unit. This is the expression:

\[\lambda = 1.0507009873554804934193349852946 \\ \alpha = 1.6732632423543772848170429916717 \\ selu(x) = \lambda \left\{ \begin{array}{ll} x & (x \geq 0) \\ \alpha(e^x-1) & (x \lt 0) \end{array} \right.\]

source
LearningHorse.NeuralNetwork.celuFunction
celu(x; α=1)

Continuously Differentiable Exponential Linear Unit. This is the expression:

\[\alpha = 1 \\ celu(x) = \left\{ \begin{array}{ll} x & (x \geq 0) \\ \alpha(e^\frac{x}{\alpha}-1) & (x \lt 0) \end{array} \right.\]

source
LearningHorse.NeuralNetwork.softshrinkFunction
softshrink(x; λ=0.5)

Soft shrinkage activation function. This is the expression:

\[\lambda=0.5 \\ softshrink(x) = \left\{ \begin{array}{ll} x-\lambda & (x \gt \lambda) \\ 0 & (-\lambda \leq x \leq \lambda) \\ x+\lambda & (x \lt -\lambda) \\ \end{array} \right.\]
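
A small numeric sketch of the three branches with the default λ = 0.5:

julia> softshrink.([1.0, 0.3, -1.0])
3-element Array{Float64,1}:
  0.5
  0.0
 -0.5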

source
LearningHorse.NeuralNetwork.treluFunction
trelu(x; θ=1)

Threshold gated Rectified Linear Unit. This is the expression:

\[\theta = 1 \\ trelu(x) = \left\{ \begin{array}{ll} x & (x \gt \theta) \\ 0 & (x \leq \theta) \end{array} \right.\]

source

Optimizers

LearningHorse.NeuralNetwork.MomentumType
Momentum(η=0.01, α=0.9, velocity)

Momentum gradient descent optimizer with learning rate η and velocity coefficient α.

Parameters

  • learning rate: η
  • velocity coefficient: α

Example
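
A minimal construction sketch based on the signature above (how the optimizer is then passed to a training routine is not shown on this page, and the velocity buffer is assumed to be created internally):

julia> opt = Momentum(0.01, 0.9);  # η = 0.01, α = 0.9; velocity is assumed to default internally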

source