Simd Library Documentation.


Simd::Neural is a C++ framework for inference and training of convolutional neural networks. More...

Namespaces

 Simd::Neural
 Contains a framework for training convolutional neural networks.
 

Data Structures

struct  Function
 Activation Function structure. More...
 
struct  Index
 Index structure. More...
 
class  Layer
 Layer class. More...
 
class  InputLayer
 InputLayer class. More...
 
class  ConvolutionalLayer
 ConvolutionalLayer class. More...
 
class  MaxPoolingLayer
 MaxPoolingLayer class. More...
 
class  FullyConnectedLayer
 FullyConnectedLayer class. More...
 
class  DropoutLayer
 DropoutLayer class. More...
 
struct  TrainOptions
 Contains a set of training options. More...
 
class  Network
 Network class. More...
 

Enumerations

enum  Type {
  Identity,
  Tanh,
  Sigmoid,
  Relu,
  LeakyRelu
}
 
enum  Type {
  Input,
  Convolutional,
  MaxPooling,
  FullyConnected,
  Dropout
}
 
enum  Method {
  Fast,
  Check,
  Train
}
 
enum  InitType { Xavier }
 
enum  LossType { Mse }
 
enum  UpdateType { AdaptiveGradient }
 

Detailed Description

Simd::Neural is a C++ framework for inference and training of convolutional neural networks.

Enumeration Type Documentation

enum Type

Describes the types of activation functions. It is used to create a Layer in a Network.

Enumerator
Identity 

Identity:

                f(x) = x;
                df(y) = 1;
Tanh 

Hyperbolic Tangent:

                f(x) = (exp(x) - exp(-x))/(exp(x) + exp(-x));
                df(y) = 1 - y*y;

See implementation details: SimdNeuralRoughTanh and SimdNeuralDerivativeTanh.

Sigmoid 

Sigmoid:

                f(x) = 1/(1 + exp(-x));
                df(y) = (1 - y)*y;

See implementation details: SimdNeuralRoughSigmoid2 and SimdNeuralDerivativeSigmoid.

Relu 

ReLU (Rectified Linear Unit):

                f(x) = max(0, x);
                df(y) = y > 0 ? 1 : 0;

See implementation details: SimdNeuralRelu and SimdNeuralDerivativeRelu.

LeakyRelu 

Leaky ReLU (Rectified Linear Unit):

                f(x) = x > 0 ? x : 0.01*x;
                df(y) = y > 0 ? 1 : 0.01;

See implementation details: SimdNeuralRelu and SimdNeuralDerivativeRelu.
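The formulas above can be written as plain reference C++ for illustration (the library's actual implementations are the SIMD-optimized kernels referenced above, such as SimdNeuralRoughTanh). Note that each derivative df takes the function's output y = f(x), not the input x:

```cpp
#include <algorithm>
#include <cmath>

// Identity: f(x) = x; df(y) = 1.
inline float identity(float x)       { return x; }
inline float identity_d(float)       { return 1.0f; }

// Hyperbolic tangent; derivative expressed through y = f(x).
inline float tanh_f(float x) { return std::tanh(x); }
inline float tanh_d(float y) { return 1.0f - y * y; }

// Sigmoid: f(x) = 1/(1 + exp(-x)); df(y) = (1 - y)*y.
inline float sigmoid(float x)   { return 1.0f / (1.0f + std::exp(-x)); }
inline float sigmoid_d(float y) { return (1.0f - y) * y; }

// ReLU: f(x) = max(0, x).
inline float relu(float x)   { return std::max(0.0f, x); }
inline float relu_d(float y) { return y > 0 ? 1.0f : 0.0f; }

// Leaky ReLU: small slope 0.01 for negative inputs.
inline float leaky_relu(float x)   { return x > 0 ? x : 0.01f * x; }
inline float leaky_relu_d(float y) { return y > 0 ? 1.0f : 0.01f; }
```

Passing y rather than x to the derivative avoids recomputing the forward pass during backpropagation, since y is already stored as the layer's output.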

enum Type

Describes types of network layers.

Enumerator
Input 

Layer type corresponding to Simd::Neural::InputLayer.

Convolutional 

Layer type corresponding to Simd::Neural::ConvolutionalLayer.

MaxPooling 

Layer type corresponding to Simd::Neural::MaxPoolingLayer.

FullyConnected 

Layer type corresponding to Simd::Neural::FullyConnectedLayer.

Dropout 

Layer type corresponding to Simd::Neural::DropoutLayer.

enum Method

Describes the method of forward propagation in a network layer.

Enumerator
Fast 

The fastest method. It is incompatible with the training process.

Check 

Performs control checks during the training process.

Train 

Forward propagation during the training process.

enum InitType

Describes the method used to initialize the weights of the neural network.

Enumerator
Xavier 

Uses fan-in and fan-out for scaling. See: Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks", Proc. AISTATS, May 2010, vol. 9, pp. 249-256.
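A common form of this scheme is the uniform Xavier/Glorot variant, sketched below in plain C++ for illustration (whether Simd uses exactly this scaling and distribution is not stated here):

```cpp
#include <cmath>
#include <random>
#include <vector>

// Xavier/Glorot uniform initialization: draw weights from
// U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)),
// which keeps activation variance roughly constant across layers.
std::vector<float> XavierInit(size_t fanIn, size_t fanOut, std::mt19937 & rng)
{
    float limit = std::sqrt(6.0f / float(fanIn + fanOut));
    std::uniform_real_distribution<float> dist(-limit, limit);
    std::vector<float> weights(fanIn * fanOut);
    for (float & w : weights)
        w = dist(rng);
    return weights;
}
```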

enum LossType

Describes the loss function.

Enumerator
Mse 

Mean-Squared-Error loss function for regression.

enum UpdateType

Describes the method of updating the weights.

Enumerator
AdaptiveGradient 

Adaptive gradient method. See: J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization", The Journal of Machine Learning Research, pp. 2121-2159, 2011.

Note
See SimdNeuralAdaptiveGradientUpdate.
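The core of the adaptive gradient (AdaGrad) method can be sketched as below. This is an illustrative plain-C++ version under the textbook formulation; the names AdaptiveGradientUpdate, alpha, and epsilon are chosen here for clarity and need not match the library's SimdNeuralAdaptiveGradientUpdate signature:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// AdaGrad update: each weight keeps a running sum of its squared gradients
// and scales the learning rate by the inverse square root of that sum, so
// weights with a history of large gradients take smaller steps.
void AdaptiveGradientUpdate(std::vector<float> & weights,
                            const std::vector<float> & gradients,
                            std::vector<float> & gradientSquareSums,
                            float alpha = 0.01f, float epsilon = 1e-8f)
{
    for (size_t i = 0; i < weights.size(); ++i)
    {
        gradientSquareSums[i] += gradients[i] * gradients[i];
        weights[i] -= alpha * gradients[i] / (std::sqrt(gradientSquareSums[i]) + epsilon);
    }
}
```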