ReLU
Header: ./include/caffe/layers/relu_layer.hpp
CPU implementation: ./src/caffe/layers/relu_layer.cpp
CUDA GPU implementation: ./src/caffe/layers/relu_layer.cu
Sample (as seen in ./models/bvlc_reference_caffenet/train_val.prototxt)
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
Given an input value x, the ReLU layer computes the output as x if x > 0 and negative_slope * x if x <= 0. When the negative_slope parameter is not set, the layer is equivalent to the standard ReLU function max(x, 0). It also supports in-place computation, meaning that the bottom and top blob may be the same, which reduces memory consumption.
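For concreteness, here is a minimal standalone C++ sketch of this element-wise computation. It is not the actual Caffe implementation in ./src/caffe/layers/relu_layer.cpp, and the function name relu_forward is hypothetical; it only illustrates the formula above.

#include <algorithm>
#include <cstddef>

// Element-wise (leaky) ReLU forward pass:
//   y = max(x, 0) + negative_slope * min(x, 0)
// With negative_slope == 0 this reduces to the standard max(x, 0).
// Passing the same pointer for bottom and top gives the in-place
// behavior described above, since each output depends only on its
// own input element, e.g. relu_forward(data, data, n, 0.0f).
void relu_forward(const float* bottom, float* top, std::size_t count,
                  float negative_slope) {
  for (std::size_t i = 0; i < count; ++i) {
    top[i] = std::max(bottom[i], 0.0f)
           + negative_slope * std::min(bottom[i], 0.0f);
  }
}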
Parameters (ReLUParameter relu_param)
negative_slope [default 0]: specifies whether to leak the negative part by multiplying it by the slope value rather than setting it to 0.

From ./src/caffe/proto/caffe.proto:

// Message that stores parameters used by ReLULayer
message ReLUParameter {
  // Allow non-zero slope for negative inputs to speed up optimization
  // Described in:
  // Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013). Rectifier nonlinearities
  // improve neural network acoustic models. In ICML Workshop on Deep Learning
  // for Audio, Speech, and Language Processing.
  optional float negative_slope = 1 [default = 0];
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 2 [default = DEFAULT];
}
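As a usage illustration, a leaky ReLU could be configured by setting negative_slope inside relu_param, extending the sample above. The slope value 0.01 here is an arbitrary illustrative choice, not a recommendation.

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
  relu_param {
    negative_slope: 0.01
  }
}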