ReLU / Rectified-Linear and Leaky-ReLU Layer
Header: ./include/caffe/layers/relu_layer.hpp
CPU implementation: ./src/caffe/layers/relu_layer.cpp
CUDA GPU implementation: ./src/caffe/layers/relu_layer.cu
Sample (as seen in ./models/bvlc_reference_caffenet/train_val.prototxt):
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}
Given an input value x, the ReLU layer computes the output as x if x > 0 and negative_slope * x if x <= 0. When the negative_slope parameter is not set, this is equivalent to the standard ReLU function max(x, 0). The layer also supports in-place computation, meaning that the bottom and top blobs may be the same, which reduces memory consumption.
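
As a reference, here is a minimal sketch of the forward computation in plain C++. It assumes raw float arrays rather than Caffe's Blob type; the actual implementation lives in the relu_layer.cpp and relu_layer.cu files listed above.

#include <algorithm>

// Leaky ReLU forward pass: y = max(x, 0) + negative_slope * min(x, 0).
// With negative_slope == 0 this reduces to the standard ReLU, max(x, 0).
void relu_forward(const float* bottom, float* top, int count,
                  float negative_slope) {
  for (int i = 0; i < count; ++i) {
    top[i] = std::max(bottom[i], 0.0f)
           + negative_slope * std::min(bottom[i], 0.0f);
  }
}

For in-place computation, the same array is passed as both bottom and top; each output overwrites the input it was computed from.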
Parameters (ReLUParameter relu_param)
From ./src/caffe/proto/caffe.proto:
negative_slope [default 0]: specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.
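
For example, a Leaky ReLU is obtained by setting negative_slope inside a relu_param block; the layer and blob names below are illustrative:

layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
  relu_param {
    negative_slope: 0.1
  }
}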