# Caffe

## ReLU / Rectified-Linear and Leaky-ReLU Layer

Given an input value x, the ReLU layer computes the output as x if x > 0 and negative_slope * x if x <= 0. When the negative slope parameter is not set, this is equivalent to the standard ReLU function, max(x, 0). The layer also supports in-place computation, meaning that the bottom and top blobs may be the same to reduce memory consumption.
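Written as a piecewise function, the computation above is:

$$
y = \begin{cases}
x & \text{if } x > 0 \\
\text{negative\_slope} \cdot x & \text{if } x \le 0
\end{cases}
$$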
* Parameters (`ReLUParameter relu_param`)
  * `negative_slope` [default 0]: specifies whether to leak the negative part by multiplying it with the slope value rather than setting it to 0.
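
A sample layer definition in prototxt form, illustrating both the in-place pattern and the leaky variant. The layer and blob names (`relu1`, `conv1`) and the slope value here are illustrative, not prescribed:

```
layer {
  name: "relu1"            # illustrative layer name
  type: "ReLU"
  bottom: "conv1"          # illustrative input blob
  top: "conv1"             # same name as bottom: computes in place
  relu_param {
    negative_slope: 0.1    # omit (or set to 0) for standard max(x, 0)
  }
}
```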
* From `./src/caffe/proto/caffe.proto`:
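
The snippet below reproduces the `ReLUParameter` message approximately as it appears in upstream caffe.proto; treat the source tree as authoritative, since the comments and the `engine` field have varied across versions:

```
// Message that stores parameters used by ReLULayer
message ReLUParameter {
  // Allow non-zero slope for negative inputs to speed up optimization
  // Described in:
  // Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013). Rectifier nonlinearities
  // improve neural network acoustic models. In ICML Workshop on Deep Learning
  // for Audio, Speech, and Language Processing.
  optional float negative_slope = 1 [default = 0];
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 2 [default = DEFAULT];
}
```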