Caffe
Class List
Here are the classes, structs, unions and interfaces with brief descriptions:
 N caffe – A layer factory that allows one to register layers. During runtime, registered layers can be called by passing a LayerParameter protobuffer to the CreateLayer function (a minimal usage sketch follows this list)
 N db
 C AbsValLayer – Computes $ y = |x| $
 C AccuracyLayer – Computes the classification accuracy for a one-of-many classification task
 C AdaDeltaSolver
 C AdaGradSolver
 C AdamSolver – An algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. Described in [1]
 C ArgMaxLayer – Computes the index of the $ K $ max values for each datum across all dimensions $ (C \times H \times W) $
 C BaseConvolutionLayer – Abstract base class that factors out the BLAS code common to ConvolutionLayer and DeconvolutionLayer
 C BaseDataLayer – Provides the base for data layers that feed blobs to the Net
 C BasePrefetchingDataLayer
 C Batch
 C BatchNormLayer – Normalizes the input to have 0-mean and/or unit (1) variance across the batch
 C BatchReindexLayer – Indexes into the input blob along its first axis
 C BiasLayer – Computes a sum of two input Blobs, with the shape of the latter Blob "broadcast" to match the shape of the former. Equivalent to tiling the latter Blob, then computing the elementwise sum
 C BilinearFiller – Fills a Blob with coefficients for bilinear interpolation
 C Blob – A wrapper around SyncedMemory holders serving as the basic computational unit through which Layers, Nets, and Solvers interact (see the usage sketch after this list)
 C BlockingQueue
 C BNLLLayer – Computes $ y = x + \log(1 + \exp(-x)) $ if $ x > 0 $; $ y = \log(1 + \exp(x)) $ otherwise
 C Caffe
 C ConcatLayer – Takes at least two Blobs and concatenates them along either the num or channel dimension, outputting the result
 C ConstantFiller – Fills a Blob with constant values $ x = 0 $
 C ContrastiveLossLayer – Computes the contrastive loss $ E = \frac{1}{2N} \sum\limits_{n=1}^N \left(y\right) d^2 + \left(1-y\right) \max \left(margin-d, 0\right)^2 $ where $ d = \left| \left| a_n - b_n \right| \right|_2 $. This can be used to train Siamese networks
 C ConvolutionLayer – Convolves the input image with a bank of learned filters, and (optionally) adds biases
 C CPUTimer
 C CropLayer – Takes a Blob and crops it to the shape specified by the second input Blob, across all dimensions after the specified axis
 C DataLayer
 C DataTransformer – Applies common transformations to the input data, such as scaling, mirroring, and subtracting the image mean
 C DeconvolutionLayer – Convolves the input with a bank of learned filters, and (optionally) adds biases, treating filters and convolution parameters in the opposite sense to ConvolutionLayer
 C DropoutLayer – During training only, sets a random portion of $ x $ to 0, adjusting the rest of the vector magnitude accordingly
 C DummyDataLayer – Provides data to the Net generated by a Filler
 C EltwiseLayer – Computes elementwise operations, such as product and sum, across multiple input Blobs
 C ELULayer – Exponential Linear Unit non-linearity $ y = \left\{ \begin{array}{lr} x & \mathrm{if} \; x > 0 \\ \alpha (\exp(x)-1) & \mathrm{if} \; x \le 0 \end{array} \right. $
 C EmbedLayer – A layer for learning "embeddings" of one-hot vector input. Equivalent to an InnerProductLayer with one-hot vectors as input, but for efficiency the input is the "hot" index of each column itself
 C EuclideanLossLayer – Computes the Euclidean (L2) loss $ E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n \right| \right|_2^2 $ for real-valued regression tasks
 C ExpLayer – Computes $ y = \gamma ^ {\alpha x + \beta} $, as specified by the scale $ \alpha $, shift $ \beta $, and base $ \gamma $
 C Filler – Fills a Blob with constant or randomly-generated data
 C FilterLayer – Takes two or more Blobs, interprets the last Blob as a selector, and filters the remaining Blobs accordingly using the selector data (0 means the corresponding item is filtered out; non-zero means it is kept)
 C FlattenLayer – Reshapes the input Blob into flat vectors
 C GaussianFiller – Fills a Blob with Gaussian-distributed values $ x \sim N(\mu, \sigma^2) $
 C HDF5DataLayer – Provides data to the Net from HDF5 files
 C HDF5OutputLayer – Writes blobs to disk as HDF5 files
 C HingeLossLayer – Computes the hinge loss for a one-of-many classification task
 C Im2colLayer – A helper for image operations that rearranges image regions into column vectors. Used by ConvolutionLayer to perform convolution by matrix multiplication
 C ImageDataLayer – Provides data to the Net from image files
 C InfogainLossLayer – A generalization of MultinomialLogisticLossLayer that takes an "information gain" (infogain) matrix specifying the "value" of all label pairs
 C InnerProductLayer – Also known as a "fully-connected" layer, computes an inner product with a set of learned weights, and (optionally) adds biases
 C InputLayer – Provides data to the Net by assigning tops directly
 C InternalThread
 C Layer – An interface for the units of computation which can be composed into a Net
 C LayerRegisterer
 C LayerRegistry
 C LogLayer – Computes $ y = \log_{\gamma}(\alpha x + \beta) $, as specified by the scale $ \alpha $, shift $ \beta $, and base $ \gamma $
 C LossLayer – An interface for Layers that take two Blobs as input, usually (1) predictions and (2) ground-truth labels, and output a singleton Blob representing the loss
 C LRNLayer – Normalizes the input in a local region across or within feature maps
 C LSTMLayer – Processes sequential inputs using a "Long Short-Term Memory" (LSTM) [1] style recurrent neural network (RNN). Implemented by unrolling the LSTM computation through time
 C LSTMUnitLayer – A helper for LSTMLayer: computes a single timestep of the non-linearity of the LSTM, producing the updated cell and hidden states
 C MemoryDataLayer – Provides data to the Net from memory
 C MSRAFiller – Fills a Blob with values $ x \sim N(0, \sigma^2) $ where $ \sigma^2 $ is set inversely proportional to the number of incoming nodes, outgoing nodes, or their average
 C MultinomialLogisticLossLayer – Computes the multinomial logistic loss for a one-of-many classification task, directly taking a predicted probability distribution as input
 C MVNLayer – Normalizes the input to have 0-mean and/or unit (1) variance
 C NesterovSolver
 C Net – Connects Layers together into a directed acyclic graph (DAG) specified by a NetParameter
 C NeuronLayer – An interface for layers that take one blob as input ($ x $) and produce one equally-sized blob as output ($ y $), where each element of the output depends only on the corresponding input element
 C ParameterLayer
 C PoolingLayer – Pools the input image by taking the max, average, etc. within regions
 C PositiveUnitballFiller – Fills a Blob with values $ x \in [0, 1] $ such that $ \forall i \sum_j x_{ij} = 1 $
 C PowerLayer – Computes $ y = (\alpha x + \beta) ^ \gamma $, as specified by the scale $ \alpha $, shift $ \beta $, and power $ \gamma $
 C PReLULayer – Parameterized Rectified Linear Unit non-linearity $ y_i = \max(0, x_i) + a_i \min(0, x_i) $. The differences from ReLULayer are 1) negative slopes are learnable through backprop and 2) negative slopes can vary across channels. The number of axes of the input blob should be greater than or equal to 2. The 1st axis (0-based) is seen as channels
 C PythonLayer
 C RecurrentLayer – An abstract class for implementing recurrent behavior inside of an unrolled network. This Layer type cannot be instantiated – instead, you should use one of its implementations which defines the recurrent architecture, such as RNNLayer or LSTMLayer
 C ReductionLayer – Computes "reductions" – operations that return a scalar output Blob for an input Blob of arbitrary size, such as the sum, absolute sum, and sum of squares
 C ReLULayer – Rectified Linear Unit non-linearity $ y = \max(0, x) $. The simple max is fast to compute, and the function does not saturate
 C ReshapeLayer
 C RMSPropSolver
 C RNNLayer – Processes time-varying inputs using a simple recurrent neural network (RNN). Implemented as a network unrolling the RNN computation in time
 C ScaleLayer – Computes the elementwise product of two input Blobs, with the shape of the latter Blob "broadcast" to match the shape of the former. Equivalent to tiling the latter Blob, then computing the elementwise product. Note: for efficiency and convenience, this layer can additionally perform a "broadcast" sum too when bias_term: true is set
 C SGDSolver – Optimizes the parameters of a Net using stochastic gradient descent (SGD) with momentum (the update rule is written out after this list)
 C SigmoidCrossEntropyLossLayer – Computes the cross-entropy (logistic) loss $ E = \frac{-1}{N} \sum\limits_{n=1}^N \left[ p_n \log \hat{p}_n + (1 - p_n) \log(1 - \hat{p}_n) \right] $, often used for predicting targets interpreted as probabilities
 C SigmoidLayer – Sigmoid function non-linearity $ y = (1 + \exp(-x))^{-1} $, a classic choice in neural networks
 C SignalHandler
 C SilenceLayer – Ignores bottom blobs while producing no top blobs. (This is useful to suppress outputs during testing.)
 C SliceLayer – Takes a Blob and slices it along either the num or channel dimension, outputting multiple sliced Blob results
 C SoftmaxLayer – Computes the softmax function
 C SoftmaxWithLossLayer – Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes
 C Solver – An interface for classes that perform optimization on Nets
 C SolverRegisterer
 C SolverRegistry
 C SplitLayer – Creates a "split" path in the network by copying the bottom Blob into multiple top Blobs to be used by multiple consuming layers
 C SPPLayer – Does spatial pyramid pooling on the input image by taking the max, average, etc. within regions, so that the resulting vectors for differently sized images have the same length
 C SyncedMemory – Manages memory allocation and synchronization between the host (CPU) and device (GPU)
 C TanHLayer – TanH hyperbolic tangent non-linearity $ y = \frac{\exp(2x) - 1}{\exp(2x) + 1} $, popular in auto-encoders
 C ThresholdLayer – Tests whether the input exceeds a threshold: outputs 1 for inputs above threshold; 0 otherwise
 C TileLayer – Copies a Blob along specified dimensions
 C Timer
 C UniformFiller – Fills a Blob with uniformly distributed values $ x \sim U(a, b) $
 C WindowDataLayer – Provides data to the Net from windows of image files, specified by a window data file. This layer is DEPRECATED and only kept for archival purposes for use by the original R-CNN
 C XavierFiller – Fills a Blob with values $ x \sim U(-a, +a) $ where $ a $ is set inversely proportional to the number of incoming nodes, outgoing nodes, or their average
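
For the caffe namespace entry above, a minimal sketch of creating a registered layer by passing a LayerParameter to LayerRegistry::CreateLayer; the "ReLU" type string, the layer name, and the trivial main() are illustrative choices, not part of the generated reference.

    #include "caffe/layer.hpp"
    #include "caffe/layer_factory.hpp"
    #include "caffe/proto/caffe.pb.h"

    int main() {
      // Describe the layer to instantiate; the type string must match a
      // name registered with the factory (e.g. via REGISTER_LAYER_CLASS).
      caffe::LayerParameter param;
      param.set_name("relu1");
      param.set_type("ReLU");
      // CreateLayer looks the type up in the registry and builds the layer.
      boost::shared_ptr<caffe::Layer<float> > layer =
          caffe::LayerRegistry<float>::CreateLayer(param);
      return layer ? 0 : 1;
    }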
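
For the Blob entry, a minimal sketch of host-side Blob usage, assuming a linked libcaffe; the shape and fill value are arbitrary.

    #include "caffe/blob.hpp"

    int main() {
      caffe::Blob<float> blob(1, 3, 4, 5);    // num, channels, height, width
      float* data = blob.mutable_cpu_data();  // SyncedMemory allocates lazily
      for (int i = 0; i < blob.count(); ++i) {
        data[i] = 0.5f;                       // fill all 60 elements
      }
      // asum_data() is the absolute sum (L1 norm) of the data: 60 * 0.5 = 30.
      return blob.asum_data() == 30.0f ? 0 : 1;
    }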
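
And for SGDSolver, the momentum update as given in the Caffe solver documentation, where $ V_t $ is the update history, $ \mu $ the momentum, and $ \alpha $ the learning rate:

    $ V_{t+1} = \mu V_t - \alpha \nabla L(W_t) $
    $ W_{t+1} = W_t + V_{t+1} $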