caffe::Blob< Dtype > Class Template Reference

A wrapper around SyncedMemory holders serving as the basic computational unit through which Layers, Nets, and Solvers interact. More...

#include <blob.hpp>

Public Member Functions

 Blob (const int num, const int channels, const int height, const int width)
 Deprecated; use Blob(const vector<int>& shape).
 
 Blob (const vector< int > &shape)
 
void Reshape (const int num, const int channels, const int height, const int width)
 Deprecated; use Reshape(const vector<int>& shape).
 
void Reshape (const vector< int > &shape)
 Change the dimensions of the blob, allocating new memory if necessary. More...
 
void Reshape (const BlobShape &shape)
 
void ReshapeLike (const Blob &other)
 
string shape_string () const
 
const vector< int > & shape () const
 
int shape (int index) const
 Returns the dimension of the index-th axis (or the negative index-th axis from the end, if index is negative). More...
 
int num_axes () const
 
int count () const
 
int count (int start_axis, int end_axis) const
 Compute the volume of a slice; i.e., the product of dimensions among a range of axes. More...
 
int count (int start_axis) const
 Compute the volume of a slice spanning from a particular first axis to the final axis. More...
 
int CanonicalAxisIndex (int axis_index) const
 Returns the 'canonical' version of a (usually) user-specified axis, allowing for negative indexing (e.g., -1 for the last axis). More...
 
int num () const
 Deprecated legacy shape accessor num: use shape(0) instead.
 
int channels () const
 Deprecated legacy shape accessor channels: use shape(1) instead.
 
int height () const
 Deprecated legacy shape accessor height: use shape(2) instead.
 
int width () const
 Deprecated legacy shape accessor width: use shape(3) instead.
 
int LegacyShape (int index) const
 
int offset (const int n, const int c=0, const int h=0, const int w=0) const
 
int offset (const vector< int > &indices) const
 
void CopyFrom (const Blob< Dtype > &source, bool copy_diff=false, bool reshape=false)
 Copy from a source Blob. More...
 
Dtype data_at (const int n, const int c, const int h, const int w) const
 
Dtype diff_at (const int n, const int c, const int h, const int w) const
 
Dtype data_at (const vector< int > &index) const
 
Dtype diff_at (const vector< int > &index) const
 
const shared_ptr< SyncedMemory > & data () const
 
const shared_ptr< SyncedMemory > & diff () const
 
const Dtype * cpu_data () const
 
void set_cpu_data (Dtype *data)
 
const int * gpu_shape () const
 
const Dtype * gpu_data () const
 
void set_gpu_data (Dtype *data)
 
const Dtype * cpu_diff () const
 
const Dtype * gpu_diff () const
 
Dtype * mutable_cpu_data ()
 
Dtype * mutable_gpu_data ()
 
Dtype * mutable_cpu_diff ()
 
Dtype * mutable_gpu_diff ()
 
void Update ()
 
void FromProto (const BlobProto &proto, bool reshape=true)
 
void ToProto (BlobProto *proto, bool write_diff=false) const
 
Dtype asum_data () const
 Compute the sum of absolute values (L1 norm) of the data.
 
Dtype asum_diff () const
 Compute the sum of absolute values (L1 norm) of the diff.
 
Dtype sumsq_data () const
 Compute the sum of squares (L2 norm squared) of the data.
 
Dtype sumsq_diff () const
 Compute the sum of squares (L2 norm squared) of the diff.
 
void scale_data (Dtype scale_factor)
 Scale the blob data by a constant factor.
 
void scale_diff (Dtype scale_factor)
 Scale the blob diff by a constant factor.
 
void ShareData (const Blob &other)
 Set the data_ shared_ptr to point to the SyncedMemory holding the data_ of Blob other – useful in Layers which simply perform a copy in their Forward pass. More...
 
void ShareDiff (const Blob &other)
 Set the diff_ shared_ptr to point to the SyncedMemory holding the diff_ of Blob other – useful in Layers which simply perform a copy in their Forward pass. More...
 
bool ShapeEquals (const BlobProto &other)
 
template<>
void Update ()
 
template<>
void Update ()
 
template<>
unsigned int asum_data () const
 
template<>
int asum_data () const
 
template<>
unsigned int asum_diff () const
 
template<>
int asum_diff () const
 
template<>
unsigned int sumsq_data () const
 
template<>
int sumsq_data () const
 
template<>
unsigned int sumsq_diff () const
 
template<>
int sumsq_diff () const
 
template<>
void scale_data (unsigned int scale_factor)
 
template<>
void scale_data (int scale_factor)
 
template<>
void scale_diff (unsigned int scale_factor)
 
template<>
void scale_diff (int scale_factor)
 
template<>
void ToProto (BlobProto *proto, bool write_diff) const
 
template<>
void ToProto (BlobProto *proto, bool write_diff) const
 

Protected Member Functions

 DISABLE_COPY_AND_ASSIGN (Blob)
 

Protected Attributes

shared_ptr< SyncedMemory > data_
 
shared_ptr< SyncedMemory > diff_
 
shared_ptr< SyncedMemory > shape_data_
 
vector< int > shape_
 
int count_
 
int capacity_
 

Detailed Description

template<typename Dtype>
class caffe::Blob< Dtype >

A wrapper around SyncedMemory holders serving as the basic computational unit through which Layers, Nets, and Solvers interact.

TODO(dox): more thorough description.
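
Pending that description, a minimal usage sketch (not part of the generated documentation; it assumes Caffe is built and caffe/blob.hpp is on the include path):

    #include "caffe/blob.hpp"

    int main() {
      // A 4-D blob in the legacy (num, channels, height, width) layout.
      caffe::Blob<float> blob(2, 3, 4, 5);

      // Write through the mutable CPU pointer; SyncedMemory tracks whether
      // the freshest copy currently lives on the CPU or the GPU.
      float* data = blob.mutable_cpu_data();
      for (int i = 0; i < blob.count(); ++i) {
        data[i] = static_cast<float>(i);
      }

      // Read back one element via the legacy (n, c, h, w) offset helper.
      const float* ro = blob.cpu_data();
      float v = ro[blob.offset(1, 2, 3, 4)];
      (void)v;  // suppress unused-variable warnings in this sketch
      return 0;
    }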

Member Function Documentation

◆ CanonicalAxisIndex()

template<typename Dtype>
int caffe::Blob< Dtype >::CanonicalAxisIndex ( int  axis_index) const
inline

Returns the 'canonical' version of a (usually) user-specified axis, allowing for negative indexing (e.g., -1 for the last axis).

Parameters
axis_index: the axis index. If 0 <= index < num_axes(), return index. If -num_axes <= index <= -1, return (num_axes() - (-index)), e.g., the last axis index (num_axes() - 1) if index == -1, the second to last if index == -2, etc. Dies on out of range index.
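
For illustration (a sketch assuming a 4-axis blob):

    caffe::Blob<float> b(2, 3, 4, 5);  // num_axes() == 4
    b.CanonicalAxisIndex(0);    // returns 0
    b.CanonicalAxisIndex(-1);   // returns 3, the last axis
    b.CanonicalAxisIndex(-4);   // returns 0, same as the first axis
    // b.CanonicalAxisIndex(4) or b.CanonicalAxisIndex(-5) would die, as described above.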

◆ CopyFrom()

template<typename Dtype>
void caffe::Blob< Dtype >::CopyFrom ( const Blob< Dtype > & source, bool copy_diff = false, bool reshape = false )

Copy from a source Blob.

Parameters
source: the Blob to copy from
copy_diff: if false, copy the data; if true, copy the diff
reshape: if false, require this Blob to be pre-shaped to source's shape (and die otherwise); if true, Reshape this Blob to source's shape if necessary
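
A brief sketch of the flag combinations (the blob shapes here are illustrative):

    caffe::Blob<float> src(1, 3, 8, 8);
    caffe::Blob<float> dst(1, 1, 1, 1);

    // Copy the data and let CopyFrom reshape dst to match src.
    dst.CopyFrom(src, /*copy_diff=*/false, /*reshape=*/true);

    // Copy the diff; shapes already match, so reshape can stay false.
    // With reshape=false, a shape mismatch is a fatal error.
    dst.CopyFrom(src, /*copy_diff=*/true, /*reshape=*/false);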

◆ count() [1/2]

template<typename Dtype>
int caffe::Blob< Dtype >::count ( int start_axis, int end_axis ) const
inline

Compute the volume of a slice; i.e., the product of dimensions among a range of axes.

Parameters
start_axis: The first axis to include in the slice.
end_axis: The first axis to exclude from the slice.
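
For example, with the usual N x C x H x W layout:

    caffe::Blob<float> b(2, 3, 4, 5);
    b.count(0, 4);  // 2*3*4*5 = 120, same as count()
    b.count(2, 4);  // 4*5 = 20, the spatial volume H*W
    b.count(1, 1);  // 1, an empty axis range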

◆ count() [2/2]

template<typename Dtype>
int caffe::Blob< Dtype >::count ( int  start_axis) const
inline

Compute the volume of a slice spanning from a particular first axis to the final axis.

Parameters
start_axis: The first axis to include in the slice.
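
For example, with the same 2 x 3 x 4 x 5 blob as above:

    caffe::Blob<float> b(2, 3, 4, 5);
    b.count(1);  // 3*4*5 = 60, the size of one item (everything but the num axis)
    b.count(0);  // 120, equivalent to count()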

◆ Reshape()

template<typename Dtype >
void caffe::Blob< Dtype >::Reshape ( const vector< int > &  shape)

Change the dimensions of the blob, allocating new memory if necessary.

This function can be called both to create an initial allocation of memory and to adjust the dimensions of a top blob during Layer::Reshape or Layer::Forward. When changing the size of the blob, memory will only be reallocated if sufficient memory does not already exist, and excess memory will never be freed.

Note that reshaping an input blob and immediately calling Net::Backward is an error; either Net::Forward or Net::Reshape needs to be called to propagate the new input shape to higher layers.
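
A sketch of the grow/shrink behaviour described above:

    caffe::Blob<float> b(2, 3, 4, 5);    // initial allocation: 120 elements
    std::vector<int> bigger(4);
    bigger[0] = 2; bigger[1] = 3; bigger[2] = 8; bigger[3] = 8;
    b.Reshape(bigger);                   // 384 elements: capacity grows, memory is reallocated
    b.Reshape(2, 3, 2, 2);               // 24 elements: logical size shrinks, memory is kept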

◆ shape()

template<typename Dtype>
int caffe::Blob< Dtype >::shape ( int  index) const
inline

Returns the dimension of the index-th axis (or the negative index-th axis from the end, if index is negative).

Parameters
index: the axis index, which may be negative as it will be "canonicalized" using CanonicalAxisIndex. Dies on out of range index.
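
For instance:

    caffe::Blob<float> b(2, 3, 4, 5);
    b.shape(0);   // 2
    b.shape(-1);  // 5, the last axis
    b.shape(-3);  // 3, same as b.shape(1)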

◆ ShareData()

template<typename Dtype >
void caffe::Blob< Dtype >::ShareData ( const Blob< Dtype > &  other)

Set the data_ shared_ptr to point to the SyncedMemory holding the data_ of Blob other – useful in Layers which simply perform a copy in their Forward pass.

This releases the SyncedMemory previously holding this Blob's data_: assigning the shared_ptr with "=" drops this Blob's reference, so that memory is deallocated once no other Blob still shares it.
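
A sketch of how a copy-style Forward pass might use this (the helper name is illustrative, not part of Caffe):

    // Hypothetical helper: expose src's data buffer through dst without copying.
    template <typename Dtype>
    void ForwardAsCopy(const caffe::Blob<Dtype>& src, caffe::Blob<Dtype>* dst) {
      dst->ReshapeLike(src);   // shapes must agree for the accessors to line up
      dst->ShareData(src);     // dst->cpu_data()/gpu_data() now alias src's SyncedMemory
    }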

◆ ShareDiff()

template<typename Dtype >
void caffe::Blob< Dtype >::ShareDiff ( const Blob< Dtype > &  other)

Set the diff_ shared_ptr to point to the SyncedMemory holding the diff_ of Blob other – useful in Layers which simply perform a copy in their Forward pass.

This releases the SyncedMemory previously holding this Blob's diff_: assigning the shared_ptr with "=" drops this Blob's reference, so that memory is deallocated once no other Blob still shares it.
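
The gradient-side counterpart of the sketch above (again a hypothetical helper):

    // Hypothetical helper: make dst's diff buffer alias src's diff buffer.
    template <typename Dtype>
    void ShareGradient(const caffe::Blob<Dtype>& src, caffe::Blob<Dtype>* dst) {
      dst->ReshapeLike(src);
      dst->ShareDiff(src);     // gradients written via dst land in src's diff
    }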


The documentation for this class was generated from the following files: