Caffe
caffe::Net< Dtype > Class Template Reference

Connects Layers together into a directed acyclic graph (DAG) specified by a NetParameter.

#include <net.hpp>

Classes

class  Callback
 

Public Member Functions

 Net (const NetParameter &param)
 
 Net (const string &param_file, Phase phase, const int level=0, const vector< string > *stages=NULL)
 
void Init (const NetParameter &param)
 Initialize a network with a NetParameter.
 
const vector< Blob< Dtype > * > & Forward (Dtype *loss=NULL)
 Run Forward and return the result.
 
const vector< Blob< Dtype > * > & ForwardPrefilled (Dtype *loss=NULL)
 DEPRECATED; use Forward() instead.
 
Dtype ForwardFromTo (int start, int end)
 
Dtype ForwardFrom (int start)
 
Dtype ForwardTo (int end)
 
const vector< Blob< Dtype > * > & Forward (const vector< Blob< Dtype > * > &bottom, Dtype *loss=NULL)
 DEPRECATED; set input blobs then use Forward() instead.
 
void ClearParamDiffs ()
 Zeroes out the diffs of all net parameters. Should be run before Backward.
 
void Backward ()
 
void BackwardFromTo (int start, int end)
 
void BackwardFrom (int start)
 
void BackwardTo (int end)
 
void Reshape ()
 Reshape all layers from bottom to top.
 
Dtype ForwardBackward ()
 
void Update ()
 Updates the network weights based on the diff values computed.
 
void ShareWeights ()
 Shares weight data of owner blobs with shared blobs.
 
void ShareTrainedLayersWith (const Net *other)
 For an already initialized net, implicitly copies (i.e., using no additional memory) the pre-trained layers from another Net.
 
void CopyTrainedLayersFrom (const NetParameter &param)
 For an already initialized net, copies the pre-trained layers from another Net.
 
void CopyTrainedLayersFrom (const string trained_filename)
 
void CopyTrainedLayersFromBinaryProto (const string trained_filename)
 
void CopyTrainedLayersFromHDF5 (const string trained_filename)
 
void ToProto (NetParameter *param, bool write_diff=false) const
 Writes the net to a proto.
 
void ToHDF5 (const string &filename, bool write_diff=false) const
 Writes the net to an HDF5 file.
 
const string & name () const
 returns the network name.
 
const vector< string > & layer_names () const
 returns the layer names
 
const vector< string > & blob_names () const
 returns the blob names
 
const vector< shared_ptr< Blob< Dtype > > > & blobs () const
 returns the blobs
 
const vector< shared_ptr< Layer< Dtype > > > & layers () const
 returns the layers
 
Phase phase () const
 returns the phase: TRAIN or TEST
 
const vector< vector< Blob< Dtype > * > > & bottom_vecs () const
 returns the bottom vecs for each layer; usually you won't need this unless you are doing per-layer checks such as gradient checks.
 
const vector< vector< Blob< Dtype > * > > & top_vecs () const
 returns the top vecs for each layer; usually you won't need this unless you are doing per-layer checks such as gradient checks.
 
const vector< int > & top_ids (int i) const
 returns the ids of the top blobs of layer i
 
const vector< int > & bottom_ids (int i) const
 returns the ids of the bottom blobs of layer i
 
const vector< vector< bool > > & bottom_need_backward () const
 
const vector< Dtype > & blob_loss_weights () const
 
const vector< bool > & layer_need_backward () const
 
const vector< shared_ptr< Blob< Dtype > > > & params () const
 returns the parameters
 
const vector< Blob< Dtype > * > & learnable_params () const
 
const vector< float > & params_lr () const
 returns the learnable parameter learning rate multipliers
 
const vector< bool > & has_params_lr () const
 
const vector< float > & params_weight_decay () const
 returns the learnable parameter decay multipliers
 
const vector< bool > & has_params_decay () const
 
const map< string, int > & param_names_index () const
 
const vector< int > & param_owners () const
 
const vector< string > & param_display_names () const
 
int num_inputs () const
 Input and output blob numbers.
 
int num_outputs () const
 
const vector< Blob< Dtype > * > & input_blobs () const
 
const vector< Blob< Dtype > * > & output_blobs () const
 
const vector< int > & input_blob_indices () const
 
const vector< int > & output_blob_indices () const
 
bool has_blob (const string &blob_name) const
 
const shared_ptr< Blob< Dtype > > blob_by_name (const string &blob_name) const
 
bool has_layer (const string &layer_name) const
 
const shared_ptr< Layer< Dtype > > layer_by_name (const string &layer_name) const
 
void set_debug_info (const bool value)
 
const vector< Callback * > & before_forward () const
 
void add_before_forward (Callback *value)
 
const vector< Callback * > & after_forward () const
 
void add_after_forward (Callback *value)
 
const vector< Callback * > & before_backward () const
 
void add_before_backward (Callback *value)
 
const vector< Callback * > & after_backward () const
 
void add_after_backward (Callback *value)
 

Static Public Member Functions

static void FilterNet (const NetParameter &param, NetParameter *param_filtered)
 Remove layers that the user specified should be excluded given the current phase, level, and stage.
 
static bool StateMeetsRule (const NetState &state, const NetStateRule &rule, const string &layer_name)
 return whether NetState state meets NetStateRule rule
 

Protected Member Functions

void AppendTop (const NetParameter &param, const int layer_id, const int top_id, set< string > *available_blobs, map< string, int > *blob_name_to_idx)
 Append a new top blob to the net.
 
int AppendBottom (const NetParameter &param, const int layer_id, const int bottom_id, set< string > *available_blobs, map< string, int > *blob_name_to_idx)
 Append a new bottom blob to the net.
 
void AppendParam (const NetParameter &param, const int layer_id, const int param_id)
 Append a new parameter blob to the net.
 
void ForwardDebugInfo (const int layer_id)
 Helper for displaying debug info in Forward.
 
void BackwardDebugInfo (const int layer_id)
 Helper for displaying debug info in Backward.
 
void UpdateDebugInfo (const int param_id)
 Helper for displaying debug info in Update.
 
 DISABLE_COPY_AND_ASSIGN (Net)
 

Protected Attributes

string name_
 The network name.
 
Phase phase_
 The phase: TRAIN or TEST.
 
vector< shared_ptr< Layer< Dtype > > > layers_
 Individual layers in the net.
 
vector< string > layer_names_
 
map< string, int > layer_names_index_
 
vector< bool > layer_need_backward_
 
vector< shared_ptr< Blob< Dtype > > > blobs_
 The blobs storing intermediate results between the layers.
 
vector< string > blob_names_
 
map< string, int > blob_names_index_
 
vector< bool > blob_need_backward_
 
vector< vector< Blob< Dtype > * > > bottom_vecs_
 
vector< vector< int > > bottom_id_vecs_
 
vector< vector< bool > > bottom_need_backward_
 
vector< vector< Blob< Dtype > * > > top_vecs_
 top_vecs stores the vectors containing the output for each layer
 
vector< vector< int > > top_id_vecs_
 
vector< Dtype > blob_loss_weights_
 
vector< vector< int > > param_id_vecs_
 
vector< int > param_owners_
 
vector< string > param_display_names_
 
vector< pair< int, int > > param_layer_indices_
 
map< string, int > param_names_index_
 
vector< int > net_input_blob_indices_
 blob indices for the input and the output of the net
 
vector< int > net_output_blob_indices_
 
vector< Blob< Dtype > * > net_input_blobs_
 
vector< Blob< Dtype > * > net_output_blobs_
 
vector< shared_ptr< Blob< Dtype > > > params_
 The parameters in the network.
 
vector< Blob< Dtype > * > learnable_params_
 
vector< int > learnable_param_ids_
 
vector< float > params_lr_
 the learning rate multipliers for learnable_params_
 
vector< bool > has_params_lr_
 
vector< float > params_weight_decay_
 the weight decay multipliers for learnable_params_
 
vector< bool > has_params_decay_
 
size_t memory_used_
 The bytes of memory used by this net.
 
bool debug_info_
 Whether to compute and display debug info for the net.
 
vector< Callback * > before_forward_
 
vector< Callback * > after_forward_
 
vector< Callback * > before_backward_
 
vector< Callback * > after_backward_
 

Detailed Description

template<typename Dtype>
class caffe::Net< Dtype >

Connects Layers together into a directed acyclic graph (DAG) specified by a NetParameter.

TODO(dox): more thorough description.
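
A minimal usage sketch for inference; the prototxt, caffemodel, and blob names below are placeholders:

    #include <vector>
    #include "caffe/net.hpp"

    int main() {
      using caffe::Blob;
      using caffe::Net;

      // File names are placeholders; TEST selects the inference-time graph.
      Net<float> net("deploy.prototxt", caffe::TEST);
      net.CopyTrainedLayersFrom("weights.caffemodel");

      // For a deploy-style net, fill the input blob before calling Forward.
      if (net.num_inputs() > 0) {
        float* input = net.input_blobs()[0]->mutable_cpu_data();
        // ... copy preprocessed data into input ...
        (void) input;
      }

      float loss = 0;
      const std::vector<Blob<float>*>& outputs = net.Forward(&loss);

      // Look up a specific output blob by name ("prob" is a placeholder).
      if (net.has_blob("prob")) {
        const float* prob = net.blob_by_name("prob")->cpu_data();
        (void) prob;
      }
      (void) outputs;
      return 0;
    }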

Member Function Documentation

◆ Backward()

template<typename Dtype >
void caffe::Net< Dtype >::Backward ( )

Backward takes no inputs and produces no outputs, since it solely computes the gradients w.r.t. the parameters; the data it needs has already been provided during the forward pass.
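
A sketch of one manual gradient step built from these calls; normally a Solver drives this loop, and the prototxt file name below is a placeholder:

    #include "caffe/net.hpp"

    // Hedged sketch: assumes a TRAIN-phase net whose data layers supply the inputs.
    void ManualStep() {
      caffe::Net<float> net("train_val.prototxt", caffe::TRAIN);
      net.ClearParamDiffs();   // zero the parameter diffs before accumulating
      float loss = 0;
      net.Forward(&loss);      // forward pass; inputs come from the data layers
      net.Backward();          // gradients w.r.t. the parameters, no arguments
      net.Update();            // applies the raw diffs (a Solver scales them first)
      // net.ForwardBackward() combines the two passes and returns the loss.
    }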

◆ ForwardFromTo()

template<typename Dtype >
Dtype caffe::Net< Dtype >::ForwardFromTo ( int start, int end )

The From and To variants of Forward and Backward operate on the (topological) ordering by which the net is specified. For general DAG networks, note that (1) computing from one layer to another might entail extra computation on unrelated branches, and (2) computation starting in the middle may be incorrect if all of the layers of a fan-in are not included.
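
A sketch of a split forward pass over an already initialized net; the split index (2) is an arbitrary illustration that assumes the net has more than three layers, and both bounds of ForwardFromTo are inclusive:

    #include "caffe/net.hpp"

    // Sketch only: "net" must already be initialized.
    void SplitForward(caffe::Net<float>& net) {
      const int last = static_cast<int>(net.layers().size()) - 1;
      net.ForwardTo(2);                   // runs layers 0..2
      float rest = net.ForwardFrom(3);    // runs layers 3..last on the blobs above
      // A single full pass over the same range:
      float full = net.ForwardFromTo(0, last);
      (void) rest; (void) full;
    }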

◆ Reshape()

template<typename Dtype >
void caffe::Net< Dtype >::Reshape ( )

Reshape all layers from bottom to top.

This is useful to propagate changes to layer sizes without running a forward pass, e.g. to compute output feature size.
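
For example, a common use is changing the input batch size and letting every layer recompute its output shape, as in this sketch (assuming the net has at least one input blob):

    #include "caffe/net.hpp"

    // Sketch: resize the first input blob, then propagate shapes without a forward pass.
    void SetBatchSize(caffe::Net<float>& net, int new_batch) {
      caffe::Blob<float>* input = net.input_blobs()[0];
      input->Reshape(new_batch, input->channels(),
                     input->height(), input->width());
      net.Reshape();  // every layer recomputes its top blob shapes
    }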

◆ ShareWeights()

template<typename Dtype >
void caffe::Net< Dtype >::ShareWeights ( )

Shares weight data of owner blobs with shared blobs.

Note: this is called by Net::Init, and thus should normally not be called manually.

Member Data Documentation

◆ blob_loss_weights_

template<typename Dtype >
vector<Dtype> caffe::Net< Dtype >::blob_loss_weights_
protected

The weight in the loss (or objective) function of each net blob, indexed by blob_id.

◆ bottom_vecs_

template<typename Dtype >
vector<vector<Blob<Dtype>*> > caffe::Net< Dtype >::bottom_vecs_
protected

bottom_vecs stores the vectors containing the input for each layer. They don't actually host the blobs (blobs_ does), so we simply store pointers.

◆ learnable_param_ids_

template<typename Dtype >
vector<int> caffe::Net< Dtype >::learnable_param_ids_
protected

The mapping from params_ -> learnable_params_: we have learnable_param_ids_.size() == params_.size(), and learnable_params_[learnable_param_ids_[i]] == params_[i].get() if and only if params_[i] is an "owner"; otherwise, params_[i] is a sharer and learnable_params_[learnable_param_ids_[i]] gives its owner.
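
A sketch of that invariant written as a check. It would only compile with access to the protected members (e.g. from inside Net or a test fixture), and it assumes param_owners_ uses -1 to mark owner blobs; the helper name is hypothetical:

    #include <vector>
    #include "caffe/blob.hpp"
    #include "glog/logging.h"

    // Hypothetical helper; the four vectors mirror the protected members above.
    template <typename Dtype>
    void CheckParamOwnership(
        const std::vector<boost::shared_ptr<caffe::Blob<Dtype> > >& params,
        const std::vector<int>& learnable_param_ids,
        const std::vector<caffe::Blob<Dtype>*>& learnable_params,
        const std::vector<int>& param_owners) {
      CHECK_EQ(params.size(), learnable_param_ids.size());
      for (int i = 0; i < static_cast<int>(params.size()); ++i) {
        caffe::Blob<Dtype>* target = learnable_params[learnable_param_ids[i]];
        if (param_owners[i] < 0) {
          CHECK_EQ(target, params[i].get());   // owner: stored directly
        } else {
          CHECK_NE(target, params[i].get());   // sharer: target is the owner's blob
        }
      }
    }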

