Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.

Check out our web image classification demo!

Why Caffe?

Expressive architecture encourages application and innovation. Models and optimization are defined by configuration without hard-coding. Switch between CPU and GPU by setting a single flag, so you can train on a GPU machine and then deploy to commodity clusters or mobile devices.
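For a concrete sense of what this looks like in practice, here is a minimal sketch using the Python interface (pycaffe); deploy.prototxt and weights.caffemodel are placeholder names for your own model definition and trained weights.

import caffe

# Pick the computation mode with a single call; the model definition,
# weights, and solver configuration stay exactly the same.
caffe.set_mode_gpu()    # or caffe.set_mode_cpu() on a machine without a GPU
caffe.set_device(0)     # GPU id; only meaningful in GPU mode

# The architecture lives in a prototxt configuration file rather than in code.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)
outputs = net.forward()  # dict mapping output blob names to numpy arrays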

Extensible code fosters active development. In its first year, Caffe was forked by over 1,000 developers and had many significant changes contributed back. Thanks to these contributors, the framework tracks the state of the art in both code and models.

Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60M images per day with a single NVIDIA K40 GPU*. That’s 1 ms/image for inference and 4 ms/image for learning, and more recent library versions and hardware are faster still. We believe that Caffe is among the fastest convnet implementations available.
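As a rough sketch of how you might check inference throughput on your own model with pycaffe (the caffe command-line tool also ships a built-in benchmark, caffe time), again using placeholder model files and assuming the input blob is named data:

import time
import caffe

caffe.set_mode_gpu()
caffe.set_device(0)

# Placeholder file names; substitute your own model definition and weights.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

n_iters = 100
start = time.time()
for _ in range(n_iters):
    net.forward()        # inference only (no backward pass)
elapsed = time.time() - start

# Assumes the network's input blob is called 'data'.
batch_size = net.blobs['data'].data.shape[0]
print('%.2f ms/image forward' % (1000.0 * elapsed / (n_iters * batch_size)))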

Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech, and multimedia. Join our community of brewers on the caffe-users group and GitHub.

* With the ILSVRC2012-winning SuperVision model and prefetching IO.

Documentation

Notebook Examples

Command Line Examples

Citing Caffe

Please cite Caffe in your publications if it helps your research:

@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}

If you do publish a paper where Caffe helped your research, we encourage you to cite the framework for tracking by Google Scholar.

Contacting Us

Join the caffe-users group to ask questions and discuss methods and models. This is where we talk about usage, installation, and applications.

Framework development discussions and thorough bug reports are collected on GitHub Issues.

Acknowledgements

The BAIR Caffe developers would like to thank NVIDIA for GPU donation, A9 and Amazon Web Services for a research grant in support of Caffe development and reproducible research in deep learning, and BAIR PI Trevor Darrell for guidance.

The BAIR members who have contributed to Caffe are (alphabetical by first name): Carl Doersch, Eric Tzeng, Evan Shelhamer, Jeff Donahue, Jon Long, Philipp Krähenbühl, Ronghang Hu, Ross Girshick, Sergey Karayev, Sergio Guadarrama, Takuya Narihira, and Yangqing Jia.

The open-source community plays an important and growing role in Caffe’s development. Check out the GitHub project pulse for recent activity and the contributors page for the full list.

We sincerely appreciate your interest and contributions! If you’d like to contribute, please read the developing & contributing guide.

Yangqing would like to give a personal thanks to the NVIDIA Academic program for providing GPUs, Oriol Vinyals for discussions along the journey, and BAIR PI Trevor Darrell for advice.