
Commit 942c609

yuanjua and stevhliu authored
Model card for mobilenet v1 and v2 (#37948)
* doc: #36979
* doc: update hfoptions
* add model checkpoints links
* add model checkpoints links
* update example output
* update style #36979
* add pipeline tags
* improve comments
* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* apply suggested changes
* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
1 parent 9a85105 commit 942c609


2 files changed: +127 -59 lines changed


docs/source/en/model_doc/mobilenet_v1.md

Lines changed: 64 additions & 26 deletions

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-EE4C2C?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# MobileNet V1

[MobileNet V1](https://huggingface.co/papers/1704.04861) is a family of efficient convolutional neural networks optimized for on-device or embedded vision tasks. It achieves this efficiency by using depthwise separable convolutions instead of standard convolutions. The architecture allows for easy trade-offs between latency and accuracy through two main hyperparameters: a width multiplier (alpha) and an image resolution multiplier.
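
A depthwise separable convolution factors a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution, which is what keeps the computation cheap. The sketch below only illustrates that building block in plain PyTorch; it is not the Transformers implementation, and the class name is made up.

```python
import torch
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative MobileNet-style block: depthwise conv followed by a pointwise (1x1) conv."""

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # depthwise: one 3x3 filter per input channel (groups=in_channels)
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size=3, stride=stride,
            padding=1, groups=in_channels, bias=False,
        )
        self.bn1 = nn.BatchNorm2d(in_channels)
        # pointwise: a 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU6()

    def forward(self, x):
        x = self.act(self.bn1(self.depthwise(x)))
        return self.act(self.bn2(self.pointwise(x)))

block = DepthwiseSeparableConv(32, 64, stride=2)
print(block(torch.randn(1, 32, 112, 112)).shape)  # torch.Size([1, 64, 56, 56])
```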

You can find all the original MobileNet checkpoints under the [Google](https://huggingface.co/google?search_models=mobilenet) organization. The model was contributed by [matthijs](https://huggingface.co/Matthijs), and the original code and weights are available [here](https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md).

> [!TIP]
> Click on the MobileNet V1 models in the right sidebar for more examples of how to apply MobileNet to different vision tasks.

The example below demonstrates how to classify an image with [`Pipeline`] or the [`AutoModel`] class; a short extension showing the top-5 predictions follows the examples.

<hfoptions id="usage">
<hfoption id="Pipeline">

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="google/mobilenet_v1_1.0_224",
    torch_dtype=torch.float16,
    device=0
)
pipeline(images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

</hfoption>
<hfoption id="AutoModel">

```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()

class_labels = model.config.id2label
predicted_class_label = class_labels[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```

</hfoption>
</hfoptions>
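
The class label in the example above comes from the single highest logit. The small extension below converts the logits to probabilities and prints the five most likely labels; it is an add-on sketch, not part of the original example, and only uses standard `torch` operations.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v1_1.0_224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# softmax turns logits into probabilities; topk ranks the most likely classes
probabilities = logits.softmax(dim=-1)[0]
top5 = probabilities.topk(5)
for score, class_id in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{model.config.id2label[class_id]}: {score:.3f}")
```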

<!-- Quantization - Not applicable -->
<!-- Attention Visualization - Not applicable for this model type -->

## Notes

- Checkpoint names follow the pattern `mobilenet_v1_{depth_multiplier}_{resolution}`, like `mobilenet_v1_1.0_224`, where `1.0` is the depth multiplier (also called "alpha" or the width multiplier) and `224` is the resolution of the input images the model was trained on.
- Although each checkpoint is trained on images of a specific size, the model works with images of different sizes (minimum 32x32). The [`MobileNetV1ImageProcessor`] handles the necessary preprocessing.
- MobileNet is pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset with 1000 classes. However, the model predicts 1001 classes: the 1000 ImageNet classes plus an extra "background" class (index 0).
- The original TensorFlow checkpoints determine the padding amount at inference time because it depends on the input image size. To use the native PyTorch padding behavior, set `tf_padding=False` in [`MobileNetV1Config`].

  ```python
  from transformers import MobileNetV1Config

  config = MobileNetV1Config.from_pretrained("google/mobilenet_v1_1.0_224", tf_padding=False)
  ```

- The Transformers implementation does not support the following features.
    - It uses global average pooling instead of the optional 7x7 average pooling with stride 2. For larger inputs, this gives a pooled output that is larger than a 1x1 pixel.
    - It is not possible to specify an `output_stride` (it is fixed at 32). For smaller output strides, the original implementation uses dilated convolutions to prevent the spatial resolution from being reduced further.
    - `output_hidden_states=True` returns *all* intermediate hidden states; there is no way to limit the output to specific layers (see the sketch after this list for selecting layers manually).
    - The quantized models from the original checkpoints are not included because they contain additional "FakeQuantization" operations to unquantize the weights.
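
The hidden states can be requested in a single forward pass and then indexed manually. The sketch below reuses the `google/mobilenet_v1_1.0_224` checkpoint from the examples above; indices 5, 11, 12, and 13 correspond to the pointwise layers that are commonly tapped for downstream use, so treat them as a starting point rather than a guarantee, and replace the placeholder image with a real one.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV1Model

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v1_1.0_224")
model = MobileNetV1Model.from_pretrained("google/mobilenet_v1_1.0_224")

image = Image.new("RGB", (224, 224))  # placeholder image, replace with a real one
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple with one feature map per intermediate layer
features = [outputs.hidden_states[i] for i in (5, 11, 12, 13)]
print([tuple(f.shape) for f in features])
```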

## MobileNetV1Config

docs/source/en/model_doc/mobilenet_v2.md

Lines changed: 63 additions & 33 deletions

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-EE4C2C?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# MobileNet V2

[MobileNet V2](https://huggingface.co/papers/1801.04381) improves performance on mobile devices with a more efficient architecture. It uses inverted residual blocks with linear bottlenecks: each block starts from a thin representation of the data, expands it for filtering, and projects it back down to keep the number of computations low. Non-linearities are removed from the narrow layers to maintain representational power. Like [MobileNet V1](./mobilenet_v1), it uses depthwise separable convolutions for efficiency.
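
The inverted residual block can be sketched in a few lines of plain PyTorch: a 1x1 expansion, a 3x3 depthwise convolution, and a linear 1x1 projection, with a residual connection when the stride is 1 and the channel counts match. This is an illustrative sketch only, not the Transformers implementation, and the class name is made up.

```python
import torch
from torch import nn

class InvertedResidual(nn.Module):
    """Illustrative MobileNetV2-style block: expand -> depthwise -> linear project."""

    def __init__(self, in_channels: int, out_channels: int, stride: int = 1, expand_ratio: int = 6):
        super().__init__()
        hidden = in_channels * expand_ratio
        self.use_residual = stride == 1 and in_channels == out_channels
        self.block = nn.Sequential(
            # 1x1 expansion to a wider representation
            nn.Conv2d(in_channels, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(),
            # depthwise 3x3 convolution filters each channel separately
            nn.Conv2d(hidden, hidden, kernel_size=3, stride=stride, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(),
            # linear 1x1 projection back to a thin bottleneck (no non-linearity)
            nn.Conv2d(hidden, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

block = InvertedResidual(32, 32, stride=1)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```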

You can find all the original MobileNet checkpoints under the [Google](https://huggingface.co/google?search_models=mobilenet) organization. The model was contributed by [matthijs](https://huggingface.co/Matthijs); the original code and weights are available [here for the main model](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet) and [here for DeepLabV3+](https://github.com/tensorflow/models/tree/master/research/deeplab).

> [!TIP]
> Click on the MobileNet V2 models in the right sidebar for more examples of how to apply MobileNet to different vision tasks.

The examples below demonstrate how to classify an image with [`Pipeline`] or the [`AutoModel`] class; a feature-extraction sketch follows them.

<hfoptions id="usage-img-class">
<hfoption id="Pipeline">

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-classification",
    model="google/mobilenet_v2_1.4_224",
    torch_dtype=torch.float16,
    device=0
)
pipeline(images="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```

</hfoption>
<hfoption id="AutoModel">

```python
import torch
import requests
from PIL import Image
from transformers import AutoModelForImageClassification, AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.4_224")
model = AutoModelForImageClassification.from_pretrained("google/mobilenet_v2_1.4_224")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_class_id = logits.argmax(dim=-1).item()

class_labels = model.config.id2label
predicted_class_label = class_labels[predicted_class_id]
print(f"The predicted class label is: {predicted_class_label}")
```

</hfoption>
</hfoptions>
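
MobileNet V2 is also commonly used as a lightweight backbone. Below is a minimal sketch of pulling intermediate feature maps with `output_hidden_states=True`, reusing the `google/mobilenet_v2_1.4_224` checkpoint from the examples above; indices 10 and 13 correspond to the expansion layers that are commonly tapped for downstream use, so treat them as a suggestion rather than a fixed API, and replace the placeholder image with a real one.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2Model

image_processor = AutoImageProcessor.from_pretrained("google/mobilenet_v2_1.4_224")
model = MobileNetV2Model.from_pretrained("google/mobilenet_v2_1.4_224")

image = Image.new("RGB", (224, 224))  # placeholder image, replace with a real one
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple with one feature map per intermediate layer
features = [outputs.hidden_states[i] for i in (10, 13)]
print([tuple(f.shape) for f in features])
```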

## Notes

- Classification checkpoint names follow the pattern `mobilenet_v2_{depth_multiplier}_{resolution}`, like `mobilenet_v2_1.4_224`, where `1.4` is the depth multiplier (also called "alpha" or the width multiplier) and `224` is the resolution of the input images the model was trained on. Segmentation checkpoint names follow the pattern `deeplabv3_mobilenet_v2_{depth_multiplier}_{resolution}`.
- Although each checkpoint is trained on images of a specific size, the model works with images of different sizes (minimum 32x32). The [`MobileNetV2ImageProcessor`] handles the necessary preprocessing.
- MobileNet is pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset with 1000 classes. However, the model predicts 1001 classes: the 1000 ImageNet classes plus an extra "background" class (index 0).
- The segmentation models use a [DeepLabV3+](https://huggingface.co/papers/1802.02611) head, which is often pretrained on datasets like [PASCAL VOC](https://huggingface.co/datasets/merve/pascal-voc). A usage sketch follows this list.
- The original TensorFlow checkpoints determine the padding amount at inference time because it depends on the input image size. To use the native PyTorch padding behavior, set `tf_padding=False` in [`MobileNetV2Config`].

  ```python
  from transformers import MobileNetV2Config

  config = MobileNetV2Config.from_pretrained("google/mobilenet_v2_1.4_224", tf_padding=False)
  ```

- The Transformers implementation does not support the following features.
    - It uses global average pooling instead of the optional 7x7 average pooling with stride 1. For larger inputs, this gives a pooled output that is larger than a 1x1 pixel.
    - `output_hidden_states=True` returns *all* intermediate hidden states; there is no way to limit the output to specific layers.
    - The quantized models from the original checkpoints are not included because they contain additional "FakeQuantization" operations to unquantize the weights.
    - For segmentation models, the final convolution layer of the backbone is computed even though the DeepLabV3+ head doesn't use it.
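
A minimal semantic segmentation sketch for the DeepLabV3+ checkpoints follows. The checkpoint name is derived from the segmentation naming pattern above and is an assumption; verify that `google/deeplabv3_mobilenet_v2_1.0_513` exists on the Hub before relying on it.

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, MobileNetV2ForSemanticSegmentation

# checkpoint name assumed from the naming pattern above; verify it on the Hub
checkpoint = "google/deeplabv3_mobilenet_v2_1.0_513"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = MobileNetV2ForSemanticSegmentation.from_pretrained(checkpoint)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height, width)

# per-pixel class prediction at the model's output resolution
segmentation = logits.argmax(dim=1)[0]
print(segmentation.shape, model.config.id2label[int(segmentation[0, 0])])
```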

## MobileNetV2Config
