Commit ebdfa98

shubham0204 authored and zucchini-nlp committed
Update model card for Depth Anything (huggingface#37065)
[docs] Update model card for Depth Anything
1 parent a9b636a commit ebdfa98

File tree

1 file changed: +48 -80 lines changed


docs/source/en/model_doc/depth_anything.md

Lines changed: 48 additions & 80 deletions
@@ -14,101 +14,69 @@ rendered properly in your Markdown viewer.
 
 -->
 
-# Depth Anything
-
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+<div class="flex flex-wrap space-x-1">
+<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+</div>
 </div>
 
-## Overview
-
-The Depth Anything model was proposed in [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Anything is based on the [DPT](dpt) architecture, trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
-
-<Tip>
-
-[Depth Anything V2](depth_anything_v2) was released in June 2024. It uses the same architecture as Depth Anything and therefore it is compatible with all code examples and existing workflows. However, it leverages synthetic data and a larger capacity teacher model to achieve much finer and robust depth predictions.
-
-</Tip>
-
-The abstract from the paper is the following:
-
-*This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet.*
-
-<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
-alt="drawing" width="600"/>
-
-<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
-
-This model was contributed by [nielsr](https://huggingface.co/nielsr).
-The original code can be found [here](https://github.com/LiheYoung/Depth-Anything).
-
-## Usage example
+# Depth Anything
 
-There are 2 main ways to use Depth Anything: either using the pipeline API, which abstracts away all the complexity for you, or by using the `DepthAnythingForDepthEstimation` class yourself.
+[Depth Anything](https://huggingface.co/papers/2401.10891) is designed to be a foundation model for monocular depth estimation (MDE). It is jointly trained on labeled and ~62M unlabeled images to enhance the dataset. It uses a pretrained [DINOv2](./dinov2) model as an image encoder to inherit its existing rich semantic priors, and [DPT](./dpt) as the decoder. A teacher model is trained on unlabeled images to create pseudo-labels. The student model is trained on a combination of the pseudo-labels and labeled images. To improve the student model's performance, strong perturbations are added to the unlabeled images to challenge the student model to learn more visual knowledge from the image.
 
-### Pipeline API
+You can find all the original Depth Anything checkpoints under the [Depth Anything](https://huggingface.co/collections/LiheYoung/depth-anything-release-65b317de04eec72abf6b55aa) collection.
 
-The pipeline allows to use the model in a few lines of code:
+> [!TIP]
+> Click on the Depth Anything models in the right sidebar for more examples of how to apply Depth Anything to different vision tasks.
 
-```python
->>> from transformers import pipeline
->>> from PIL import Image
->>> import requests
+The example below demonstrates how to obtain a depth map with [`Pipeline`] or the [`AutoModel`] class.
 
->>> # load pipe
->>> pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")
+<hfoptions id="usage">
+<hfoption id="Pipeline">
 
->>> # load image
->>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
->>> image = Image.open(requests.get(url, stream=True).raw)
+```py
+import torch
+from transformers import pipeline
 
->>> # inference
->>> depth = pipe(image)["depth"]
+pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf", torch_dtype=torch.bfloat16, device=0)
+pipe("http://images.cocodataset.org/val2017/000000039769.jpg")["depth"]
 ```
 
-### Using the model yourself
-
-If you want to do the pre- and postprocessing yourself, here's how to do that:
-
-```python
->>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
->>> import torch
->>> import numpy as np
->>> from PIL import Image
->>> import requests
-
->>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
->>> image = Image.open(requests.get(url, stream=True).raw)
-
->>> image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf")
->>> model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")
-
->>> # prepare image for the model
->>> inputs = image_processor(images=image, return_tensors="pt")
-
->>> with torch.no_grad():
-...     outputs = model(**inputs)
-
->>> # interpolate to original size and visualize the prediction
->>> post_processed_output = image_processor.post_process_depth_estimation(
-...     outputs,
-...     target_sizes=[(image.height, image.width)],
-... )
-
->>> predicted_depth = post_processed_output[0]["predicted_depth"]
->>> depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
->>> depth = depth.detach().cpu().numpy() * 255
->>> depth = Image.fromarray(depth.astype("uint8"))
+</hfoption>
+<hfoption id="AutoModel">
+
+```py
+import torch
+import requests
+import numpy as np
+from PIL import Image
+from transformers import AutoImageProcessor, AutoModelForDepthEstimation
+
+image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-base-hf")
+model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-base-hf", torch_dtype=torch.bfloat16)
+url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+image = Image.open(requests.get(url, stream=True).raw)
+inputs = image_processor(images=image, return_tensors="pt")
+
+with torch.no_grad():
+    outputs = model(**inputs)
+
+post_processed_output = image_processor.post_process_depth_estimation(
+    outputs,
+    target_sizes=[(image.height, image.width)],
+)
+predicted_depth = post_processed_output[0]["predicted_depth"]
+depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
+depth = depth.detach().cpu().numpy() * 255
+Image.fromarray(depth.astype("uint8"))
 ```
 
-## Resources
-
-A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.
+</hfoption>
+</hfoptions>
 
-- [Monocular depth estimation task guide](../tasks/monocular_depth_estimation)
-- A notebook showcasing inference with [`DepthAnythingForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb). 🌎
+## Notes
 
-If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
+- [DepthAnythingV2](./depth_anything_v2), released in June 2024, uses the same architecture as Depth Anything and is compatible with all code examples and existing workflows. It uses synthetic data and a larger capacity teacher model to achieve much finer and robust depth predictions.
 
 ## DepthAnythingConfig
 
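As a follow-up to the Pipeline example added by this commit, the sketch below shows one way to keep and inspect the pipeline output. It is a minimal sketch, not part of the card: it assumes the same `LiheYoung/depth-anything-base-hf` checkpoint, that a GPU is available for `device=0` (drop it to run on CPU), and the `depth.png` filename is illustrative.

```py
# Minimal sketch based on the card's Pipeline example: run depth estimation,
# then save the returned PIL depth image and inspect the raw depth tensor.
import torch
from transformers import pipeline

pipe = pipeline(
    task="depth-estimation",
    model="LiheYoung/depth-anything-base-hf",
    torch_dtype=torch.bfloat16,
    device=0,  # assumes a GPU; remove to run on CPU
)
result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")

result["depth"].save("depth.png")       # PIL image of the depth map (illustrative path)
print(result["predicted_depth"].shape)  # raw depth tensor returned by the pipeline
```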
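The new Notes section states that Depth Anything V2 shares the architecture and works with the same code. A minimal sketch of that swap, assuming a V2 checkpoint id such as `depth-anything/Depth-Anything-V2-Small-hf` (verify the exact repo name on the Hub):

```py
# Same pipeline call as in the card; only the checkpoint changes. The repo id
# below is an assumption -- check the Hub for the exact Depth Anything V2 checkpoint.
import torch
from transformers import pipeline

pipe = pipeline(
    task="depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
    torch_dtype=torch.bfloat16,
)
depth = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")["depth"]
depth.save("depth_v2.png")  # illustrative output path
```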