The model aims to address the performance limitations of existing depth estimation models on complex scenes and transparent or reflective objects. Compared with the previous generation, V2 achieves significantly higher prediction accuracy and efficiency by training on synthetic images, scaling up the teacher model's capacity, and teaching the student model with large-scale pseudo-labeled real data.


AgentOpen/DepthAnythingV2-safetensors


Depth Anything V2 is trained from 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:

- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance with our pre-trained models

We also release six metric depth models of three scales for indoor and outdoor scenes, respectively.
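As the repository name indicates, these weights are repackaged in the safetensors format, whose layout is simple enough to inspect by hand: an 8-byte little-endian length, a JSON header mapping tensor names to dtype, shape, and byte offsets, then the raw tensor buffer. The sketch below illustrates that layout with stdlib-only helpers; the function names and the example tensor name are ours for illustration, and real code should use the `safetensors` library instead.

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file.

    Per the safetensors spec: an 8-byte little-endian unsigned integer
    gives the header length, followed by that many bytes of JSON mapping
    tensor names to {"dtype", "shape", "data_offsets"}.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

def write_minimal_safetensors(path, name, dtype, shape, raw_bytes):
    """Write a single-tensor .safetensors file (illustration only)."""
    header = {name: {"dtype": dtype, "shape": shape,
                     "data_offsets": [0, len(raw_bytes)]}}
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))  # header length prefix
        f.write(header_bytes)                          # JSON header
        f.write(raw_bytes)                             # raw tensor buffer
```

Because the header is plain JSON at a fixed offset, a checkpoint can be audited (names, shapes, total size) without deserializing any tensor data, which is one reason safetensors is preferred over pickle-based checkpoints.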

Quick Preview

Reminder: To ensure the best viewing experience, please choose the 1440p video quality in the player settings.

Comparison with Depth Anything V1 on Fine-grained Details
