deepstream-segmentation
################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

Prerequisites:
- DeepStreamSDK 8.0
- Python 3.12
- Gst-python
- NumPy package >=1.22, <2.0 (2.0 and above not supported)
- OpenCV package

To install the required packages:
  $ source /path/to/pyds/bin/activate  # Activate the environment
  $ pip3 install --force-reinstall numpy==1.26.0
  $ pip3 install opencv-python

If on Jetson, libgomp.so.1 must be added to LD_PRELOAD:
  $ export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1

Download the CitySemSegFormer model:
  $ mkdir -p /opt/nvidia/deepstream/deepstream/samples/models/citysemsegformer
  $ cd /opt/nvidia/deepstream/deepstream/samples/models/citysemsegformer
  $ wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/citysemsegformer/deployable_onnx_v1.0/files?redirect=true&path=citysemsegformer.onnx' -O citysemsegformer.onnx
  $ wget --content-disposition 'https://api.ngc.nvidia.com/v2/models/org/nvidia/team/tao/citysemsegformer/deployable_onnx_v1.0/files?redirect=true&path=labels.txt' -O labels.txt

Additionally, compile the libnvds_infercustomparser_tao.so library:
  $ git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
  $ cd /path/to/deepstream_tao_apps/post_processor

  For x86:
  $ export CUDA_VER=12.8

  For aarch64:
  $ export CUDA_VER=13.0

  For both platforms:
  $ make
  $ cp /path/to/deepstream_tao_apps/post_processor/libnvds_infercustomparser_tao.so /opt/nvidia/deepstream/deepstream/lib/

To run:
  $ python3 deepstream_segmentation.py <config_file> <jpeg/mjpeg stream> <folder name to save frames>

This document describes the sample deepstream-segmentation application. It is meant as a simple demonstration of how to use the various DeepStream SDK elements in a pipeline and extract meaningful insights from a video stream, such as segmentation masks and their respective color mapping for segmentation visualization.

This sample creates an instance of the "nvinfer" element, which uses the TensorRT API to execute inferencing on a model. Using a correct configuration for the nvinfer instance is therefore very important, as much of the instance's behavior is parameterized through this config. For reference, here is the config file used for this sample:
1. The 19-class segmentation model, configured through dstest_segmentation_citysemsegformer_config.txt

In this sample, we first create one instance of "nvinfer", referred to as the pgie. The nvinfer element then attaches some metadata to the buffer. By attaching a probe function at the end of the pipeline, one can extract meaningful information from this inference. Please refer to the "tiler_src_pad_buffer_probe" function in the sample code. For details on the metadata format, refer to the file "gstnvdsmeta.h". In this probe we demonstrate extracting the masks and the color mapping for segmentation visualization using OpenCV and NumPy.
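To illustrate the kind of settings such a config parameterizes, below is a minimal sketch of an nvinfer segmentation config. This is NOT the shipped dstest_segmentation_citysemsegformer_config.txt: the net-scale-factor value is a placeholder and the paths assume the download/compile steps above; consult the actual config file distributed with the sample for the correct values.

```
[property]
gpu-id=0
# Preprocessing scale factor; placeholder value, must match the model's training
net-scale-factor=0.007843
onnx-file=/opt/nvidia/deepstream/deepstream/samples/models/citysemsegformer/citysemsegformer.onnx
labelfile-path=/opt/nvidia/deepstream/deepstream/samples/models/citysemsegformer/labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 2=segmentation network
network-type=2
num-detected-classes=19
gie-unique-id=1
# Custom TAO post-processor library compiled in the steps above
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser_tao.so
```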
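To make the mask-extraction step concrete, here is a minimal NumPy sketch (not the shipped sample code) of the colorization the probe performs: the segmentation output arrives as a flat per-pixel class-ID array, which is reshaped to (height, width) and mapped through a color palette. The function name, palette, and class IDs below are hypothetical; the real sample uses the 19 Cityscapes classes.

```python
import numpy as np

def colorize_class_map(flat_mask, height, width, palette):
    """Map per-pixel class IDs to RGB colors for visualization."""
    class_map = np.asarray(flat_mask, dtype=np.int32).reshape(height, width)
    # Fancy-index the palette with the class map: result is (H, W, 3) uint8,
    # ready to be saved with OpenCV (after an RGB->BGR swap if needed).
    return palette[class_map]

# Hypothetical 4-class palette for illustration (real sample: 19 classes).
palette = np.array([[0, 0, 0],        # background
                    [128, 64, 128],   # road
                    [70, 70, 70],     # building
                    [107, 142, 35]],  # vegetation
                   dtype=np.uint8)

flat = [0, 1, 1, 2, 3, 0]             # pretend 2x3 class map from the model
rgb = colorize_class_map(flat, 2, 3, palette)
print(rgb.shape)  # (2, 3, 3)
```

The actual probe obtains the flat mask from the segmentation metadata attached by nvinfer, then applies this kind of palette lookup before writing frames to the output folder.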