0 Background


Test environment: Ubuntu 18.04, CUDA 10.2, NVIDIA T4


1 Installation

1.1 Create a conda environment

conda create -n yolov5 python=3.6

1.2 Download the source code

mkdir deepstream_yolov5
cd deepstream_yolov5
git clone   # the yolov5 repository
git clone   # the tensorrtx repository
git clone   # the Yolov5-in-Deepstream-5.0 repository

1.3 Install dependencies

cd yolov5
# git checkout v3.0  # switch to the v3.0 tag
pip install scikit-build
pip install -r requirements.txt
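As a quick sanity check after installation, a few lines of Python can confirm that the main dependencies import cleanly. This is an illustrative helper, not part of the yolov5 repo, and the package list is an assumption to extend to match your requirements file:

```python
# Quick import check for the main dependencies pulled in by requirements.txt.
# The package list below is an assumption; adjust it to your environment.
import importlib

def check_imports(packages):
    """Return a dict mapping package name -> True if it imports cleanly."""
    results = {}
    for pkg in packages:
        try:
            importlib.import_module(pkg)
            results[pkg] = True
        except ImportError:
            results[pkg] = False
    return results

status = check_imports(["torch", "numpy", "cv2", "yaml"])
for pkg, ok in status.items():
    print(pkg, "ok" if ok else "MISSING")
```

Run this inside the activated conda environment; any `MISSING` entry means the corresponding package failed to install.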

2 Model Preparation

2.1 Download a pretrained model

Download a pretrained model from the yolov5 release page; here we use YOLOv5x as an example.

After downloading, place the model in the yolov5/weights directory.

2.2 Generate the engine

cd yolov5
cp ../tensorrtx/yolov5/ ./

Edit the model name in the copied script to match the model you are using; the changes are mainly on lines 8 and 11:

import torch
import struct
from utils.torch_utils import select_device

# Initialize
device = select_device('cpu')
# Load model (point this at your .pt file under weights/)
model = torch.load('weights/', map_location=device)['model'].float()  # load to FP32

f = open('yolov5x.wts', 'w')
f.write('{}\n'.format(len(model.state_dict().keys())))  # header: tensor count
for k, v in model.state_dict().items():
    vr = v.reshape(-1).cpu().numpy()
    f.write('{} {} '.format(k, len(vr)))
    for vv in vr:
        f.write(' ')
        f.write(struct.pack('>f', float(vv)).hex())  # big-endian float32 as hex
    f.write('\n')
f.close()

Running the script produces a yolov5x.wts file in the current directory; copy it into the Yolov5-in-Deepstream-5.0 folder:
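Before copying, the .wts file can be sanity-checked. The helper below is a sketch that assumes the text format written above: a first line holding the tensor count, then one line per tensor of the form `<name> <length> <hex> ... <hex>`:

```python
# Sanity-check a .wts file (assumed format: first line = tensor count,
# then "<name> <len> <hex> ... <hex>" with exactly <len> hex values per line).
def check_wts(path):
    with open(path) as f:
        expected = int(f.readline())
        count = 0
        for line in f:
            parts = line.split()
            if not parts:
                continue
            name, length = parts[0], int(parts[1])
            if len(parts) - 2 != length:
                raise ValueError('tensor %s: expected %d values, got %d'
                                 % (name, length, len(parts) - 2))
            count += 1
    if count != expected:
        raise ValueError('header says %d tensors, found %d' % (expected, count))
    return count

# usage: check_wts('yolov5x.wts')
```

If the function raises, the export was likely interrupted or the model path on line 8 was wrong.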

cp yolov5x.wts ../Yolov5-in-Deepstream-5.0
cd ../Yolov5-in-Deepstream-5.0

Edit the yolov5.cpp file and change the NET macro to match your model:

#define NET x  // s m l x


mkdir build
cd build
cmake ..
make
sudo ./yolov5 -s

After it runs, yolov5x.engine and an accompanying file are generated; first check that the engine produces correct results:


mkdir ../samples
# download two COCO dataset images from the web
sudo ./yolov5 -d  ../samples


[11/02/2020-15:08:27] [W] [TRT] Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles



cp yolov5*.engine ../Deepstream\ 5.0/
cp ../Deepstream\ 5.0/

3 DeepStream Deployment

Run make in the Deepstream 5.0/nvdsinfer_custom_impl_Yolo directory to build the libnvdsinfer_custom_impl_Yolo.so file.


Then update the following entries in the inference config file:

  • model-engine-file=yolov5s.engine --> model-engine-file=yolov5x.engine
  • custom-lib-path=objectDetector_Yolo_V5/nvdsinfer_custom_impl_Yolo/ --> custom-lib-path=nvdsinfer_custom_impl_Yolo/
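For reference, the relevant part of the inference config might look like this after the edits (a sketch: the key names follow the standard Gst-nvinfer config format, but the exact layout of the file in your checkout may differ):

```ini
[property]
# Engine built inside the Yolov5-in-Deepstream-5.0 project (see section 2.2)
model-engine-file=yolov5x.engine
labelfile-path=labels.txt
# Custom bounding-box parser built with make above
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# COCO-pretrained YOLOv5 detects 80 classes
num-detected-classes=80
```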

The original project is missing a labels.txt file; copy over the 80 class labels of the COCO dataset.
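The labels.txt can be produced with a few lines of Python: one class name per line, in COCO class order. Only the first ten names are shown below; in practice the list must contain all 80 COCO names:

```python
# Write labels.txt, one class name per line, in COCO class order.
# NOTE: truncated for brevity -- the real file needs all 80 COCO names.
coco_names = [
    "person", "bicycle", "car", "motorcycle", "airplane",
    "bus", "train", "truck", "boat", "traffic light",
    # ... remaining 70 COCO class names ...
]

def write_labels(path, names):
    with open(path, "w") as f:
        f.write("\n".join(names) + "\n")

write_labels("labels.txt", coco_names)
```

The order matters: DeepStream maps class index N from the engine output to line N of this file.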

Then adjust deepstream_app_config_yoloV5.txt to suit your needs.


LD_PRELOAD=./ deepstream-app -c deepstream_app_config_yoloV5.txt


Note: the engine file must be generated inside the Yolov5-in-Deepstream-5.0 project. An engine built in the tensorrtx project and copied over will produce a flood of spurious bounding boxes at runtime.

