I confirmed that mmapi has been updated to the 32.5 release

Hello,


Checking the MMAPI documentation, I see it has been updated for the 32.5 release.
Looking at the 04_video_dec_trt example, a command that takes an ONNX model as an argument is now provided.
Are you going to provide a sample using ONNX?

If so, when will the updated Jetson MMAPI be available?
Thank you.

Hi,
It looks like we don't publish the model file. We will check with the team and update.

Hi,
Please run the sample with the attachment.

resnet10.zip (5.3 MB)

Hello,

Can you tell me how to get the source code that changed with the 32.5 release?

Thank you.

Hi,
Please install the package by executing:

$ sudo apt install nvidia-l4t-jetson-multimedia-api

And you should see the samples:

/usr/src/jetson_multimedia_api

Hello,

I have encountered an error.

/usr/src/jetson_multimedia_api/samples/04_video_dec_trt$ ./video_dec_trt 2 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --trt-onnxmodel ../../data/Model/resnet10/resnet10_dynamic_batch.onnx
Error: Unknown option --trt-onnxmodel

video_dec_trt [Channel-num] ... [options]

Channel-num:
        1-32, Number of file arguments should exactly match the number of channels specified

Supported formats:
        H264
        H265

OPTIONS:
        -h,--help            Prints this text
        --dbg-level <level>  Sets the debug level [Values 0-3]

        --trt-deployfile     set deploy file name
        --trt-modelfile      set model file name
        --trt-mode           0 fp16 (if supported), 1 fp32, 2 int8
        --trt-enable-perf    1[default] to enable perf measurement, 0 otherwise

Error parsing commandline arguments.

Thank you.

Hello,

I have installed the jetson mmapi (32.5 release) using the suggested command.

But I have not noticed the change adding ONNX support.
Please check this.

Thank you.

Hi,
We are able to install the correct version. Attaching the log for your reference:

nvidia@tegra-ubuntu:~$ sudo apt install nvidia-l4t-jetson-multimedia-api
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  nvidia-l4t-jetson-multimedia-api
0 upgraded, 1 newly installed, 0 to remove and 93 not upgraded.
Need to get 69.5 MB of archives.
After this operation, 89.4 MB of additional disk space will be used.
Get:1 https://repo.download.nvidia.com/jetson/t186 r32.5/main arm64 nvidia-l4t-jetson-multimedia-api arm64 32.5.0-20210115151051 [69.5 MB]
Fetched 69.5 MB in 6s (11.3 MB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package nvidia-l4t-jetson-multimedia-api.
(Reading database ... 178316 files and directories currently installed.)
Preparing to unpack .../nvidia-l4t-jetson-multimedia-api_32.5.0-20210115151051_arm64.deb ...
Unpacking nvidia-l4t-jetson-multimedia-api (32.5.0-20210115151051) ...
Setting up nvidia-l4t-jetson-multimedia-api (32.5.0-20210115151051) ...
nvidia@tegra-ubuntu:~$ cd /usr/src/jetson_multimedia_api/samples/04_video_dec_trt/
nvidia@tegra-ubuntu:/usr/src/jetson_multimedia_api/samples/04_video_dec_trt$ sudo make
Compiling: video_dec_trt_csvparser.cpp
Compiling: video_dec_trt_main.cpp
make[1]: Entering directory '/usr/src/jetson_multimedia_api/samples/common/classes'
Compiling: NvElementProfiler.cpp
Compiling: NvElement.cpp
Compiling: NvApplicationProfiler.cpp
Compiling: NvVideoDecoder.cpp
Compiling: NvDrmRenderer.cpp
Compiling: NvJpegEncoder.cpp
Compiling: NvVideoConverter.cpp
Compiling: NvBuffer.cpp
Compiling: NvLogging.cpp
Compiling: NvEglRenderer.cpp
Compiling: NvUtils.cpp
Compiling: NvJpegDecoder.cpp
Compiling: NvVideoEncoder.cpp
Compiling: NvV4l2ElementPlane.cpp
Compiling: NvV4l2Element.cpp
make[1]: Leaving directory '/usr/src/jetson_multimedia_api/samples/common/classes'
make[1]: Entering directory '/usr/src/jetson_multimedia_api/samples/common/algorithm/cuda'
Compiling: NvAnalysis.cu
Compiling: NvCudaProc.cpp
make[1]: Leaving directory '/usr/src/jetson_multimedia_api/samples/common/algorithm/cuda'
make[1]: Entering directory '/usr/src/jetson_multimedia_api/samples/common/algorithm/trt'
Compiling: trt_inference.cpp
make[1]: Leaving directory '/usr/src/jetson_multimedia_api/samples/common/algorithm/trt'
Linking: video_dec_trt
nvidia@tegra-ubuntu:/usr/src/jetson_multimedia_api/samples/04_video_dec_trt$ ./video_dec_trt

video_dec_trt [Channel-num] <in-file1> <in-file2> ... <in-format> [options]

Channel-num:
        1-32, Number of file arguments should exactly match the number of channels specified

Supported formats:
        H264
        H265

OPTIONS:
        -h,--help            Prints this text
        --dbg-level <level>  Sets the debug level [Values 0-3]

         Caffe model:
        --trt-deployfile     set caffe deploy file name
        --trt-modelfile      set caffe model file name
         ONNX model:
        --trt-onnxmodel      set onnx model file name, only support dynamic batch(N=-1) onnx model
        --trt-mode           0 fp16 (if supported), 1 fp32, 2 int8
        --trt-enable-perf    1[default] to enable perf measurement, 0 otherwise

Hello,

sudo apt install nvidia-l4t-jetson-multimedia-api
[sudo] password for realwave:
Reading package lists... Done
Building dependency tree
Reading state information... Done
nvidia-l4t-jetson-multimedia-api is already the newest version (32.4.4-20201027211332).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Can you tell me when the updated jetson mmapi will be available?

Thank you.

Hi,
Did you upgrade the system to r32.5? Please check the release version ($ head -1 /etc/nv_tegra_release).

Or run sudo apt update before sudo apt install.
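
For reference, the release check suggested above can be scripted. A minimal sketch, assuming the usual nv_tegra_release header format; the function name and the sample line in the comments are mine, not part of L4T:

```python
# Hedged sketch: parse the first line of /etc/nv_tegra_release to get the
# L4T release. The header format shown is an assumption based on typical
# r32.x boards, not taken from this thread; adjust if your line differs.
import re

def l4t_release(first_line):
    """Return (major, revision) strings from an nv_tegra_release header."""
    m = re.search(r"R(\d+)\s*\(release\),\s*REVISION:\s*([\d.]+)", first_line)
    if not m:
        raise ValueError("unrecognized nv_tegra_release line: " + first_line)
    return m.group(1), m.group(2)

# A board on r32.5 would report something like (sample line hypothetical):
#   l4t_release("# R32 (release), REVISION: 5.0, GCID: 12345678, "
#               "BOARD: t186ref, EABI: aarch64, DATE: ...")
#   -> ("32", "5.0")
```

If this returns a revision below 5.0 while apt still installs 32.4.4, the apt sources are still pointing at the r32.4 repository.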

Hello,

When I run apt update, I get the following output.
Do I need to edit sources.list?

jetson7@jetson7-desktop:/usr/src$ sudo apt update
Get:1 file:/var/cuda-repo-10-2-local-10.2.89 InRelease
Ign:1 file:/var/cuda-repo-10-2-local-10.2.89 InRelease
Get:2 file:/var/visionworks-repo InRelease
Ign:2 file:/var/visionworks-repo InRelease
Get:3 file:/var/visionworks-sfm-repo InRelease
Ign:3 file:/var/visionworks-sfm-repo InRelease
Get:4 file:/var/visionworks-tracking-repo InRelease
Ign:4 file:/var/visionworks-tracking-repo InRelease
Get:5 file:/var/cuda-repo-10-2-local-10.2.89 Release [574 B]
Get:6 file:/var/visionworks-repo Release [2,001 B]
Get:5 file:/var/cuda-repo-10-2-local-10.2.89 Release [574 B]
Get:7 file:/var/visionworks-sfm-repo Release [2,005 B]
Get:6 file:/var/visionworks-repo Release [2,001 B]
Get:8 file:/var/visionworks-tracking-repo Release [2,010 B]
Get:7 file:/var/visionworks-sfm-repo Release [2,005 B]
Get:8 file:/var/visionworks-tracking-repo Release [2,010 B]
Hit:12 Index of /node_14.x/ bionic InRelease
Hit:14 https://repo.download.nvidia.com/jetson/common r32.4 InRelease
Hit:15 https://repo.download.nvidia.com/jetson/t210 r32.4 InRelease
Hit:16 Index of /ubuntu-ports bionic InRelease
Hit:17 Index of /ubuntu-ports bionic-updates InRelease
Hit:18 Index of /ubuntu-ports bionic-backports InRelease
Hit:19 Index of /ubuntu-ports bionic-security InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
263 packages can be upgraded. Run 'apt list --upgradable' to see them.

Thank you.

Hi,
Please upgrade to r32.5 by following the steps:
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/updating_jetson_and_host.html#wwpID0E06B0HA

Hello,

If I don't upgrade to r32.5 but want to get the r32.5 release of jetson-multimedia-api, can I just modify the apt sources?

Thank you.

Hello @DaneLLL,

With the resnet10.onnx you provided yesterday, I succeeded in running example 04.

When I run my custom-trained model as the model file, the following error appears.
Can you look at the error and let me know what needs to be fixed?
It looks like the network input is different, but I'm not sure.
Once I know, I need to fill in trt_inference.h for the custom model.

Can you give me some advice?

/usr/src/jetson_multimedia_api_20210127/samples/04_video_dec_trt$ ./video_dec_trt 2 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 ../../data/Video/sample_outdoor_car_1080p_10fps.h264 H264 --trt-onnxmodel ../../data/Model/resnet10/best.onnx --trt-mode 0
set onnx modefile: ../../data/Model/resnet10/best.onnx
mode has been set to 0(using fp16)

Input filename: ../../data/Model/resnet10/best.onnx
ONNX IR version: 0.0.6
Opset version: 12
Producer name: pytorch
Producer version: 1.7
Domain:
Model version: 0
Doc string:

onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
images: kMIN dimensions in profile 0 are [2,3,640,640] but input has static dimensions [1,3,640,640].
Network validation failed.
video_dec_trt: trt_inference.cpp:600: void TRT_Context::onnxToTRTModel(const string&): Assertion `engine' failed.

Thank you.

Hi,
We would suggest upgrading the system to r32.5 to run the r32.5 MMAPI.

Hi,

images: kMIN dimensions in profile 0 are [2,3,640,640] but input has static dimensions [1,3,640,640].

This error indicates that the input dimensions are not aligned.
It looks like your model only supports batch size 1, but the app tries to use batch size 2.

Could you first check whether anything is missing in the batch-size setting?

Thanks.

Hello,

Whenever I use a custom onnx file, do I need to change the contents of the CalibrationTable_ONNX file?

Thank you.

It seems to be the number of inputs + the number of outputs of the model.

Let's track the ONNX details in the topic "Question about modifying 04_video_dec_trt example to use custom .onnx".
Thanks.

Hello @DaneLLL @AastaLLL,

Can you tell me the GitHub address of the project where the model file was trained, and the code used to convert it to ONNX?

I used YOLO, but it is difficult to follow because it differs from the sample.
I am asking for the above information so that I can proceed with a model that has the same structure as the sample.

Thank you.