DeepStream raw output from RTSP feeds has no speed or direction

T4, DeepStream 5.0.1, deepstream-test5-app - newbie question:

We're detecting and counting cars via RTSP feeds from several cameras. I noticed that in the raw output the JSON always has 0 for speed and direction. Could someone point me in the right direction, or tell me where to look, to see how to configure this so it outputs the speed and direction of the detected vehicles? The raw output is below. I did notice that the "event" block has values of "moving" or "entry"; I'm not sure what "entry" means. Is there documentation I can reference that lists all the values that are possible here?

deepstream-test5-app -p 0 -m 1 -t -c configs/deepstream_config_demo.ini

{
  "messageid": "5afe27d9-429b-4404-938d-5a69b2779ffb",
  "mdsversion": "1.0",
  "@timestamp": "2021-04-14T18:14:26.249Z",
  "place": {
    "id": "0",
    "name": "Raleigh/TRYON_RD./WILMINGTON_ST.",
    "type": "intersection/road",
    "location": {
      "lat": -78.651541510000001,
      "lon": 35.731103939999997,
      "alt": 100.0
    },
    "aisle": {
      "id": "C_127_158",
      "name": "Lane 1",
      "level": "P1",
      "coordinate": {
        "x": 1.0,
        "y": 2.0,
        "z": 3.0
      }
    }
  },
  "sensor": {
    "type": "Camera",
    "description": "3028",
    "location": {
      "lat": -78.651541510000001,
      "lon": 35.731103939999997,
      "alt": 100.0
    },
    "coordinate": {
      "x": 5.2000000000000002,
      "y": 10.1,
      "z": 11.199999999999999
    }
  },
  "analyticsModule": {
    "id": "XYZ",
    "description": "Vehicle Detection and License Plate Recognition 0",
    "source": "OpenALR",
    "version": "1.0"
  },
  "object": {
    "id": "156250",
    "speed": 0.0,
    "direction": 0.0,
    "orientation": 0.0,
    "vehicle": {
      "type": null,
      "make": null,
      "model": null,
      "color": null,
      "licenseState": null,
      "license": null,
      "confidence": 0.0
    },
    "bbox": {
      "topleftx": 998,
      "toplefty": 508,
      "bottomrightx": 1280,
      "bottomrighty": 716
    },
    "location": {
      "lat": 0.0,
      "lon": 0.0,
      "alt": 0.0
    },
    "coordinate": {
      "x": 0.0,
      "y": 0.0,
      "z": 0.0
    }
  },
  "event": {
    "id": "5122db00-a92b-45c9-b42a-572c0dcf1bbb",
    "type": "moving"
  },
  "videoPath": ""
}

The code that generates "speed" and "direction" is in the nvmsgconv plugin. The source can be found in /opt/nvidia/deepstream/deepstream/sources/libs/nvmsgconv; the generate_object_object() function in nvmsgconv.cpp shows that the "speed" and "direction" fields are always written as 0.

DeepStream SDK is just an SDK. The nvmsgconv code is only a sample of how to generate JSON-format messages, so only some fields, such as "color", "make", …, are actually filled in, while other fields are defined in the schema but not implemented.

The code is open source; you can modify it to implement the functionality you need.

Thanks for clarifying! Much appreciated.