DS3.0 deepstream_test4_app.cpp

Thanks for the help! I double-checked the existence of that file and the path in PROTOCOL_ADAPTOR_LIB; both are correct. Here are the relevant macros from the C file and the output of the command:

nvidia@Nvidia:~$ ll /usr/lib/aarch64-linux-gnu/tegra/libnvds_kafka_proto.so 
-rwxr-xr-x 1 root root 35224 Jul 10 15:55 /usr/lib/aarch64-linux-gnu/tegra/libnvds_kafka_proto.so*
#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/timeb.h>
#include <unistd.h>
#include "gstnvdsmeta.h"
#include "nvdsmeta_schema.h"

#define MAX_DISPLAY_LEN 64
#define MAX_TIME_STAMP_LEN 32

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

//#define PROTOCOL_ADAPTOR_LIB "../../../libs/kafka_protocol_adaptor/libnvds_kafka_proto.so"
#define PROTOCOL_ADAPTOR_LIB "/usr/lib/aarch64-linux-gnu/tegra/libnvds_kafka_proto.so"
#define CONNECTION_STRING "<analytics server IP address>;9092;test123"
#define CONFIG_FILE_PATH "./config.txt"

where “test123” is the Kafka topic the analytics server consumes.

Have you tried implementing this with a Tesla analytics server and a Jetson perception board? I am using JetPack 4.2.1 with DeepStream 4.0 on the Jetson.

Would it be possible for you to try the same setup and let me know which software versions are able to run test4, please? I am thinking it could also be a version compatibility problem. Thanks again for your support!

Yes: analytics server on an x86 desktop, perception on a Jetson board, using JetPack 4.2.1. That should be the same setup as yours, and it works on my side.

Would it be possible to set up a call to debug this problem?

Hmm, you can find the Kafka library:
nvidia@Nvidia:~$ ll /usr/lib/aarch64-linux-gnu/tegra/libnvds_kafka_proto.so
-rwxr-xr-x 1 root root 35224 Jul 10 15:55 /usr/lib/aarch64-linux-gnu/tegra/libnvds_kafka_proto.so*
so I have no idea why it cannot be opened.
Can you follow the README provided with the test4 sample app step by step and try again?

Yeah, I feel the same. I have tried it multiple times and even tried reinstalling Ubuntu.

You are using the DeepStream 4.0 EA release, right? How about this: run uname -a and paste the output here.
If you are using the latest JetPack 4.2.1 GA release, then the DS 4.0 EA release package is likely not to work.

Yes, I am using DS 4.0 EA with JetPack 4.2.1. I was instructed that JetPack 4.2.1 was the version to run DS 4.0… so which JetPack version should I use?

Can you paste the “uname -a” output here?

Linux Nvidia 4.9.140-tegra #1 SMP PREEMPT Thu May 16 09:40:33 PDT 2019 aarch64 aarch64 aarch64 GNU/Linux

Is it the same setup as yours? Should I try JetPack 4.2 instead?

Yeah, same setup; you do not need to try another version.

Thank you! For some reason, running Zookeeper and Kafka from the analytics server setup with Docker Compose is necessary: I used my own Kafka and Zookeeper installation before, and that was the problem. Now I have another question: how can I see the messages sent from sample test4 on the analytics server? I ran kafka-console-consumer.sh in the Kafka Docker container, but I cannot see any received messages there.

You need to open the Kibana dashboard at the analytics server's <address>:5601. Follow step 6 of deepstream_360_d_smart_parking_application/analytics_server_docker at master · NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application · GitHub to create an index pattern, leave test4 running, and then you can see the messages sent from test4.

In the smart parking application, the messages are formed based on nv-schema.json. However, the messages sent from test4 are a different set (person/vehicle), so if you look at Kibana, all of the messages are about vehicles. I tried modifying nv-schema.json to include person and bicycle properties, but that does not fix anything. Is there any way to read the messages directly from the Kafka container?

I also noticed that test4 only sends the first object of every 30th frame, but even after I removed that restriction, nothing changed.

test4 by default sends a message for the first object of every 30th frame, so the first object in the frame will most likely be a vehicle; you can use a different stream with no or fewer vehicles if possible.
The parking application and test4 both use the same schema to form messages, but how the metadata is generated differs. In the parking app, the spot and aisle components provide the metadata and the schema is generated based on that; in the test4 app, the app itself generates the metadata to show the use case.

Thank you for clarifying this! I did find the schema for the test4 app: internally it is the NvDsEventMsgMeta struct. I found that all of the variables in the struct were set, but they are not all sent as part of the message (at least in Kibana, some variables are not shown). So, which part of the code in test4.c determines what information is included?

Hi,
please check the generate_event_msg_meta() function in deepstream_test4_app.c.

Yes, I noticed this function, but some variables set there are not shown in Kibana. For example, the variable “objClassId” indicates whether the object is a vehicle or a person; it is set in test4.c but does not show up as a field in Kibana. Is this something I can configure on the analytics server?