DeepStream and outputs (GPIO/.mp3)


I'm trying to figure this out but am having trouble finding information.

Suppose I use TLT to train a model on a custom dataset to detect, e.g., blue buses. I then deploy the model on DeepStream and it recognizes blue buses when it sees them.

How do I now, in the simplest way, use this information on a Jetson Nano to, for example, play an .mp3 file saying "blue bus", or turn on a blue LED via GPIO?

Thank you,

Hi @Subframe ,
Please refer to GitHub - NVIDIA-AI-IOT/deepstream_tlt_apps: Sample apps to demonstrate how to deploy models trained with TLT on DeepStream
These are demos showing how to deploy TLT models.


Hi @mchi,

While building TensorRT OSS I encountered this problem

/usr/local/bin/cmake .. -DGPU_ARCHS="53 62 72" -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out

bash: /usr/local/bin/cmake: No such file or directory

Upgrading CMake worked without problems.
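For reference, the "No such file or directory" error above simply means no cmake binary existed at the hardcoded /usr/local/bin path. A minimal sketch of resolving whichever cmake is actually on PATH before invoking the build (the fallback path is an illustrative assumption, not from the original post):

```shell
# Resolve cmake from PATH instead of hardcoding /usr/local/bin/cmake;
# the /usr/bin/cmake fallback is an assumption for illustration.
CMAKE_BIN="$(command -v cmake || echo /usr/bin/cmake)"
if [ -x "$CMAKE_BIN" ]; then
    echo "cmake found at: $CMAKE_BIN"
else
    echo "no usable cmake - install or upgrade it first"
fi
```

If the resolved version is still too old for TensorRT OSS, installing a newer CMake (as done above) and re-running the build is the fix.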


So, all the issues got fixed, right?


For now I use:

if obj_meta.class_id == 0:
    # trigger audio/GPIO here

As suggested by Dusty.
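The snippet above can be fleshed out into a small handler called from a DeepStream pad-probe callback. This is only a sketch: the class id, LED pin, and mp3 path are illustrative assumptions, and the GPIO/audio side effects are passed in as callables because Jetson.GPIO and an mp3 player (e.g. mpg123 via subprocess) are only available on the Nano itself.

```python
BLUE_BUS_CLASS_ID = 0        # assumed: "blue bus" is class 0 in the custom model
LED_PIN = 12                 # assumed BOARD-mode pin wired to the blue LED
MP3_PATH = "blue_bus.mp3"    # assumed audio clip saying "blue bus"

def handle_detection(class_id, set_led=None, play_mp3=None):
    """Trigger the LED and audio when the target class is detected.

    set_led / play_mp3 are injected callables so the logic stays testable
    off-device. On the Nano you might pass, for example:
        set_led=lambda on: GPIO.output(LED_PIN, GPIO.HIGH if on else GPIO.LOW)
        play_mp3=lambda path: subprocess.Popen(["mpg123", "-q", path])
    Returns True if the target class was handled, False otherwise.
    """
    if class_id != BLUE_BUS_CLASS_ID:
        return False
    if set_led is not None:
        set_led(True)          # turn the blue LED on
    if play_mp3 is not None:
        play_mp3(MP3_PATH)     # start non-blocking playback
    return True
```

In the probe you would iterate the frame's object metadata and call `handle_detection(obj_meta.class_id, ...)` for each detection, exactly where the `# trigger audio/GPIO here` comment sits above.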