Jetson L4T voice demo container

Is there a version of the container for the voice demo:

docker pull nvcr.io/nvidia/jetson-voice:r32.4.2

that runs on L4T R32.4.3?

If I run the current one on my Jetson AGX Xavier, I get this:

docker run --gpus=all -it --rm --network host nvcr.io/nvidia/jetson-voice:r32.4.2 trtserver --model-control-mode=none --model-repository=models/repository/jasper-asr-streaming-vad/
I0922 14:47:49.755758 1 server.cc:115] Initializing TensorRT Inference Server
I0922 14:47:49.977182 1 server_status.cc:55] New status tracking for model 'ctc-decoder-cpu-trt-vad-streaming'
I0922 14:47:49.977410 1 server_status.cc:55] New status tracking for model 'feature-extractor-trt-vad-streaming'
I0922 14:47:49.977550 1 server_status.cc:55] New status tracking for model 'jasper-asr-trt-ensemble-vad-streaming'
I0922 14:47:49.977697 1 server_status.cc:55] New status tracking for model 'jasper-trt-decoder-streaming'
I0922 14:47:49.977785 1 server_status.cc:55] New status tracking for model 'jasper-trt-encoder-streaming'
I0922 14:47:49.977856 1 server_status.cc:55] New status tracking for model 'voice-activity-detector-trt-ctc-streaming'
I0922 14:47:49.978288 1 model_repository_manager.cc:675] loading: ctc-decoder-cpu-trt-vad-streaming:1
I0922 14:47:49.978979 1 model_repository_manager.cc:675] loading: feature-extractor-trt-vad-streaming:1
I0922 14:47:49.979573 1 model_repository_manager.cc:675] loading: jasper-trt-decoder-streaming:1
I0922 14:47:49.980316 1 custom_backend.cc:202] Creating instance ctc-decoder-cpu-trt-vad-streaming_0_0_cpu on CPU using libctcdecoder-cpu.so
I0922 14:47:49.980380 1 model_repository_manager.cc:675] loading: jasper-trt-encoder-streaming:1
I0922 14:47:49.980730 1 custom_backend.cc:205] Creating instance feature-extractor-trt-vad-streaming_0_gpu0 on GPU 0 (7.2) using libfeature-extractor.so
I0922 14:47:49.980840 1 model_repository_manager.cc:675] loading: voice-activity-detector-trt-ctc-streaming:1
I0922 14:47:49.982730 1 custom_backend.cc:205] Creating instance voice-activity-detector-trt-ctc-streaming_0_gpu0 on GPU 0 (7.2) using libvoice-activity-detector.so
I0922 14:47:50.031355 1 model_repository_manager.cc:829] successfully loaded 'voice-activity-detector-trt-ctc-streaming' version 1
E0922 14:47:54.824862 1 logging.cc:43] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
E0922 14:47:54.825820 1 logging.cc:43] INVALID_STATE: std::exception
E0922 14:47:54.825948 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E0922 14:47:54.826204 1 model_repository_manager.cc:832] failed to load 'jasper-trt-decoder-streaming' version 1: Internal: unable to create TensorRT engine
E0922 14:47:54.826684 1 logging.cc:43] coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
E0922 14:47:54.827467 1 logging.cc:43] INVALID_STATE: std::exception
E0922 14:47:54.828057 1 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
E0922 14:47:54.829108 1 model_repository_manager.cc:832] failed to load 'jasper-trt-encoder-streaming' version 1: Internal: unable to create TensorRT engine
I0922 14:47:58.020849 1 model_repository_manager.cc:829] successfully loaded 'feature-extractor-trt-vad-streaming' version 1

It would be good to have this container for the latest L4T release:

head -n 1 /etc/nv_tegra_release
# R32 (release), REVISION: 4.3, GCID: 21589087, BOARD: t186ref, EABI: aarch64, DATE: Fri Jun 26 04:34:27 UTC 2020
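For scripting, the release and revision can be pulled out of that line. A small sketch, assuming the field layout shown above (the `l4t_version` helper name is my own, not part of L4T):

```shell
# Hypothetical helper: extract the L4T version string (e.g. "32.4.3")
# from the first line of /etc/nv_tegra_release.
l4t_version() {
    sed -n 's/^# R\([0-9][0-9]*\) (release), REVISION: \([0-9.][0-9.]*\),.*/\1.\2/p'
}

# Example usage on a Jetson:
#   head -n 1 /etc/nv_tegra_release | l4t_version
```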

Hi @francesco.ciannella, there isn’t yet a version of this container for L4T R32.4.3, so please use L4T R32.4.2 to run it. Serialized TensorRT engines can only be deserialized by the same TensorRT version that built them, which is why the models in the image fail to load on a newer L4T with the “Version tag does not match” error.
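As a quick guard before launching, you could check that the image tag matches the host release. A minimal sketch, assuming the `rNN.N.N` tag and nv_tegra_release formats shown above (the `tag_matches_l4t` helper is hypothetical):

```shell
# Hypothetical guard: succeed only if a jetson-voice image tag like "r32.4.2"
# matches the L4T release line from /etc/nv_tegra_release.
tag_matches_l4t() {
    tag="$1"        # image tag, e.g. r32.4.2
    release="$2"    # first line of /etc/nv_tegra_release
    rel=$(printf '%s\n' "$release" | \
          sed -n 's/^# R\([0-9][0-9]*\) (release), REVISION: \([0-9.][0-9.]*\),.*/\1.\2/p')
    [ "${tag#r}" = "$rel" ]
}

# Example usage on a Jetson:
#   tag_matches_l4t r32.4.2 "$(head -n 1 /etc/nv_tegra_release)" || echo "tag/L4T mismatch"
```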

Thanks for your reply!

Do you plan to port this to L4T R32.4.3? Also, I’d be interested in a link to the GitHub repository with the Dockerfile for this container, if possible.

Thanks!

Sorry for the delay - we don’t currently plan to re-release the demo for newer L4T.

If you wanted to port BERT to a newer L4T, you could copy my BERT code from the demo in the container, or see here for what I based it on:

https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT/trt