Riva Node.js gRPC

Please provide the following information when requesting support.

Hardware - GPU (A100/A30/T4/V100) Quadro RTX 8000
Hardware - CPU Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz
Operating System - Ubuntu 18.04.5 LTS
Riva Version - 1.9 beta
TLT Version (if relevant)
How to reproduce the issue? (This is for errors. Please share the command and the detailed log here)
Hi, I was able to start communicating with the Riva server via Node.js gRPC. However, after sending the streaming config, I am getting a “Releasing idle sequence with correlation id” warning in the docker logs.
Does anyone know what that means?
Attached below is an excerpt of ‘docker logs riva-speech’:
I0329 08:11:45.462642 11748 grpc_riva_asr.cc:913] ASRService.StreamingRecognize called.
I0329 08:11:45.462730 11748 grpc_riva_asr.cc:947] ASRService.StreamingRecognize performing streaming recognition with sequence id: 2103500628
I0329 08:11:45.462811 11748 grpc_riva_asr.cc:1001] Using model riva-asr for inference
I0329 08:11:45.462989 11748 grpc_riva_asr.cc:1017] Model sample rate= 16000 for inference
I0329 08:11:45.463202 11748 riva_asr_stream.cc:213] Detected format: encoding = 1 RAW numchannels = 1 samplerate = 16000 bitspersample = 16
W:batch_slot_manager.cc:105: Releasing idle sequence with correlation id 1329347422 idle time 557587642us
W:batch_slot_manager.cc:105: Releasing idle sequence with correlation id 1968383396 idle time 510828680us


For more context, this is the NodeJS client code I am using, based on an earlier post:
Riva : Node JS examples
I also attached my config.sh file for reference :
config.sh (7.5 KB)
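In outline, the client does something like this (a minimal sketch, not my exact code; it assumes bindings generated with @grpc/proto-loader from the public riva_asr.proto, and the endpoint, file path, and chunk size are placeholders):

```js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
const fs = require('fs');

// Load the Riva ASR proto. The package path nvidia.riva.asr and the
// RivaSpeechRecognition service follow the public riva_asr.proto;
// verify against the copy shipped with your Riva release.
const packageDef = protoLoader.loadSync('riva_asr.proto', {
  keepCase: false,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});
const asr = grpc.loadPackageDefinition(packageDef).nvidia.riva.asr;

const client = new asr.RivaSpeechRecognition(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

const call = client.streamingRecognize();

// First message: the streaming config only, no audio.
call.write({
  streamingConfig: {
    config: {
      encoding: 'LINEAR_PCM',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
      maxAlternatives: 1,
    },
    interimResults: true,
  },
});

// Every following message carries a chunk of raw audio bytes.
const audio = fs.readFileSync('audio.wav');
const CHUNK = 8192;
for (let i = 0; i < audio.length; i += CHUNK) {
  call.write({ audioContent: audio.subarray(i, i + CHUNK) });
}
```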

Sounds like you might just need to close the gRPC client when done?
There’s a close() method for the RivaSpeechRecognitionClient that you create.
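Continuing from a setup like the sketch above, that would look something like this (close() on a grpc-js client is a real method; the surrounding names come from that hypothetical sketch):

```js
// Half-close the stream after the last audio chunk so the server can
// finalize the sequence instead of leaving it idle until the
// batch_slot_manager reclaims it.
call.end();

// Release the underlying channel once the RPC has fully completed.
call.on('status', () => client.close());
```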

Hey @pineapple9011, just FYI: I also had a Node.js gRPC client set up and running when you made that NodeJS example post. I did not use proto-loader as you did; I used another library to generate TypeScript bindings for the client. However, unlike you, I never ran into the issue of having to trim the extra header from the start of the WAV file.

After closing the stream and client, I now see ‘ASRService.StreamingRecognize returning OK’ in my docker logs.
Does that mean my transcription was successful, and if so, how do I get the results?
In addition, do I need to change the sample rate of my WAV files to match the one set in the streaming config? And is there a maximum supported duration for the WAV files (e.g. 3 minutes and above)?
Attached below is my ‘docker logs riva-speech’ output:
I0331 07:49:24.104351 5752 grpc_riva_asr.cc:913] ASRService.StreamingRecognize called.
I0331 07:49:24.104424 5752 grpc_riva_asr.cc:947] ASRService.StreamingRecognize performing streaming recognition with sequence id: 1590625615
I0331 07:49:24.104507 5752 grpc_riva_asr.cc:1001] Using model riva-asr for inference
I0331 07:49:24.104668 5752 grpc_riva_asr.cc:1017] Model sample rate= 16000 for inference
I0331 07:49:24.104849 5752 riva_asr_stream.cc:213] Detected format: encoding = 1 RAW numchannels = 1 samplerate = 16000 bitspersample = 16
I0331 07:49:24.944602 6046 grpc_riva_asr.cc:674] Send silence buffer for EOS
I0331 07:49:32.039855 5752 grpc_riva_asr.cc:1105] ASRService.StreamingRecognize returning OK

I never ran into the issue of having to trim the extra header from the start of the WAV file.

Interesting, what version of Riva was this?
I don’t think we had to do this in newer versions, but I never checked.
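For anyone who does hit it: the trim just means dropping everything up to and including the WAV ‘data’ chunk header so only raw PCM goes over the wire. A minimal sketch (it assumes a well-formed RIFF file; canonical headers are 44 bytes, but extra chunks can shift the offset, so searching for the ‘data’ tag is safer than hardcoding 44):

```js
const fs = require('fs');

function pcmFromWav(path) {
  const buf = fs.readFileSync(path);
  const idx = buf.indexOf('data');   // locate the 'data' chunk tag
  if (idx < 0) throw new Error('no data chunk in ' + path);
  return buf.subarray(idx + 8);      // skip 4-byte tag + 4-byte size field
}
```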

Does that mean my transcription was successful, and if so, how do I get the results?

Yup, the logs you have are what I get as well. Results should be in your response callback. If you’re using streams, it’s the data callback.
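Something like this (a sketch assuming `call` is the duplex stream returned by streamingRecognize(); `results`, `alternatives`, and `isFinal` follow the camelCased StreamingRecognizeResponse shape in riva_asr.proto, so double-check against your generated bindings):

```js
call.on('data', (response) => {
  for (const result of response.results || []) {
    const best = result.alternatives && result.alternatives[0];
    if (!best) continue;
    console.log(result.isFinal ? 'final:' : 'partial:', best.transcript);
  }
});
call.on('error', (err) => console.error('stream error:', err));
```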

Do I need to change the sample rate of my WAV files to match the one set in the streaming config

Of course! The sample rate you declare in the streaming config needs to match the audio you actually send.
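One way to catch a mismatch before streaming (a hypothetical helper; it assumes the canonical 44-byte PCM WAV header, where the sample rate sits at byte offset 24, little-endian):

```js
const fs = require('fs');

function wavSampleRate(path) {
  const header = Buffer.alloc(28);
  const fd = fs.openSync(path, 'r');
  fs.readSync(fd, header, 0, 28, 0);
  fs.closeSync(fd);
  return header.readUInt32LE(24);   // fmt chunk: sample rate field
}

if (wavSampleRate('audio.wav') !== 16000) {
  console.warn('WAV sample rate differs from the streaming config; resample first.');
}
```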

Is there a maximum supported duration for the WAV files

Yeah, I think you specify that when you build your model; check the docs. Not sure what the default examples use, though.

Thanks for the help! Managed to get it working.
