Traffic Light Classifier & Sign Classifier sample limitation

Dear Sir,
I find that the light & sign classifier sample functions have limited trained data, as listed below. Is there any method to import light & sign data for a customized region (e.g. Taiwan, Japan, China…)? Thanks.

[Light][Trained Data] United States

[Sign][Trained Data] United States and European Union

Hi,

The model needs to be retrained for a customized region.
Currently, our SignNet and LightNet focus on the US and Europe.

Sorry for the inconvenience.
Thanks

Hi AastaLLL,
Thanks for your reply. Is there any guide on how to retrain the DriveWorks LightNet & SignNet models?
Thank you.

Dear melinda.hung,
All DriveWorks DNNs are NVIDIA proprietary, so the network details are not released in the DevZone documentation.

Hi SivaRamaKrishna,
Thanks for your kind reply.

Dear SivaRamaKrishna,
If I customize a DNN model for other regions, how can I import the model into NVIDIA DriveWorks?

Dear melinda.hung,
By using the DNN Framework APIs in DW, you can integrate your DNNs into the DW framework. Please check the object detector sample in DW.
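For reference, the integration roughly looks like the sketch below. This is only a minimal sketch with error checking omitted: the function names follow the DriveWorks DNN module (dwDNN_initializeTensorRTFromFile, dwDNN_inferRaw, dwDNN_release), but the exact signatures differ between DriveWorks releases, so please follow the object detector sample shipped with your SDK version rather than copying this verbatim.

#include <dw/core/Context.h>
#include <dw/dnn/DNN.h>

// Minimal sketch, not a drop-in implementation. 'ctx' is an already
// initialized DriveWorks context; 'dInput', 'dCoverage' and 'dBBoxes' are
// CUDA device buffers prepared by your own pre-processing code.
void runCustomNetwork(dwContextHandle_t ctx,
                      const float32_t* dInput,
                      float32_t* dCoverage,
                      float32_t* dBBoxes)
{
    dwDNNHandle_t dnn = DW_NULL_HANDLE;

    // Load the TensorRT engine produced by the tensorRT_optimization tool.
    dwDNN_initializeTensorRTFromFile(&dnn, "tensorRT_model.bin", ctx);

    // Run inference. The object detector sample's network exposes two
    // output blobs, "coverage" and "bboxes".
    const float32_t* inputs[1]  = {dInput};
    float32_t*       outputs[2] = {dCoverage, dBBoxes};
    dwDNN_inferRaw(outputs, inputs, 1U, dnn);

    // Post-process the output blobs into detections (use-case specific),
    // then release the network when it is no longer needed.
    dwDNN_release(&dnn);
}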

Dear SivaRamaKrishna,
You mean tracing ./sample_object_detector, right?

Yes

Dear SivaRamaKrishna,
Thanks for your kind reply. I found a parameter (--tensorRT_model) in the ./sample_object_detector sample; it looks like I could build a customized TensorRT model binary file and pass it to this parameter. But if I do not use TensorRT to train the model (e.g. TensorFlow, Caffe, PyTorch…), is it possible to use my customized models? Are there any limitations on development? Thanks.

--tensorRT_model=[path/to/TensorRT/model]
Specifies the path to the TensorRT model file.
The loaded network is expected to have a coverage output blob named "coverage" and a bounding box output blob named "bboxes".
Default value: path/to/data/samples/detector/<gpu-architecture>/tensorRT_model.bin, where <gpu-architecture> can be either Pascal or Volta.

Dear melinda.hung,
"Are there any limitations on development?"

DW requires your model to be a TensorRT model.
"If I do not use TensorRT to train the model (e.g. TensorFlow, Caffe, PyTorch…), is it possible to use my customized models?"

You can convert your model to a TensorRT model using the tensorRT_optimization tool in DW.
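For example, a Caffe network could be converted with an invocation along these lines (the file names are placeholders for your own network, and the flag names may differ between DriveWorks releases, so please confirm against the tool's documentation or its --help output):

./tools/dnn/tensorRT_optimization --modelType=caffe \
    --prototxt=deploy.prototxt --caffemodel=model.caffemodel \
    --outputBlobs=coverage,bboxes --out=tensorRT_model.bin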

This sample gives you an idea of how to load a TensorRT model and perform inference using the DNN Framework. The pre-processing and post-processing vary for each use case.
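As an illustration of the post-processing part only, the sketch below assumes a DetectNet-style layout for the sample's output blobs: one confidence value per grid cell in "coverage" and four box coordinates per cell in "bboxes". That layout is an assumption; a customized network may organize its outputs differently, so the indexing here is purely illustrative.

#include <stddef.h>

// Hypothetical post-processing sketch. ASSUMED layout: 'coverage' holds one
// confidence per grid cell (gridH x gridW) and 'bboxes' holds 4 values per
// cell (x1, y1, x2, y2). A custom network may use a different layout.
typedef struct { float x1, y1, x2, y2, score; } Detection;

size_t extractDetections(const float* coverage, const float* bboxes,
                         size_t gridH, size_t gridW, float threshold,
                         Detection* out, size_t maxOut)
{
    size_t count = 0;
    for (size_t cell = 0; cell < gridH * gridW && count < maxOut; ++cell) {
        if (coverage[cell] < threshold)
            continue;                       // skip low-confidence cells
        out[count].x1    = bboxes[4 * cell + 0];
        out[count].y1    = bboxes[4 * cell + 1];
        out[count].x2    = bboxes[4 * cell + 2];
        out[count].y2    = bboxes[4 * cell + 3];
        out[count].score = coverage[cell];
        ++count;
    }
    return count;   // usually followed by clustering / non-maximum suppression
}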