Please provide complete information as applicable to your setup.
- **Hardware Platform (Jetson / GPU)**: GPU
- **DeepStream Version**: 6.2
- **JetPack Version (valid for Jetson only)**: 6.2
- **TensorRT Version**:
- **NVIDIA GPU Driver Version (valid for GPU only)**: 525
- **Issue Type (questions, new requirements, bugs)**: questions
I want to implement functionality similar to `nvidia::deepstream::NvDsSampleProbeMessageMetaCreation`, but using my own models and classes.
Do you have the source file for this component?
Do I have to build my own extension to achieve this kind of customized functionality, or can it be done by combining other components in Graph Composer?
If I uncheck the "generate-dummy-data" box, does that mean the fields "vehicle-type-unique-id", "car-make-unique-id", and "car-color-unique-id" won't affect the message?
And does that mean I can use this sample extension as-is to generate the payload with my own model, specified by the config file in the inference component?
If I want to use the Container Builder tool on an x86 dev platform to deploy to Jetson, do I need to run both
`bazel build …` and `build … --config=jetson` on the x86 dev platform?
BTW, how can I unregister an extension, or update an extension of the same name to a new version?
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
You can use the following commands to clean the registered extensions: `registry repo clean` and `registry cache -c`.
Please refer to the registry usage documentation: *Registry — DeepStream 6.3 Release documentation*.
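For reference, a minimal sketch of the cleanup flow described above, assuming the `registry` CLI shipped with the DeepStream Graph Composer tools is on your PATH (the two commands are the ones given in the answer; this only runs inside a DeepStream development environment):

```shell
# Clean the local extension registry so an extension with the same name
# can be re-registered at a new version.
registry repo clean   # remove registered extension repositories
registry cache -c     # clear the registry cache
```

After cleaning, register the updated extension again through the normal registration flow.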