DeepStream 5.1 Issues

I have been getting into DeepStream 5.1 on JetPack 4.5.1, installed on the Jetson Nano Developer Kit. I was able to run the majority of the included sample config files. However, every sample takes about a minute to load and begin running, after what seem to be some startup errors.

Using winsys: x11 
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine open error
0:00:05.365285782 18532     0x2c5be8c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 6]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed
0:00:05.365415994 18532     0x2c5be8c0 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 6]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine failed, try rebuild
0:00:05.365467975 18532     0x2c5be8c0 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 6]: Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
WARNING: INT8 not supported by platform. Trying FP16 mode.
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.

I get this printout regardless of which sample I run, but they all eventually run, so this is more of an inconvenience than a blocker. Still, I would like to solve it if possible.
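For context, the log suggests the delay comes from the engine being rebuilt on every launch: the config names an INT8 engine file, the Nano has no INT8 support, so an FP16 engine is built from scratch each time. A possible fix (a sketch, assuming the FP16 engine file name matches what a first run actually writes out) would be to point the relevant nvinfer config at the FP16 engine:

```ini
# Hypothetical excerpt from the secondary-gie nvinfer config [property] section.
# The engine path is illustrative; use the file your own first run produces.
network-mode=2   ; 2 = FP16, since the Nano does not support INT8
model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine
```

If the engine file can be deserialized directly, subsequent launches should skip the rebuild step.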

Ultimately, my goal is to run DeepStream pipelines using the Java GStreamer bindings. Currently only Python and C are officially supported, but I was hoping Java support wouldn't be too difficult. I was able to port test1 from the Python GitHub repository to Java, except for extracting the metadata: I can set up a probe for the metadata, but now have to actually extract the data. Essentially, I need the Java equivalent of pyds, or a Java port of the DeepStream MetaData code. I looked into using JNA to make some Java bindings to it, but I'm not sure that is the right path.
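Whatever the binding layer (JNA or otherwise), a Java port would ultimately be walking native structs field by field. As a minimal, self-contained sketch of that struct decoding, here is a decoder for a hypothetical object-meta record; the field layout below is an assumption for illustration only, and the real layout would have to be matched exactly against nvdsmeta.h, including padding.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: manually decoding a native metadata struct from raw bytes.
// NOTE: this layout is hypothetical, standing in for a DeepStream
// object-meta record; the real struct must be taken from nvdsmeta.h.
public class MetaDecoder {

    public static final class ObjectMeta {
        public int classId;
        public float confidence;
        public float left, top, width, height;
    }

    // Decode one record from a little-endian buffer (Jetson is little-endian).
    public static ObjectMeta decode(ByteBuffer buf) {
        buf.order(ByteOrder.LITTLE_ENDIAN);
        ObjectMeta m = new ObjectMeta();
        m.classId    = buf.getInt();
        m.confidence = buf.getFloat();
        m.left   = buf.getFloat();
        m.top    = buf.getFloat();
        m.width  = buf.getFloat();
        m.height = buf.getFloat();
        return m;
    }

    public static void main(String[] args) {
        // Build a fake 24-byte record and decode it.
        ByteBuffer buf = ByteBuffer.allocate(24).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(2).putFloat(0.91f)
           .putFloat(10f).putFloat(20f).putFloat(64f).putFloat(48f);
        buf.flip();
        ObjectMeta m = decode(buf);
        System.out.println("class=" + m.classId + " conf=" + m.confidence);
    }
}
```

With JNA, the same idea would be expressed as a `Structure` subclass mapped over the pointer delivered to the pad probe, rather than a manual ByteBuffer walk.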

Any ideas on how I could get started on a Java port of that library? Or maybe some other crude way of extracting metadata from the model's inference so it's usable in Java? My other option is to run a Python DeepStream process and then somehow pass the extracted metadata to Java.
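The separate-process option could be as simple as having the Python side write one line per detection over a local socket, with Java parsing each line. Below is a sketch of the Java side under that assumption; the `id,label,confidence` line format and port 5005 are invented for illustration, not anything DeepStream emits natively.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the "separate Python process" option: a Python DeepStream app
// sends one line per detection over local TCP, and this Java side parses it.
// The line format "objectId,label,confidence" is an assumption.
public class MetaBridge {

    public static final class Detection {
        public long objectId;
        public String label;
        public double confidence;
    }

    // Parse a line like "42,car,0.87" into a Detection.
    public static Detection parse(String line) {
        String[] parts = line.trim().split(",");
        Detection d = new Detection();
        d.objectId   = Long.parseLong(parts[0]);
        d.label      = parts[1];
        d.confidence = Double.parseDouble(parts[2]);
        return d;
    }

    public static void main(String[] args) throws Exception {
        // Accept one producer connection and print detections as they arrive.
        try (ServerSocket server = new ServerSocket(5005);
             Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                Detection d = parse(line);
                System.out.printf("id=%d label=%s conf=%.2f%n",
                        d.objectId, d.label, d.confidence);
            }
        }
    }
}
```

This avoids any native binding work at the cost of serializing the metadata twice, which may be acceptable if only per-object results (not frame buffers) need to cross the process boundary.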