Inspired by this comment by @miguel.taylor, I want to write custom user metadata to the object metadata. However, running the pipeline with this code in place causes it to exit with code 139, i.e. a segmentation fault (SIGSEGV). It seems there must be some problem in how memory is allocated or freed across the Python bindings. What could be the issue here, given that the same approach seems to work well for others in the linked thread?
The code used for writing user metadata to the object meta, in outline:
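This is a simplified sketch of our probe, following the pattern from the linked thread; `make_payload`, the probe name, and the payload contents stand in for our own code:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def make_payload(obj_meta):
    # Placeholder for our real payload; the production object is larger.
    return {"object_id": obj_meta.object_id, "label": obj_meta.obj_label}

def buffer_probe(pad, info, u_data):
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Acquire user meta from the pool and attach the Python payload
            user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
            user_meta.user_meta_data = make_payload(obj_meta)
            user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_USER_META
            pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```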
I have not been able to set up gdb yet. As you can see from the code, I’m working with the Python bindings and not C code, so using gdb is not as trivial. I will try to set up a debugging tool like this ASAP.
I did try to get more information by enabling debug logging with GST_DEBUG=5. Unfortunately this did not produce a clear log at the point where things go wrong. The logs are extremely long, and there was no mention of the SIGSEGV signal or code 139 in the final parts.
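For reference, this is roughly how I intend to use gdb once it is set up (the standard way to catch a native crash inside a Python process; `pipeline.py` stands in for our entry script):

```
$ gdb --args python3 pipeline.py
(gdb) run
... pipeline runs until the SIGSEGV ...
(gdb) bt
```

The backtrace from `bt` should at least show whether the crash happens inside the bindings or in a GStreamer element.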
What also makes debugging more difficult is that we had some other problems in the pipeline, because of which we had to move the demuxer to after the tracker. With this change in place the pipeline no longer crashes on every run. Instead, it usually runs to completion but still exits with code 139 instead of 0, which may hint that the crash is triggered when the metadata is copied or released (e.g. at shutdown) rather than during normal buffer flow.
I also dove deeper into this thread. From it I learned that the way custom user meta is added with the Python bindings was changed in release 1.1.10. I tried the new method, but it did not work with the Python bindings we are using. I have requested that we try to update, but that is still ongoing.
I looked into the pyds version running in our code to see if it could be upgraded to 1.1.10. Unfortunately, we cannot upgrade to any DeepStream version newer than 6.0.1, because that is the last DS version for the previous-generation Jetson Nano, which we still use. So we are pinned to pyds 1.1.1, and updating is not an option.
I would also rather not implement the copy and release interfaces in the C++ binding code, due to the added complexity. Our team does everything in Python, and in this case I would rather have it work than have it be efficient. If those callbacks can be registered from Python instead, as in the sketch below, that would be more acceptable to us.
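The only Python-side alternative I’m aware of is the callback registration that deepstream-test4 uses for NvDsEventMsgMeta on this pyds version. Whether it is valid for an arbitrary NVDS_USER_META payload is exactly what I’m unsure about; the pass-through copy below is an untested assumption:

```python
import pyds

def meta_copy_func(data, user_data):
    # Invoked when a downstream element copies the buffer's metadata.
    user_meta = pyds.NvDsUserMeta.cast(data)
    # Assumption: passing the same Python payload through is acceptable
    # as long as the release callback below frees nothing.
    return user_meta.user_meta_data

def meta_free_func(data, user_data):
    # Invoked when the metadata is released; nothing to free for a plain
    # Python object that the garbage collector owns.
    pass

def attach_with_callbacks(batch_meta, obj_meta, payload):
    user_meta = pyds.nvds_acquire_user_meta_from_pool(batch_meta)
    user_meta.user_meta_data = payload
    user_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_USER_META
    # Register copy/release so that downstream elements do not invoke a
    # NULL function pointer when they duplicate or drop the metadata.
    pyds.user_copyfunc(user_meta, meta_copy_func)
    pyds.user_releasefunc(user_meta, meta_free_func)
    pyds.nvds_add_user_meta_to_obj(obj_meta, user_meta)
```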
Regarding remark 2: the pipeline works fully without the added custom user metadata. Since the custom user metadata will be an added feature to a pipeline that is already running in production, making big changes to the pipeline would probably not be worth it.
What would be the simplifications that I should look into? The code for adding the custom metadata looks correct to me, so I’m not sure which other elements the conflict could come from.