ISP file is not being honored under docker when using Gst python bindings

Within a docker container I have:

total 424
drwxr-xr-x 2 root root   4096 Mar 20 07:23 .
drwxr-xr-x 6 root root   4096 May 12  2023 ..
-rw-r--r-- 2 root root  98553 Mar 20 07:24 camera_overrides_foo.isp
-rw-r--r-- 2 root root  98553 Mar 20 07:24 camera_overrides.isp
-rw-r--r-- 1 root root 100849 Mar 20 07:24 camera_overrides_bar.isp
-rw-r--r-- 1 root root    102 Mar 20 07:24 README.txt

This directory has been host-mounted into the container, but I don't think that should make a big difference. The ISP cache files have been removed, as you can see.

When I start the pipeline from within the docker container using the gst-launch-1.0 tool, the ISP is honored and the video looks right.

When I start the pipeline from outside of the docker container using the same command, the ISP is honored and the video looks right.

HOWEVER, when I execute a python3 script using the gstreamer Python bindings, the ISP isn't getting applied to my video output.

In all three cases, it is the exact same pipeline string. Why isn’t the ISP being honored when I use gstreamer Python bindings from within a container instead of using gst-launch-1.0 directly?
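
For completeness, this is roughly how the pipeline is started from Python; a minimal sketch, with the pipeline string as a placeholder for the real one (identical to what is passed to gst-launch-1.0):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

PIPELINE = "nvarguscamerasrc ! fakesink"   # placeholder

Gst.init(None)
pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)
# Run until error or EOS, then tear down.
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)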

In fact, I can even see syslog show:

OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
=== python3[23]: CameraProvider initialized (0x7f6cb7cd10)CAM: serial no file already exists,

But if I stop the pipeline, change the ISP file, and restart it using Python GST bindings, the new ISP file is not honored, i.e. I don’t see the messages.

Here is a sequence of two pipeline starts with the ISP file changed at run time:

~$ sudo /usr/sbin/nvargus-daemon
[sudo] password for alex:
=== NVIDIA Libargus Camera Service (0.97.3)=== Listening for connections...=== python3[23]: Connection established (7F737291D0)OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module0
OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module1
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
=== python3[23]: CameraProvider initialized (0x7f6cb7cd10)CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

PowerServiceCore:handleRequests: timePassed = 805

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

PowerServiceCore:handleRequests: timePassed = 8050
PowerServiceCore:handleRequests: timePassed = 817
=== python3[23]: CameraProvider destroyed (0x7f6cb7cd10)=== python3[23]: Connection closed (7F737291D0)=== python3[23]: Connection cleaned up (7F737291D0)

# R32 (release), REVISION: 5.2, GCID: 27767740, BOARD: t210ref, EABI: aarch64, DATE: Fri Jul 9 16:01:52 UTC 2021

If you change the ISP file while nvargus-daemon is running, will it always pick it up? There seems to be some flawed caching logic in use that keeps it from seeing a file change. I've even tried to touch(1) the file, in case it is looking at mtimes, and it still skips it.

Again, the sequence within the application is as follows (a minimal sketch follows the list):

  1. Start the pipeline from Python using the gst bindings (build the pipeline of elements, set the PLAYING state, etc.)
  2. Stop it completely (catch EOS, wait for cleanup, and remove the pipeline from memory)
  3. Change the ISP file to foo or bar as appropriate (I've tried hardlinking as well as copying, and ensured it is owned by root:root with mode 0644), then remove the nvcam_cache_* and serial_no_* files
  4. Reallocate the Gst pipeline (again using the Python gst bindings) and set it to the PLAYING state
  5. The ISP file is not loaded by nvargus-daemon; it appears to reuse the previous one
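
A minimal sketch of that cycle, with the pipeline string as a placeholder and the ISP swap reduced to a single copy (permissions and cache cleanup omitted here):

import shutil
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

PIPELINE = "nvarguscamerasrc ! fakesink"   # placeholder for the real pipeline string
SETTINGS = "/var/nvidia/nvcam/settings"

def run_once(seconds):
    # Steps 1-2: parse_launch -> PLAYING -> EOS -> NULL -> drop the reference.
    pipeline = Gst.parse_launch(PIPELINE)
    pipeline.set_state(Gst.State.PLAYING)
    time.sleep(seconds)
    pipeline.send_event(Gst.Event.new_eos())
    pipeline.get_bus().timed_pop_filtered(
        5 * Gst.SECOND, Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)
    del pipeline   # remove the pipeline from memory

Gst.init(None)
run_once(10)
# Step 3: swap the override file between runs (foo/bar as appropriate).
shutil.copy(f"{SETTINGS}/camera_overrides_foo.isp", f"{SETTINGS}/camera_overrides.isp")
# Steps 4-5: the new ISP should be picked up by this second launch, but it is not.
run_once(10)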

The second time, the ISP file is NOT loaded, as you can see from the syslog output above (you don't see a second pair of Found override file ... messages; there are two sensors sharing the same overrides file).

It smells like nvargus-daemon is somehow re-using a stale connection to the ISP?!

A small follow-up:

So I wrote a Python application that would start a pipeline for 10 seconds, stop the whole pipeline, swap ISP files, start the pipeline again for 10 seconds, rinse, repeat. I did this 100 times and nvargus-daemon never loaded another ISP file despite it being changed repeatedly. Basically, I see:

Mar 30 11:10:29 D200-0A8 nvargus-daemon[4562]: ---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
Mar 30 11:10:29 D200-0A8 nvargus-daemon[4562]: ---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----

…in /var/log/syslog but despite me starting/stopping my pipeline and swapping ISP files, I never see that message again. Can someone from NVIDIA tell me what I need to do to guarantee a fresh ISP file is loaded on every pipeline start? Is there some kind of caching interval I should be aware of?

Please confirm the ISP file mode is 664 or 644

Thanks

@ShaneCCC It is. I tried both (see the output above). Plus, nvargus-daemon complains in syslog otherwise.

Is there some kind of caching interval for the ISP? Can you tell me the EXACT steps needed to ensure a fresh ISP is loaded before a pipeline start via the Gst python bindings? (I actually don't think this has to do with Python vs. C vs. shell; it seems to be related to some kind of session caching between the initial load of nvargus-daemon and the ISP.) The application will not be shut down, but the pipeline within it will be before the next start-up.

Note that if I EXIT my application completely, nvargus-daemon loads the ISP file on the next startup. It smells like this version of nvargus-daemon caches the ISP over some interval, but I haven't figured out what. I would ASSUME the Found override file log message is printed on every ISP load, right?
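
For now, the only check I have is to watch syslog for that message around each pipeline start. A rough sketch (the helpers are mine, and it assumes /var/log/syslog is readable from wherever the script runs):

import os

SYSLOG = "/var/log/syslog"
MARKER = "Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]"

def syslog_offset():
    # Remember how far syslog has been written before starting the pipeline.
    return os.path.getsize(SYSLOG)

def isp_loaded_since(offset):
    # Did nvargus-daemon log a fresh "Found override file" line since then?
    with open(SYSLOG, "r", errors="replace") as f:
        f.seek(offset)
        return any(MARKER in line for line in f)

# offset = syslog_offset()
# ... start the pipeline ...
# print("ISP reloaded" if isp_loaded_since(offset) else "ISP NOT reloaded")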

Is there anything in the ISP itself that could cause this? Both ISPs were delivered to us from D3.

How do you run the docker container?

@ShaneCCC

docker run --privileged --runtime=nvidia --device=/dev/video0 --device=/dev/video1 -v /var/nvidia/nvcam/settings:/var/nvidia/nvcam/settings --mount source=/tmp/argus_socket,target=/tmp/argus_socket,type=bind ...

There are more envs and some other mounts, but nothing related to what we are discussing.

Can you please tell me what would cause nvargus-daemon to ignore the ISP file?

The first time, I see this in syslog:

Apr  1 08:12:06 nvargus-daemon[4562]: ---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
Apr  1 08:12:06 nvargus-daemon[4562]: ---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
Apr  1 08:12:07 nvargus-daemon[4562]: === python3[391]: CameraProvider initialized (0x7f72cb07a0)CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0
Apr  1 08:12:07 nvargus-daemon[4562]: NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0
Apr  1 08:12:07 nvargus-daemon[4562]: CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0
Apr  1 08:12:07 nvargus-daemon[4562]: NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

But the second time I restart it I only see:

Apr  1 08:12:27 nvargus-daemon[4562]: NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0
Apr  1 08:12:27 nvargus-daemon[4562]: message repeated 5 times: [ NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0]
Apr  1 08:12:28 nvargus-daemon[4562]: PowerServiceCore:handleRequests: timePassed = 8064
Apr  1 08:12:29 nvargus-daemon[4562]: PowerServiceCore:handleRequests: timePassed = 801

I've also changed the mount to -v /tmp:/tmp instead of bind mounting the socket directly, in an effort to deal with any possible stale connections to the nvargus-daemon socket file.

What am I doing wrong?

More interesting things:

So I tried restarting nvargus-daemon on the host while the application is running but not recording, after the pipeline has been fully stopped (EOS has been received, the state is set to NULL, and the pipeline object ref has been dropped, i.e., set to None, which should clean it up).

I see this:

(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)

So shouldn't the current session CLOSE the socket connection after the pipeline has fully stopped, and open a new one? I'm just wondering, based on the log messages above, if the ISP file load is triggered by some client connection pooling you've implemented within nvargus-daemon.

I've also tried moving the ISP files to another directory and copying them over so that the only file present at any given moment is /var/nvidia/nvcam/settings/camera_overrides.isp, but nvargus-daemon refuses to load any new version of it until my application exits. I am at a loss.

EDIT: Tried to run the container in --network host mode to see if that would make any difference. It did not.

@ShaneCCC Is there any way to verify that nvargus-daemon loaded an ISP on pipeline startup?

Here is more debugging from nvargus-daemon itself running in debug mode:

OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module1
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
=== python3[4065]: CameraProvider initialized (0x7f79f95290)CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

PowerServiceCore:handleRequests: timePassed = 818
NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

PowerServiceCore:handleRequests: timePassed = 8020
PowerServiceCore:handleRequests: timePassed = 802

Notice it doesn't load the camera_overrides.isp file a second time, despite the fact that these logs cover a pipeline stop, a copy of the new ISP file, and then a new pipeline being created and started.

One thing to note for engineering: the CameraProvider address never changes despite different pipelines being destroyed and reallocated using the Python Gst bindings. Is it possible that the ISP file was loaded as part of the first CameraProvider initialization and is then reused, so the ISP file is not reloaded despite the fact that its contents have changed between stop/start cycles?

Usually the ISP file should load on each gst launch, for each camera.

@ShaneCCC When you say launch, I assume you mean Gst.parse_launch(), which is what I am calling on each new pipeline creation (followed by set_state(Gst.State.PLAYING), etc.). Yet the overrides file is not loading (note that if I quit my application and restart it, the ISP file does load, so it's not the ISP file itself).

This really feels like some kind of caching bug within nvargus-daemon. Can you please tell me what scenarios, if any, would cause nvargus-daemon to NOT load /var/nvidia/nvcam/settings/camera_overrides.isp? Is there a programmatic way to force the overrides file to load, or to check that it HAS been loaded?

What I mean is opening the camera.

Enable the logs to check for more information:

sudo service nvargus-daemon stop
sudo su
export enableCamPclLogs=5
export enableCamScfLogs=5
/usr/sbin/nvargus-daemon

Launch the camera from the docker console.

@ShaneCCC The logs above are from doing exactly that; did you look at them? (Both Pcl and Scf logs are enabled.)

What do you mean launch from the docker console? nvargus-daemon is launched outside the container (the logs are from me launching it on the host in a shell, I guess that is what you mean).

Here are the logs again of two start/stop cycles with the ISP file changed underneath:

=== python3[13835]: Connection established (7F819E01D0)OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module0
OFParserListModules: module list: /proc/device-tree/tegra-camera-platform/modules/module1
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
NvPclHwGetModuleList: WARNING: Could not map module to ISP config string
NvPclHwGetModuleList: No module data found
OFParserGetVirtualDevice: NVIDIA Camera virtual enumerator not found in proc device-tree
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
---- imager: Found override file [/var/nvidia/nvcam/settings/camera_overrides.isp]. ----
=== python3[13835]: CameraProvider initialized (0x7f7dfa01d0)CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

CAM: serial no file already exists, skips storing againNvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

PowerServiceCore:handleRequests: timePassed = 806
NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

NvIspAfConfigParamsSanityCheck: Error: positionWorkingHigh is not larger than positionWorkingLow positionWorkingHigh = 0, positionWorkingLow = 0

PowerServiceCore:handleRequests: timePassed = 8033
PowerServiceCore:handleRequests: timePassed = 801
=== python3[13835]: CameraProvider destroyed (0x7f7dfa01d0)=== python3[13835]: Connection closed (7F819E01D0)=== python3[13835]: Connection cleaned up (7F819E01D0)

Notice that the CameraProvider address in CameraProvider destroyed (0x7f7dfa01d0) never changes throughout the stop/start cycles. It is as if, once the first pipeline starts, nvargus-daemon doesn't give up the connection, despite the fact that python3[13835] is shutting down and re-starting pipelines constantly (and swapping camera_overrides.isp underneath it).

My guess is that there is some kind of mapping between the ISP load and the CameraProvider, which is pooled per python pid. But you are going to have to give me more information; I think I have provided quite a lot at this point.

Here are some debug messages from the nvarguscamerasrc component on shutdown right before we swap camera_overrides.isp file and start a new pipeline via Gst.parse_launch():

0:00:15.479960738 15562     0x3a624190 DEBUG       nvarguscamerasrc gstnvarguscamerasrc.cpp:1482:argus_thread:<nvargus1> argus_thread: stop_requested=1

0:00:15.480008447 15562     0x3a623f20 DEBUG       nvarguscamerasrc gstnvarguscamerasrc.cpp:1596:consumer_thread:<nvargus1> consumer_thread: stop_requested=1

CONSUMER: Done Success
GST_ARGUS: Cleaning up
GST_ARGUS: Done Success
0:00:15.593708137 15562   0x7f7c003c00 DEBUG       nvarguscamerasrc gstnvarguscamerasrc.cpp:1596:consumer_thread:<nvargus0> consumer_thread: stop_requested=1

0:00:15.593708137 15562   0x7f7c003e30 DEBUG       nvarguscamerasrc gstnvarguscamerasrc.cpp:1482:argus_thread:<nvargus0> argus_thread: stop_requested=1

0:00:15.610876957 15562   0x7f30009720 DEBUG       nvarguscamerasrc gstnvarguscamerasrc.cpp:1954:gst_nv_argus_camera_src_finalize:<nvargus1> finalize
0:00:15.611427905 15562   0x7f30009720 DEBUG       nvarguscamerasrc gstnvarguscamerasrc.cpp:1954:gst_nv_argus_camera_src_finalize:<nvargus0> finalize

At this point, the overrides file is swapped and the NEW pipeline is allocated and set to the PLAYING state. But again, as you can see, nvargus-daemon does not log Found override file [...] a second time. Shouldn't I see that on every pipeline launch?

@ShaneCCC NVIDIA has a bug:

I finally found a way to get nvargus-daemon to load the ISP every single time: put the gstreamer recording pipeline in its own process. This forces whatever connection pooling you are doing (I will bet it is some kind of CameraProvider session tied to the python3 process id) to establish a new socket connection to nvargus-daemon, and bingo, I see the ISP file loaded (the pid of python3 obviously changes).

Something is definitely wrong with the way you are doing connection pooling, as it is impacting the way the ISP file is loaded. It should make no difference whether the pipeline is recreated within the same process, in multiple processes, or even in multiple threads. But it does, and that's a bug.

Note that sometimes keeping it within the same process works; I can only imagine this is related to the bug itself (maybe the timing of the socket closing, or something else like the timing of how Python cleans up memory; all speculation on my part…).

Sorry, I'm not clear on your test.
Could you provide detailed step-by-step reproduction information to help me understand the issue?

Thanks

@ShaneCCC.

  1. Start a pipeline in a Python script (any one that starts with nvarguscamerasrc will do)
  2. Stop the pipeline (not the application), i.e., just send EOS and clean up
  3. Switch the /var/nvidia/nvcam/settings/camera_overrides.isp file to another one using Python (e.g., shutil.copy, then use os.chmod and os.chown to ensure proper perms; see the sketch after this list)
  4. Clean up all cache files (serial* and nvcam* files in /var/nvidia/nvcam/settings/)
  5. Start a new pipeline (any one that starts with nvarguscamerasrc will do) within the same Python3 script (the script should not have exited): pipeline = None, then Gst.parse_launch(), set to Gst.State.PLAYING again, etc.
  6. Verify that nvargus-daemon doesn't load the new ISP file; the logs above suggest it is reusing the same CameraProvider object that was established when the first pipeline was started and stopped, and that this provider object is still linked to the python3 pid of the application itself
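
For concreteness, steps 3-4 look roughly like this in my code (the source file name is a placeholder):

import glob
import os
import shutil

SETTINGS = "/var/nvidia/nvcam/settings"
SRC = os.path.join(SETTINGS, "camera_overrides_foo.isp")   # or _bar, as appropriate
DST = os.path.join(SETTINGS, "camera_overrides.isp")

# Step 3: swap the override file and make sure ownership/mode are correct.
shutil.copy(SRC, DST)
os.chown(DST, 0, 0)    # root:root
os.chmod(DST, 0o644)

# Step 4: remove the cache/serial files.
for cached in glob.glob(os.path.join(SETTINGS, "serial*")) + \
              glob.glob(os.path.join(SETTINGS, "nvcam*")):
    os.remove(cached)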

Whatever group is responsible for nvargus-daemon, can you please just send them this thread and request a comment?

The only way I have been reliably able to switch the camera_overrides.isp file as part of my running Python3 application is to put the gstreamer pipeline code under its own pid using multiprocessing, which ensures that nvargus-daemon sees a different pid at the time of Gst.parse_launch(). At least that is what I observe.
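
A minimal sketch of that workaround, with the pipeline string as a placeholder; the only point is that every pipeline gets its own pid:

import multiprocessing as mp
import time

PIPELINE = "nvarguscamerasrc ! fakesink"   # placeholder

def record(seconds):
    # Import and initialize Gst inside the child so nothing Argus-related
    # ever lives in the parent process.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    Gst.init(None)
    pipeline = Gst.parse_launch(PIPELINE)
    pipeline.set_state(Gst.State.PLAYING)
    time.sleep(seconds)
    pipeline.send_event(Gst.Event.new_eos())
    pipeline.get_bus().timed_pop_filtered(
        5 * Gst.SECOND, Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    for _ in range(2):
        p = mp.Process(target=record, args=(10,))   # fresh pid per pipeline
        p.start()
        p.join()
        # swap camera_overrides.isp between runs here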

Try restarting nvargus-daemon with the commands below before step 5:

sudo service nvargus-daemon stop
sudo service nvargus-daemon start 
sleep 5

I tried that, but the application dies because the socket closes underneath it. That's just one of the debugging techniques I used to notice that nvargus-daemon is not cleanly closing the session after step 4.

The error when restarting nvargus-daemon while my application is running:

(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)

Basically I did:

  • Steps 1-4, then have my application do a time.sleep(10) before the next pipeline is allocated and played
  • In that ten seconds, sudo systemctl restart nvargus-daemon in another console
  • The error message immediately appears in syslog after the restart completes (note that pipeline = None and the pipeline has been completely shut down)
  • Step 5 happens, and when the new pipeline starts it dies with a socket exception

Again, that is a clue that nvargus-daemon still has a stale connection to the pipeline that just closed. I will try something else along the same lines that I thought of (EDIT: same thing; nvargus-daemon just won't let go of the CameraProvider).

I'd still like an answer on how I can KNOW that the right ISP was loaded. How do I check that?

@ShaneCCC Is there an update here?

Why do you need to change the ISP dynamically?

Thanks

Because we have both day and night tunes for our application (lights vs. IR), switched based on a schedule.

Is there a way to set some environment variable that nvargus-daemon will see to force a reload on every pipeline start? What do you propose as a solution, outside of rewriting my entire application to use separate pids (one python3 process per gstreamer pipeline)?

I suppose the ISP file should load for each camera open. I don't know why the Python process has different behavior.