The CUDA with NITROS page includes a link to an example custom NITROS string encoder and decoder. You can see in the encoder that the string is placed into GPU memory with calls to cudaMalloc() and cudaMemcpy(), as you would expect. However, it is not clear to me when that GPU memory is freed via something like cudaFree(). Do users of NITROS need to free the GPU memory in the final, downstream ROS node? Or does NITROS manage this memory in some way? And if NITROS is managing this memory, when will it decide to free it?
Any answer to this?
Hi all
We attach the buffer to an internal message entity sourced from an internal entity pool. This message entity maintains a reference count (refcount), which it increments each time a copy is created and decrements when a copy is destroyed. Once the refcount reaches zero, the entity pool automatically destroys the contents of the entity, including freeing any attached buffers.
In short, we handle the freeing of the memory automatically; you don’t need to worry about it.
Best,
Raffaello
Thanks for your response @Raffaello. Does this mean that you can have multiple subscribers to a NITROS topic and the underlying memory will still be freed appropriately?
@Raffaello - Any comments on the above question? Thanks!
@Raffaello
How does memory management work in the case of a NitrosImage publisher and a NitrosImageView subscriber?
How can I keep a copy of (or pointer to) the underlying NitrosImage while handling an incoming NitrosImageView, so that the underlying memory is not freed after the incoming NitrosImageView goes out of scope?
Following this example on github from Nvidia: