Hello,
I have a UsdGeomPoints object with ~3 million points. I am trying to assign colors by changing the primvars:displayColor attribute of the UsdGeomPoints object using the USD library directly, i.e. getting the primvars:displayColor attribute and setting the values via UsdAttribute.Set. The points and displayColor arrays have the same size, ~3 million color elements.
However, the primvars:displayColor change won't complete and the console displays the following error:
2024-11-22 18:01:16 [27,102ms] [Error] [rtx.scenedb.plugin] Wrong buffer size: attempted to upload 92182576 bytes to a 46091296 buffer
Do you have any suggestions on how to solve this issue?
Thanks,
P.S. I am using Kit 106.4 on an example application created with kit-app-template
I think we have seen this kind of error with primvars:displayColor before. Can you first try disabling FSD (Fabric Scene Delegate) in Preferences > Rendering, and then restart the scene? Make sure it stays off, and then try again.
Thanks for the response @Richard3D
FSD is disabled for this use case (~3 million points); I verified it. I cleared the Kit cache and set settings.app.useFabricSceneDelegate = false in the .kit file before launching the test. It doesn't work when FSD is enabled either (it throws the same error).
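For reference, the same flag can presumably also be toggled at runtime through carb.settings; the settings path below is inferred from the .kit key above, so treat it as an assumption (a restart may still be needed for it to take effect):
import carb.settings

settings = carb.settings.get_settings()
# Setting path assumed to mirror the .kit entry app.useFabricSceneDelegate
settings.set_bool("/app/useFabricSceneDelegate", False)
print(settings.get("/app/useFabricSceneDelegate"))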
Edit: A quick comment about my tests with similar point clouds of different sizes:
- When I use a point cloud (as UsdGeomPoints) with ~1.5 million points, setting primvars:displayColor works perfectly. No errors, no problems.
- If the point cloud has ~3.0 million points (could be 2.8M, 2.9M), the above error shows up on the console.
In all cases I use the same code and set primvars:displayColor directly on the UsdGeomPoints object.
Ok, let me ask the expert devs on this
Would you mind sharing the code that you are trying to use? Something does not seem right here. You are attempting to load literally double the amount of memory into the allocated buffer. Did you mean to do that? Did you set the buffer smaller manually?
Here is some sample code that I can share:
from pxr import Usd, UsdGeom, Sdf, Vt
import omni.usd
import numpy as np
import random
stage = omni.usd.get_context().get_stage()
# I have UsdGeomPoints object with points of size 2_880_704
prim = stage.GetPrimAtPath("/World/PC/Layer_0")
pts = UsdGeom.Points(prim)
pts_size = pts.GetPointCount()
print(pts_size)
# Randomly generating color
arr_pre = [[random.random() for _ in range(3)] for _ in range(pts_size)]
arr = np.array(arr_pre, dtype="float32")
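# Note: the pure-Python comprehension above is slow for ~3 million points; a
# vectorized equivalent producing the same float32 array would be:
#   arr = np.random.rand(pts_size, 3).astype(np.float32)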
attr = prim.GetAttribute("primvars:displayColor")
attr.Set(Vt.Vec3fArray.FromNumpy(arr))
For this code, I see the following error on the console:
2024-11-22 23:19:41 [Error] [rtx.scenedb.plugin] Wrong buffer size: attempted to upload 46091344 bytes to a 46091328 buffer
I have also tried using the UsdGeom.PrimvarsAPI (my main code uses it to set vertex interpolation), e.g.
# Replace last two lines of the above code with this
pv_api = UsdGeom.PrimvarsAPI(prim)
pv = pv_api.GetPrimvar("displayColor")
pv.Set(Vt.Vec3fArray.FromNumpy(arr))
pv.SetInterpolation(UsdGeom.Tokens.vertex)
In this case, I see the following error:
Wrong buffer size: attempted to upload 92182576 bytes to a 46091296 buffer
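Another variation of the same idea would be to declare the value type and interpolation up front with CreatePrimvar and only then author the values, so the primvar is never left with its fallback interpolation while it already holds per-point data. A minimal sketch, reusing prim and arr from the snippet above:
from pxr import Sdf, UsdGeom, Vt

pv_api = UsdGeom.PrimvarsAPI(prim)
# Declare the value type and vertex interpolation first, then author the values
pv = pv_api.CreatePrimvar("displayColor",
                          Sdf.ValueTypeNames.Color3fArray,
                          UsdGeom.Tokens.vertex)
pv.Set(Vt.Vec3fArray.FromNumpy(arr))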
Regarding the buffer size, could you share the configuration variable that sets this buffer? I could definitely check it in my app. I have been testing the above code on a Kit installation from the Omniverse Launcher, but I am still getting the same errors in both the PrimvarsAPI and direct attribute-set cases.
I see some additional errors on Windows:
2024-11-26 17:36:30 [55,993ms] [Error] [rtx.scenedb.plugin] Wrong buffer size: attempted to upload 92182576 bytes to a 46091296 buffer
2024-11-26 17:36:30 [56,062ms] [Error] [carb.graphics-direct3d.plugin] HRESULT: 0x80070057
2024-11-26 17:36:30 [56,063ms] [Error] [carb.graphics-direct3d.plugin] CommandList::Close() failed.
2024-11-26 17:36:30 [56,063ms] [Error] [gpu.foundation.plugin] Failed to end graphics command list in command list "CommandListManager command list (Render queue, device 0)". Active Annotation:
2024-11-26 17:36:30 [56,064ms] [Error] [rtx.scenedb.plugin] SceneDB command list failed during recording. Aborting uploads this frame.
I see it mentions float32 in the code. Are you possibly trying to set double-precision (64-bit) colors on a single-precision (32-bit) color attribute? I've seen folks make that mistake previously. Make sure you are using Gf.Vec3f or Color3f for the input (not Gf.Vec3d or Color3d).
I am using Vt.Vec3fArray, and the documentation says:
Vec3fArray: An array of type GfVec3f.
Ref: Vt module — Omniverse Kit 106.4.0 documentation
For the NumPy arrays, I make sure the data type is float32, not float64.
I've also tried converting the NumPy arrays manually to a list of Gf.Vec3f, e.g.
from pxr import Gf
arr2 = [Gf.Vec3f(float(i[0]), float(i[1]), float(i[2])) for i in arr]
and
attr.Set(Vt.Vec3fArray(arr2))
However, this didn't work either.
I am sorry that didn’t work. Let me see if I can get advanced dev tech to help you out. Thanks.
If you are setting everything to the same color, then there's an easier way:
displayColorAttr.Set([(0.5,0.5,0.5)])
One of the lead developers here gave me this code for you to try:
from pxr import Usd, UsdGeom, Vt
import omni.usd
import random
def rand_col():
    return random.uniform(0.0, 1.0)
context = omni.usd.get_context()
stage = context.get_stage()
sel = context.get_selection()
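# Authors a single color element per selected prim (one value for the whole prim, not one per point)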
for prim_path in sel.get_selected_prim_paths():
    prim = stage.GetPrimAtPath(prim_path)
    prim_geom = UsdGeom.Gprim(prim)
    displayColorAttr = prim_geom.GetDisplayColorAttr()
    displayColorAttr.Set([(rand_col(), rand_col(), rand_col())])
Thanks @Richard3D!
I have been experimenting a bit and I just tested the following.
attr = prim.GetAttribute("primvars:displayColor")
attr.Set(Vt.Vec3fArray.FromNumpy(np.array([[0, 0, 0]], dtype="float32")))
and I see the exact same error. I think this error is related to the size of the point cloud.
I haven’t tested the script you shared, but I think the above snippet serves the same purpose.
Since I'd like to set a different color for each individual point and use vertex interpolation, the size of the points array and the size of the primvars:displayColor array should be the same, if I understand the documentation correctly; a quick sanity check is below.
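As a sanity check along those lines, the point count, the color element count, and the authored interpolation can be compared directly (assuming the same stage and prim path as in the earlier snippet):
from pxr import UsdGeom

pts = UsdGeom.Points(stage.GetPrimAtPath("/World/PC/Layer_0"))
pv = UsdGeom.PrimvarsAPI(pts.GetPrim()).GetPrimvar("displayColor")
# For vertex interpolation, the first two numbers should match
print(len(pts.GetPointsAttr().Get()), len(pv.Get()), pv.GetInterpolation())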
I am wondering if it is possible to achieve the same result (i.e. individual color value for each point) using a shader.
I think there are two things going on here. One is getting the right code and the other is getting the right data size. In order to get to the bottom of the problem, let's reduce the data. So what I want you to do is get a much smaller point cloud, then run it through your code and get that working. Then optimize the code and make sure everything is working correctly. Then increase the data until something breaks; that is the point where your code and your hardware may be limited. Does that make sense? In other words, let's get the code working perfectly so we know it works, and see what data sets we can get working first. A rough sketch of such a sweep is below.
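A sketch of one way to run that sweep, assuming synthetic data is acceptable for the test (the prim paths and sizes below are placeholders, and the renderer error, if any, shows up asynchronously on the console rather than as a Python exception):
from pxr import Sdf, UsdGeom, Vt
import omni.usd
import numpy as np

stage = omni.usd.get_context().get_stage()
for count in (1_500_000, 2_000_000, 2_500_000, 2_800_000):
    path = f"/World/SweepTest_{count}"  # placeholder prim path
    pts = UsdGeom.Points.Define(stage, path)
    pos = np.random.rand(count, 3).astype(np.float32) * 100.0
    col = np.random.rand(count, 3).astype(np.float32)
    pts.GetPointsAttr().Set(Vt.Vec3fArray.FromNumpy(pos))
    pv = UsdGeom.PrimvarsAPI(pts.GetPrim()).CreatePrimvar(
        "displayColor", Sdf.ValueTypeNames.Color3fArray, UsdGeom.Tokens.vertex)
    pv.Set(Vt.Vec3fArray.FromNumpy(col))
    print(f"Authored {count} points at {path}")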
I had a chance to test the code with 1.5 million points on a single UsdGeomPoints object and it works without any issues.
The example with the problem is 2.8 million points on a single UsdGeomPoints object.
However, I don't have any data between 1.5 million and 2.8 million points on a single UsdGeomPoints object. I can run a simple test to see where it breaks.
The GPUs I am testing the point clouds on are:
- Nvidia RTX A1000 6 GB laptop GPU (Windows, driver 560.76 studio)
- Nvidia RTX A4000 16 GB desktop GPU (Ubuntu 22.04, driver 550.120)
Because the 2.8-million-point set fails on both GPUs with the exact same errors, I thought the problem might not be related to the hardware.
I’ll get back to you as soon as possible with the tests.
Ah yes, they are not the biggest cards we have. Certainly, the A1000 is not suitable. You would really need an A6000 48GB to really push it.
Although the hardware seems to be the easiest thing to blame, if we summarize the above errors, one might ask the following questions:
- The buffer size the above error mentions is 46091296 bytes, which corresponds to 43.95 megabytes. I am wondering why the buffer size is that low and how we can set the system to use, say, 100 megabytes of buffer.
- When vertex interpolation is set on the displayColor primvar, it looks like OV is trying to upload 92182576 bytes, which corresponds to 87.91 megabytes. I think increasing the buffer size would solve this issue as well.
- I am also wondering what changes on the RTX renderer side when I use vertex interpolation, and why the data being uploaded becomes double that of the default interpolation method (92182576 / 46091296 ≈ 2).
- I haven't found any configuration variable to control any of these buffers in the RTX Renderer documentation. Is there any way to change a variable to optimize what is being rendered? (I also tried Omniverse Flow by following the example in its documentation, but I couldn't make it work.)
- Interestingly, reloading the stage when the displayColor primvar is already set works just fine. Setting the displayColor primvar while the stage is open causes the errors above. Why would this happen?
An Nvidia RTX A6000 48GB GPU is very, very expensive hardware, and I need to provide answers to the person who needs to approve the purchase. They will definitely ask me a lot of questions, especially the ones above. Otherwise, I am pretty sure they would not be happy with the performance of the Omniverse platform, and they would plan to move away from it. I personally would not want that to happen.
I’d be very happy if you could help me to answer the questions above.
Why don't you send me the complete project so we can take a look and test it on our hardware first? Send us the complete USD file, the data file, and everything else we need to fully open the scene. Then we can see why it's failing. However, if it works with smaller data and not with bigger data, that would suggest a memory issue. Do you have another way to borrow an A6000 for a test? Does your workflow REQUIRE this 2.8-million-point data?