I’ve been working with Omniverse for the last few weeks and I’m trying to make an interactable scene. I’m embedding a 3D button panel into the 3D space. Think of the lightbulb demo in the Action Graph Samples Walkthrough.
The ability to interact with (pick) a prim without actually selecting it, so that I can trigger code, is exactly what I’m looking for, but without Action Graph. All of my other logic is written in behavior scripts (Python), and I’d like to incorporate this functionality there.
I’ve done a bunch of digging and tried a handful of ideas around carb events but I haven’t had any luck. I even looked at the underlying C++ in the On Picked Node, but ran out of ideas… Can anyone lend a hand?
I’d suggest using stage event subscriptions to determine which prims have been selected.
Essentially one can interact with a stream of events by registering a callback, then doing a check on the event type. In your case I believe you’d want to check for omni.usd.StageEventType.SELECTION_CHANGED. From there, you can check which prims are selected and filter as necessary, to trigger the action needed.
Now, about the action needed. In the case of changing color, you might be able to simply change the display color attribute of the prim. Ultimately this might get into more of a design question of where to place that logic depending on reuse needs, etc.
See Michael Wagner’s post here. Let me know if that helps! I can help try to write a more directly applicable snippet for your problem if you run into issues.
This is basically what I’m doing already. Leveraging the existing stage events mechanism definitely works, but it’s actually what I’m trying to move away from.
From my experience, there’s a fair amount of overhead involved in clicking a prim: the stage browser expands to show the selected prim, and the prim itself gets highlighted in the 3D scene. When my scene is “running” I don’t need/want any of that. I just want to report which prim the mouse is overlapping when the left mouse button is pressed.
In that Action Graph Samples Walkthrough video (at the 3:30 mark), running the scene locks down all of the Create UI and simply detects the “On Pick” to trigger the next event.
I think I have this figured out, though it’s a bit of a roundabout solution.
Two things.
First, on play, set the scene picking mode to NONE. This blocks prims from being selected via a normal left click.
```python
import carb.settings

# Picking Mode Definitions
ALL = "type:ALL"
MESH = "type:Mesh"
CAMERA = "type:Camera"
LIGHTS = "type:CylinderLight;type:DiskLight;type:DistantLight;type:DomeLight;type:GeometryLight;type:Light;type:RectLight;type:SphereLight"
NONE = "type:None"

def on_play(self):
    # Clear selection on play
    self._usd_context.get_selection().clear_selected_prim_paths()
    # Set picking mode (swap NONE for ALL, MESH, CAMERA, or LIGHTS as needed)
    carb.settings.get_settings().set("/persistent/app/viewport/pickingMode", NONE)

def on_stop(self):
    # Reset picking mode
    carb.settings.get_settings().set("/persistent/app/viewport/pickingMode", ALL)
```
Second, leverage viewport_api.request_query() to detect prims that are clicked in the viewport. Note: the code below is pretty rough. I’m sure there are far better ways to get the NDC mouse position in the viewport, but this works.
```python
import carb
import carb.input
import omni.kit.viewport.utility as vp_utils
from pxr import Gf

def _subscribe_to_left_click(self):
    print("subscribed to mouse callback")
    # self._mouse, self.appwindow, and self.input are set up elsewhere
    # (e.g. from omni.appwindow and carb.input during initialization)
    input_iface = carb.input.acquire_input_interface()
    self._mouse_sub = input_iface.subscribe_to_input_events(
        self._mouse_info_event_cb, carb.input.EVENT_TYPE_ALL, self._mouse, 0
    )

def _mouse_info_event_cb(self, event_in, *args, **kwargs):
    if self.running:
        event = event_in.event
        if event.type == carb.input.MouseEventType.LEFT_BUTTON_DOWN:
            print("LEFT BUTTON DOWN")
            # Get the NDC mouse position in the viewport...
            viewport_window = vp_utils.get_active_viewport_window()
            viewport_api = viewport_window.viewport_api
            # Directly access viewport location and size
            viewport_x = viewport_window.position_x
            viewport_y = viewport_window.position_y
            viewport_width = viewport_window.width
            viewport_height = viewport_window.height
            mouse = self.appwindow.get_mouse()  # Get app-space mouse x, y
            (x, y) = self.input.get_mouse_coords_pixel(mouse)
            print(f"Raw Mouse Input: {x}, {y}")
            # Calculate mouse position relative to the viewport
            mouse_x = x - viewport_x
            mouse_y = y - viewport_y
            # Normalize the coordinates (-1.0 to 1.0) within the viewport
            normalized_x = ((mouse_x / viewport_width) * 2) - 1.0
            normalized_y = -(((mouse_y / viewport_height) * 2) - 1.0)
            mouse_ndc = Gf.Vec3d(normalized_x, normalized_y, 0)
            # Convert the NDC mouse position to pixel-space in our ViewportTexture
            pixel, in_viewport = viewport_api.map_ndc_to_texture_pixel(mouse_ndc)
            if not in_viewport:
                return True
            # Query the viewport for any prims at the mouse location
            viewport_api.request_query(pixel, self.query_completed)
    return True
```
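For what it’s worth, the viewport-relative NDC math in that callback can be factored into a small pure function (same arithmetic as above; the function name and parameter names are mine):

```python
def pixel_to_ndc(px, py, vp_x, vp_y, vp_w, vp_h):
    """Convert app-space pixel coordinates to viewport NDC.

    Returns (x, y) in -1.0..1.0, with +y pointing up, relative to a
    viewport at (vp_x, vp_y) with size (vp_w, vp_h).
    """
    rel_x = px - vp_x
    rel_y = py - vp_y
    ndc_x = (rel_x / vp_w) * 2.0 - 1.0
    # Screen y grows downward, NDC y grows upward, so flip the sign
    ndc_y = -((rel_y / vp_h) * 2.0 - 1.0)
    return ndc_x, ndc_y

# The viewport center maps to (0, 0):
# pixel_to_ndc(500, 350, 100, 50, 800, 600) -> (0.0, 0.0)
```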
```python
# --- Viewport API Picking ---
def query_completed(self, path, pos, *args):
    if not path:
        print("No object")
    else:
        print(f"Object '{path}' hit at {pos}")
        prim = self.stage.GetPrimAtPath(path)
        # Set the "Picked" attribute on the prim.
        if prim.HasAttribute("Picked"):
            prim.GetAttribute("Picked").Set(True)
            print(prim.GetName() + " picked")
```
And again, this allows you to interact with a prim without actually selecting it. In my experience, there is enough overhead in the selection mechanism to warrant this. Completely lock down the scene, and only allow the user to interact with interactable objects.
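To close the loop, a behavior script on the interactable prim can watch that `Picked` attribute and fire its own logic. A rough sketch, assuming `omni.kit.scripting.BehaviorScript` (the class name `PickedWatcher` and the reset-to-False convention are my own; the import is guarded so the snippet can be read outside Kit):

```python
try:
    from omni.kit.scripting import BehaviorScript
except ImportError:
    # Outside a running Kit app; stub so the module can still be imported
    class BehaviorScript:
        pass

class PickedWatcher(BehaviorScript):
    """Reacts when the picking callback sets this prim's 'Picked' attribute."""

    def on_update(self, current_time: float, delta_time: float):
        attr = self.prim.GetAttribute("Picked")
        if attr and attr.Get():
            # Consume the flag so the action fires once per pick
            attr.Set(False)
            self._on_picked()

    def _on_picked(self):
        # Hypothetical action; replace with your own logic
        print(f"{self.prim.GetName()} was picked")
```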