How to watch?

How to watch? This forum engine is from 2002.

Yes, how to watch?

Post your question here and the team will respond in real time. There is no video to watch.

We will have other events which have a live stream aspect, but they will be hosted on our NVIDIA Developer Discord.
Sorry for any confusion

No problem, chat can be pretty easy. How does GPT-4 understand 3D layouts?

No live stream, just chat. I’m out, can’t type fast enough; maybe next time.

No worries, thanks for your interest.

In this paper, Large Language Models as Tool Makers, we see GPT-4 being used to generate tools in Python, and GPT-3.5 being used as the consumer of those tools along with validation. Do we anticipate a time when GPT-4 could make tools inside of Omniverse?

Don’t worry, take your time with any question, we will also try to answer after the event is over.

Thanks Mark. I have something I want to ask as part of a bigger question, but it looks like I’ll need a bit more time on my end.

Still trying to attend.
Went on Discord.
Went on Omniverse.
Went to the livestream.
What’s wrong?

It is just this chat, here in the forum. Different format so far.


OK, which chat?
It always returns to…
Build Custom AI Tools With ChatGPT and NVIDIA Omniverse : AMA June 28, 9am PDT - Connect With Experts - AMA / ChatGPT and Omniverse: AMA June 28th, 2023 - NVIDIA Developer Forums

Look at the directory in which this post is located and you will see other folks posting questions and getting answers. Sorry for any confusion.

Use this link to get to the directory if you like: ChatGPT and Omniverse: AMA June 28th, 2023 - NVIDIA Developer Forums

Hey @zia_s_ideas, did you want to post this as a question?

How does GPT-4 understand 3D layouts?

We might have missed it in this thread, so please go ahead and re-post it in the main category.

The LLM-related one below is already “in the works”.

It’s an interesting paper. If you’re a tool developer who wants to incorporate AI, it’s a good one to look at. I think there will be many LLMs and LLM-centric tools that can make tools inside of Omniverse. Already, ChatGPT is pretty good with Python and USD - both foundations of Omniverse extension and scene building. Because of this, I am able to use ChatGPT in my tool-building workflow now.

That said, it’s going to get much better. It’s easy to imagine LLMs helping with more of my existing work, and I’ve also started thinking that LLMs will become useful for temporary tools that I only need while working on a specific task. For example, maybe I want a temporary tool that creates a UI listing all of my lights with a brightness slider for each. I also think tools that mix procedural algorithms + validation + AI are interesting for tool builders to explore.
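To make the “temporary tool” idea concrete, here is a minimal Python sketch of that loop: ask an LLM for a throwaway tool, execute the generated code, use it, and discard it. This is purely illustrative - the LLM call is mocked with a canned response, the “stage” is a plain dictionary standing in for a USD stage, and none of the names here are real Omniverse or OpenAI APIs. Inside Omniverse, the generated snippet would use omni.ui and pxr.Usd instead.

```python
# Hypothetical sketch: LLM-generated temporary tooling.
# mock_llm stands in for a real ChatGPT API call (assumption, not a real API).

def mock_llm(prompt: str) -> str:
    """Pretend the LLM returned this tool code for our prompt."""
    return (
        "def list_lights(stage):\n"
        "    # Return (prim path, intensity) pairs for every light\n"
        "    return [(name, attrs['intensity'])\n"
        "            for name, attrs in stage.items()\n"
        "            if attrs.get('type') == 'light']\n"
    )

def build_temporary_tool(prompt: str):
    """Execute the generated code and hand back the resulting function."""
    namespace = {}
    exec(mock_llm(prompt), namespace)  # in practice: review generated code before exec!
    return namespace["list_lights"]

# Toy stand-in for a USD stage: prim path -> attributes
stage = {
    "/World/KeyLight":  {"type": "light", "intensity": 750.0},
    "/World/FillLight": {"type": "light", "intensity": 300.0},
    "/World/Cube":      {"type": "mesh"},
}

tool = build_temporary_tool("List all my lights with their intensities")
print(tool(stage))  # [('/World/KeyLight', 750.0), ('/World/FillLight', 300.0)]
```

The point of the sketch is the lifecycle: the tool never gets installed or maintained; it exists only for the task at hand.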

Is it the 28th or 29th? When will the event go live?

It’s live right now - folks are posting questions and getting answers. Look at the parent directory of this post.

Hi @prateekha3 and welcome to the NVIDIA developer forums.

The event is live right now, here in this forum. Check the main category ChatGPT and Omniverse: AMA June 28th, 2023 - NVIDIA Developer Forums and you will see live questions and answers.

Hey Paul! Thanks for the reply.
Yeah, that would be very cool. Using prompts to generate UIs - or in my case, I’d rather skip UIs altogether and just have prompts drive the input. I think GUIs have been necessary so average people don’t have to assiduously type in commands, but now, with the constraints of exacting syntax being relaxed, artists could start to bypass the need to learn sequences of GUI steps.
This could be especially useful for things like rendering, which can get pretty involved.
Prompt driven rendering could be something like:

“Render out this sequence from selected camera at 30 frames per second, 2k resolution, add the bloom filter, set to 6 percent in post and add an edge vignette with 6% inset. Also set the motion blur samples to low. Send a notification when the render is half done and totally done.”

This way, we know what we want, but we don’t have to know how to get it.
We don’t have to dig through settings experimentally, nor through documentation or someone’s YouTube tutorial. Just an idea.
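One way to picture how that prompt could work under the hood: the LLM maps the free-form text onto a structured settings object, and the renderer only ever sees the validated structure. The sketch below shows the target structure for the example prompt above; all field names are made up for illustration and are not real Omniverse render settings, and the LLM extraction step itself is omitted.

```python
# Hypothetical structured target for "prompt-driven rendering":
# free-form prompt -> validated settings object -> renderer.
from dataclasses import dataclass, field

@dataclass
class RenderSettings:
    fps: int = 24
    resolution: str = "1080p"
    bloom: float = 0.0             # post-process bloom strength (0..1)
    vignette_inset: float = 0.0    # edge vignette inset (0..1)
    motion_blur_samples: str = "medium"
    notify_at: list = field(default_factory=lambda: [1.0])  # progress fractions

# What an LLM might extract from the prompt in the post above:
# 30 fps, 2k, bloom at 6 percent, 6% vignette inset, motion blur
# samples low, notify when half done and totally done.
settings = RenderSettings(
    fps=30,
    resolution="2k",
    bloom=0.06,
    vignette_inset=0.06,
    motion_blur_samples="low",
    notify_at=[0.5, 1.0],
)

print(settings.fps, settings.resolution, settings.notify_at)
```

Keeping a typed schema between the prompt and the renderer is what makes the idea safe: the exacting syntax is still there, but the LLM handles it instead of the artist.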