OptiX 6.5 - Prime samples fail on RTX 3080

Hello,

I have some issues with OptiX 6.5 on an RTX 3080 (laptop, Windows 10).

All non-OptiX-Prime examples (like optixPathTracer and optixDeviceQuery) work fine.
But all OptiX Prime examples (like primeSimplePP, primeInstancing, …) fail.
primeInstancing fails with an “illegal memory access was encountered” error right after the “using cuda context” printout.
Others just freeze.

The same problem occurs if I compile the samples myself with VS2019.

Another program of mine that uses OptiX Prime also works fine on 10xx- and 20xx-generation GPUs, but freezes when I start it on the RTX 3080.

I’m using NVIDIA driver 466.26.

I know that OptiX 6.5 is not recommended on newer GPUs because it is not optimized to use the RT Cores.
But it should still be compatible, right?

Regards,
Sebastian

We’re investigating; I see the same issue on Linux with driver 465.27. In the meantime, you can probably roll your driver back one version to get them working if you want to play with the samples or keep an older project running.

Yes, Prime should still run, I believe, so this looks like a mistake on our side; apologies for the hiccup! Thank you for reporting it, we’ll work to get it fixed ASAP.

Just to be candid, I don’t know how long Prime support in new driver versions will continue. Anyone with production needs based on Prime may wish to lock to a previously working driver version. We definitely recommend not starting new projects on Prime, but using OptiX 7 instead. The optixRaycasting sample in OptiX 7 was introduced as an example of Prime-like usage, and it can take advantage of the RT Cores for much higher performance.
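To give a rough sense of what that Prime-like usage looks like in OptiX 7 device code, here is a minimal sketch (this is not the actual optixRaycasting sample; the Params/Ray/Hit layouts and program names below are made up purely for illustration): rays are read from a device buffer in the raygen program and hit results are written back to another buffer.

```cpp
#include <optix.h>

// Hypothetical buffer layouts for a Prime-style "wavefront" ray cast.
struct Ray  { float3 origin; float tmin; float3 dir; float tmax; };
struct Hit  { float t; unsigned int primId; float2 uv; };

struct Params
{
    OptixTraversableHandle handle;  // acceleration structure built on the host
    const Ray*             rays;    // input rays (device memory)
    Hit*                   hits;    // output hits (device memory)
};

// The launch-params variable name is whatever you register in
// OptixPipelineCompileOptions::pipelineLaunchParamsVariableName.
extern "C" __constant__ Params params;

extern "C" __global__ void __raygen__prime_like()
{
    const unsigned int idx = optixGetLaunchIndex().x;
    const Ray ray = params.rays[idx];

    // Payload registers carry the hit data written by the closesthit program.
    unsigned int p0 = __float_as_uint( -1.0f );  // t (negative means miss)
    unsigned int p1 = 0;                         // primitive index
    unsigned int p2 = 0;                         // barycentric u
    unsigned int p3 = 0;                         // barycentric v

    optixTrace( params.handle,
                ray.origin, ray.dir,
                ray.tmin, ray.tmax, 0.0f,        // tmin, tmax, ray time
                OptixVisibilityMask( 255 ),
                OPTIX_RAY_FLAG_NONE,
                0, 1, 0,                         // SBT offset, SBT stride, miss index
                p0, p1, p2, p3 );

    Hit hit;
    hit.t      = __uint_as_float( p0 );
    hit.primId = p1;
    hit.uv.x   = __uint_as_float( p2 );
    hit.uv.y   = __uint_as_float( p3 );
    params.hits[idx] = hit;
}

extern "C" __global__ void __closesthit__prime_like()
{
    const float2 bc = optixGetTriangleBarycentrics();
    optixSetPayload_0( __float_as_uint( optixGetRayTmax() ) );
    optixSetPayload_1( optixGetPrimitiveIndex() );
    optixSetPayload_2( __float_as_uint( bc.x ) );
    optixSetPayload_3( __float_as_uint( bc.y ) );
}

extern "C" __global__ void __miss__prime_like()
{
    // Nothing to do: the payload keeps the "miss" values set in the raygen program.
}
```

The host side (context, module, pipeline, SBT, and acceleration-structure build) follows the usual OptiX 7 boilerplate shown in the SDK samples.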


David.

Hi Sebastian,

To follow up on this, the team has responded that no public release of Prime is compatible with Ampere GPUs. I apologize for suggesting otherwise, because I wasn’t aware of this. We currently have no future public releases of OptiX Prime planned either.

Prime will continue to work as-is on Turing and earlier architectures (whatever is supported by the intersection of the last Prime release and whichever driver you’re on).

We’d be happy to help & offer guidance for moving from Prime to the latest version of OptiX 7 if you’d like.


David.

“To follow up on this, the team has responded that no public release of Prime is compatible with Ampere GPUs”
That is really bad news. So every application that ever used OptiX Prime is now worthless on newer hardware and needs to be rewritten …
Then please update the release notes and put a big note on the OptiX download page that OptiX 6.5 is now deprecated, because “All NVIDIA GPUs of Compute Capability 5.0 (Maxwell) or higher are supported.” is basically no longer true.

“We’d be happy to help & offer guidance for moving from Prime to the latest version of OptiX 7 if you’d like.”
And is this guaranteed to stay usable with future NVIDIA GPU generations?

Regards,
Sebastian

Hi Sebastian,

I’m genuinely sorry this caught you (and me) by surprise. I realize it’s cold comfort now, but we have tried to be a little pushy ;) for a long time about it being a good idea to move to OptiX 7, and we’ve also tried to be open on the forum about the fact that OptiX Prime would never support RTX hardware and would not be improved anymore - for almost three years, basically since the release of OptiX 6.0 (for example OptiX, OptiX Prime, Compatibility with CPU and RTX - #5 by Ankit_Patel, and Where is the RTPbufferformat in optix7.0? - #2 by Keith_Morley).

Your point on the release notes is taken; we should probably name the latest GPU architecture every time so the notes don’t become confusing or seem untrue later. The release notes for OptiX 6.5 predate Ampere. They are true as written for both APIs - OptiX 6 and OptiX Prime - with respect to all NVIDIA GPUs that existed on the release date of the OptiX 6.5 SDK. None of our release notes are really intended to extend to any unnamed or unknown (at the time of publication) future GPUs, CUDA toolkits, or other APIs unless stated explicitly, but we could certainly have made that clearer in this case.

Aside from the notes, all OptiX 6 versions are already in the Legacy Downloads section of our site, and version 6.5 is being maintained but not improved. That means support isn’t officially “deprecated” yet, but it’s still not a good idea to begin any new projects on OptiX 6.5, and it is a good idea to start porting any existing long-term projects that use OptiX 6.5 to the latest SDK (currently OptiX 7.3) now.

And is this guaranteed to stay usable with future NVIDIA GPU generations?

I’m not sure I understand your exact question in this context, or what kind of guarantee you’re looking for. Do you want to lock your SDK, driver, and/or CUDA toolkit version across hardware upgrades, or are you asking whether OptiX as a whole might become deprecated? As long as OptiX exists (and I think it will continue to exist and improve for many, many years to come), there will be a version that is compatible with some of the latest NVIDIA hardware, just as there always has been since OptiX began. But when preparing for or purchasing new hardware with new features, it is always the case that software upgrades may be necessary to support the new hardware, whether it’s the NVIDIA driver, the CUDA toolkit, or the OptiX SDK version. Sometimes app changes might be necessary as well, if you want to take advantage of new hardware features.

OptiX 7 is the current path forward. It is much more closely aligned with the published standards in the Vulkan and DirectX ray tracing extensions than earlier versions of OptiX, and therefore much less likely to change dramatically in the future. Now that we’re down to a single API, you can have more certainty that it will carry into the future than was ever possible with OptiX Prime.

By the way, if you depend on OptiX for your business, and would like long term support for any given version, please consider getting in touch with us more directly so we can confidentially discuss your schedule and requirements. This is the best way to voice your needs before we make changes, and avoid any future surprises.


David.

Hi David, thanks for the quick reply.

Yes, but the message in the forums was always that it will not benefit performance-wise from the RT Cores, but that it will still run on the hardware (which was the case for the RTX 20xx). You made that assumption as well.
I think we agree that, if this has been known at least since the release of Ampere, an official note somewhere (outside of the forums) that it will definitely not run on Ampere and future generations would have been helpful.

The question is: if we now port our stuff to OptiX 7 and next year (maybe) an NVIDIA “Lovelace/Ampere Next/Hopper” GPU comes out, will there be an OptiX 8.0 with again a completely different API, or will OptiX 7 just get an update (i.e., is it seen as an LTS version)? But you already answered that with:

From a developer’s point of view, we would appreciate being able to reuse existing code with minimal effort for future HW updates.
CUDA, for example, has great compatibility in this respect: we can take source code created with early versions, maybe recompile it, and it still works. Of course there are sometimes API changes, especially for using new HW features. But it is a different thing when the complete API changes (like from OptiX 6 to OptiX 7).

So I think we can close the thread now ^^

Regards,
Sebastian

The amount of work required to replace OptiX Prime with an OptiX 7 based ray-intersection mechanism is considerably lower than porting a high-level OptiX API based program to OptiX 7.

How that would look has been shown in the OptiX SDK example optixRaycasting at least since OptiX SDK 5.1.0, which means there are OptiX 7 versions of it as well.

As a benefit of using OptiX 7 you’ll get full hardware RT Core support and can do things that are not even possible in OptiX Prime at all, like custom geometric primitives, custom hit records, fully programmable hit behavior, anyhit continuation rays without an additional ray query, a more flexible scene graph hierarchy, etc.
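To illustrate the programmable hit behavior point, here is a rough sketch (hypothetical struct and variable names, not SDK code) of an anyhit program that gathers all hits along a ray within a single optixTrace call, something that required issuing additional continuation queries with OptiX Prime:

```cpp
#include <optix.h>

constexpr unsigned int MAX_HITS_PER_RAY = 16;

// Hypothetical launch parameters: one fixed-size slot range per ray.
struct AllHitsParams
{
    unsigned int* hitPrims;  // MAX_HITS_PER_RAY primitive indices per ray (device memory)
};

extern "C" __constant__ AllHitsParams ahParams;

extern "C" __global__ void __anyhit__record_all()
{
    const unsigned int ray   = optixGetLaunchIndex().x;
    const unsigned int count = optixGetPayload_0();  // running hit count carried in payload 0

    if( count < MAX_HITS_PER_RAY )
    {
        ahParams.hitPrims[ray * MAX_HITS_PER_RAY + count] = optixGetPrimitiveIndex();
        optixSetPayload_0( count + 1 );
    }

    // Ignore the intersection so traversal continues past it; all hits along the ray
    // are gathered in one optixTrace call instead of re-launching continuation rays.
    optixIgnoreIntersection();
}
```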

Yes, but the message in the forums was always that it will not benefit performance-wise from the RT Cores, but that it will still run on the hardware (which was the case for the RTX 20xx). You made that assumption as well.

Correct, I did make that assumption. Talking to the team, I found that what happened here is that we did not explicitly decide to prevent OptiX Prime from running on Ampere; we wanted it to run and thought it would be nice if it continued to work, but Ampere-related compiler changes broke Prime unexpectedly. As part of the discussion that your report started, it was deemed too much effort and not a high enough priority to fix: we had already communicated that Prime was deprecated, we were not aware of customers still depending on Prime before it broke, and we already provide an upgrade path for Prime users, one that, as Detlef points out, enables superior performance and flexibility.

From a developer’s point of view, we would appreciate being able to reuse existing code with minimal effort for future HW updates.

Yes, for what it’s worth, we very much agree! We have been striving to make minimal-effort upgrades possible and will continue to do so. The huge change from OptiX 6 to OptiX 7 was debated internally for years(!), and everyone on the team was very worried about the fact that the new API was incompatible with the old one. We are going to do everything we can to prevent that from happening again. It’s simply that the high-level OptiX 6 and earlier API was deemed to be fundamentally going in a different direction than was needed, and a course correction was necessary. We think OptiX 7 is on the right course. So far, all of our professional partners agree, and we think it will not need such a large course change again. There have been other threads about this in the past that might help with context, for example Optix 7 breaking changes.

That said, we are not making promises or guarantees about what will happen in the future. I don’t think OptiX will ever change again as dramatically, at least for the foreseeable future, but to answer your question directly, honestly, and completely, that’s not something we have committed to. Our mission with OptiX is to enable the highest performance ray tracing possible, not to promise that it won’t change. There are some options available for managing expectations, making sure that changes to the OptiX API don’t come as a surprise, and planning upgrade schedules:

  • Plan ahead to lock your SDK version and driver version for the duration of the code freeze you need. Take advantage of the time after the code freeze to upgrade.
  • Communicate your support needs to us in advance; let us know what you’re using and how long you’d like it to work. Even if it’s more than we can agree to support, we will hear you, and you’ll be more likely to get notified of changes and to have advocates.
  • Use an API that comes with a published standard. Even though we really like OptiX and think it’s the best API for GPU ray tracing, it may be worth considering Vulkan or DirectX instead. Those APIs have changed less because they adopt features later than OptiX, they change more slowly, and they come with published standards. If stability is the priority, they shouldn’t be ruled out. OptiX does try new things earlier than either VKR or DXR, and partly because of that, the OptiX API changes more frequently.


David.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.