I want to use NVIDIA-Optix 8.0.0 to render 3D images of protein structures

Welcome to the OptiX forum.

If you’re new to OptiX, please read the OptiX Programming Guide and API reference documents to learn what features are available inside the OptiX host API and the device functions with which you implement your rendering algorithm.

I am new to image rendering.

Does that mean you’re new to computer graphics programming?
In that case, some more fundamental explanation of how to implement the following is required.

First, you would need to decide how the data inside the protein structure should be represented as geometric primitives, because that defines how you would build the acceleration structures (AS) which represent the scene data.

Then you would need to decide how the rendering of those geometric primitives should look. That defines how the OptiX device programs (raygen, intersection, closesthit, anyhit, miss) need to be implemented to achieve the desired effect.

For example, is it enough to represent atoms as opaque colored spheres and the connections between them as cylinders, or would you need a more volumetric kind of display, like a hull around the whole molecule with transparency, etc.?

That could be implemented using different built-in geometric primitives. OptiX supports the intersection of triangles fully in hardware on RTX boards and has built-in intersection programs for sphere primitives and for curves, where linear curves could be used for the connections.

Or you could tessellate these basic elements as triangle meshes and use instancing to display many of them, but that is going to be slower and would also require more memory.

Then let’s say, you want a simple display with just diffuse colors and ambient occlusion to let the spatial relations of the atoms become more apparent.

For that you would implement a ray generation program, which is the entry point to the ray tracing algorithm (called via optixLaunch) and which shoots primary rays from your camera position into the scene, often using a simple pinhole camera defined by a position P and three vectors U, V, W which span the first quadrant of the viewing plane. Many examples of that can be found inside the OptiX SDK and other sample code.

If some geometric primitive inside your scene is hit, the closesthit program is called, and in it you determine the color of that surface hit point. For that you would calculate attributes like the hit position and the shading normal at that surface hit point, to be able to compute the color and potentially the lighting.
If you just need a diffuse color and ambient occlusion, you would store a color per geometric primitive, accessed via the primitive ID returned by optixGetPrimitiveIndex. For the ambient occlusion you would shoot some shadow (visibility) rays into the hemisphere over the shading normal (using a cosine-weighted hemispherical distribution of directions around the normal, exactly like a Lambertian BRDF is sampled) and attenuate the color depending on the ratio of hits to misses.
That would mimic the behavior of a global illumination with a constant white environment light of objects which are purely diffuse (Lambert shading).
That can also be done with a simple progressive rendering algorithm (shown in all path tracing examples).

There are a lot of small intricacies to consider when implementing ray tracing algorithms, for example self-intersection avoidance when shooting continuation or shadow rays from surface hit points.
You must understand how the Shader Binding Table (SBT) works inside OptiX to be able to implement scenes with different geometric primitives.

If that is all new to you, it would be best to work through some of the OptiX SDK examples, beginning with optixHello, which doesn’t even shoot rays, then optixTriangle and optixSphere (and maybe optixCustomPrimitive), which each only show how to render a single primitive.

Then I would recommend looking into some of my intro_* examples in the OptiX (7 and 8) advanced examples (linked above and inside the sticky posts of this sub-forum), which show how to build a scene with a single Instance AS over Geometry AS, which is the recommended structure for maximum AS traversal performance, especially on RTX boards.
Once you understand how those examples work, it should be fairly easy to add the necessary geometric primitives to them, which would give you a global illumination renderer as a foundation. Should be a fun project, actually.

On the other hand, why would you need to program that yourself when plenty of applications already exist that can visualize protein *.pdb files?
The VMD Visual Molecular Dynamics program, for example, has been using OptiX for quite some time now. Try a search for VMD here: https://developer.nvidia.com/blog