PDF links missing or broken & "library science"-driven UX suggestions

Hello,
FYI:
The following pages either lack a PDF documentation link or have one that does not work:
https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-user-guide/index.html
https://docs.nvidia.com/hpc-sdk/compilers/openacc-mpi-tutorial/index.html
https://docs.nvidia.com/hpc-sdk/compilers/fortran-cuda-interfaces/index.html
https://docs.nvidia.com/hpc-sdk/compilers/openacc-gs/index.html
https://docs.nvidia.com/hpc-sdk/compilers/cuda-fortran-prog-guide/index.html
https://docs.nvidia.com/hpc-sdk/compilers/c++-parallel-algorithms/index.html
https://docs.nvidia.com/cuda/npp/index.html

Also, this page:
https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-ref-guide/index.html

has the following link, which does not resolve to a PDF document:
https://docs.nvidia.com/hpc-sdk/compilers/pdf/hpc21ref.pdf

PS: I offer the following suggestions as constructive criticism:

1: Make a global index of downloadable content, including version number, update/creation date, platform / tool relationship (as applicable), title, file name, and download link.

2: Make it possible to search and filter downloadable content (e.g. within the above index), then either download an archive (e.g. zip, tar) of the selection with sensible file naming and/or a categorical sub-directory hierarchy, or at least generate a page of complete direct hyperlinks to the selected files so one can copy / paste / download them as a group with one's own browser or download tool (see the sketch after this list).

3: Why not store all released, versioned downloadable content in something like git (e.g. on NVIDIA's existing GitHub site), or anything else that makes it easy to track and pull changes when new versions come out? Browsing the repositories would also let people discover new or unfamiliar material.
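
A minimal sketch of how points 1 and 2 might work in practice. Everything here is an assumption for illustration: the index URL, the JSON schema, and the field names are hypothetical, not existing NVIDIA endpoints. The point is that with a machine-readable index, filtering and batch downloading becomes a few lines of script instead of hours of clicking:

```python
import json
import urllib.request

# Hypothetical machine-readable index; this URL and its schema are
# illustrative assumptions, not an existing NVIDIA endpoint.
INDEX_URL = "https://downloads.nvidia.com/index.json"

def fetch_index(url=INDEX_URL):
    """Download and parse the (hypothetical) global content index."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def select(entries, tool=None, platform=None):
    """Filter index entries by tool name and platform."""
    return [e for e in entries
            if (tool is None or e["tool"] == tool)
            and (platform is None or e["platform"] == platform)]

if __name__ == "__main__":
    # Each entry would carry the metadata proposed in point 1:
    # version, date, platform/tool relationship, title, file name, link.
    index = fetch_index()
    for e in select(index["entries"], tool="cuda", platform="linux-x86_64"):
        print(e["version"], e["date"], e["title"], e["url"])
        # urllib.request.urlretrieve(e["url"], e["filename"]) would then
        # batch-download the whole selection.
```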

In the “old days” it was much easier and faster to categorically find and download documents, tools, software, etc. from sites that simply offered categorical FTP servers, or web servers with file storage and directory indexing enabled. As a fictional, hypothetical example, one could bookmark something like:
https://downloads.nvidia.com/cuda/11.3/x86_64/linux/fedora/documentation
or
https://downloads.nvidia.com/cuda/latest/x86_64/linux/fedora/release
and pull the specific version, or the latest version, more or less without any manual web browsing / clicking / navigating.
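
To illustrate: with a predictable scheme like the fictional paths above, fetching a specific document reduces to string construction. The template and path segments below are assumptions mirroring that hypothetical example, not real NVIDIA download locations:

```python
# Hypothetical path template mirroring the fictional URLs above;
# none of these path segments are real download locations.
TEMPLATE = ("https://downloads.nvidia.com/"
            "{product}/{version}/{arch}/{os}/{distro}/{category}")

def build_url(product, version, arch="x86_64", os="linux",
              distro="fedora", category="documentation"):
    """Construct a download URL purely from known metadata,
    with no interactive navigation needed."""
    return TEMPLATE.format(product=product, version=version, arch=arch,
                           os=os, distro=distro, category=category)

# A stable "latest" alias would mean the same bookmark or script
# always resolves to the newest release.
print(build_url("cuda", "11.3"))
print(build_url("cuda", "latest", category="release"))
```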

Use case 1: A new developer becomes interested in AI / ML / CUDA / HPC / NVIDIA GPUs, etc., and wants to discover, find, and then download all software and documentation relevant to those topics of interest.

Use case 2: An existing developer has an ongoing interest in, and use of, many of your tools and documentation products, e.g. CUDA, cuDNN, TensorRT, TLT, device drivers, cuBLAS, cuFFT, cuSPARSE, Thrust, etc., and wants to get the latest documentation, release notes, and software downloads for several of them (e.g. 6-20+). These may be updated every few months, or sometimes more often. As far as I can see, finding the latest versions of these things, which may or may not have changed since the developer last pulled the tools or literature, is an extremely “interactive”, slow, manual web-browsing process.
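
A rough sketch of how that “has anything changed since I last pulled?” check could be automated, again assuming the hypothetical index from point 1 reported the latest version per tool. The endpoint, schema, and local state file are all illustrative assumptions:

```python
import json
import urllib.request

# Hypothetical endpoint and schema, as in the earlier sketch, plus a
# local file recording what was last pulled.
INDEX_URL = "https://downloads.nvidia.com/index.json"
STATE_FILE = "last_pulled.json"   # e.g. {"cuda": "11.2", "cudnn": "8.1"}

def check_for_updates(watched):
    """Report which watched tools list a version different from the
    locally recorded one."""
    with urllib.request.urlopen(INDEX_URL) as resp:
        # Assume the index reports one latest entry per tool.
        latest = {e["tool"]: e["version"]
                  for e in json.load(resp)["entries"]}
    with open(STATE_FILE) as f:
        seen = json.load(f)
    return {t: latest[t] for t in watched
            if t in latest and latest[t] != seen.get(t)}

if __name__ == "__main__":
    watched = ["cuda", "cudnn", "tensorrt", "cublas", "cufft"]
    for tool, ver in check_for_updates(watched).items():
        print(f"{tool}: new version {ver} available")
```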

In either case, speaking from my own experience, it can literally take hours just to navigate the web site between various tools, hunting for links to the relevant files and documents plus the additional documents they cite or suggest: CUDA, compiler information, release notes, compatibility and tuning guides, and so on. There are dozens of relevant documents and files for CUDA alone, to say nothing of your many other developer tools.

A git, FTP, or HTTP-indexed “file repository” with a proper categorical repo / directory hierarchy would fix the tedium of searching for files that may or may not even exist for a given version / platform.

This is one area where the “library science / librarian” specialty has diverged from “web design”. Most web sites (I am not speaking of NVIDIA exclusively) are miserably architected when it comes to categorizing large amounts of information, making it accessible, and helping users find and search for things. The prevailing, thoroughly unsatisfactory “UI-intensive” approach assumes that browsing the web site is itself the user's main focus. The better viewpoint is that obtaining the documentation and tools, and developing with them, is the main focus, and the UX should exist to make that more efficient and powerful.

“Google-style”, “I'm feeling lucky” keyword search is not a scalable or adequate substitute for categorical organization and indexing, and for enabling better, more automated tools than web browsers to find and access information online.

Thank you very much