Blockchain drivers

You know, banks globally use more power than the BTC network, and we do not need banks!
So rethink first why you are OK with them.

Increasing power isn't a solution… we need a better TLB on the 1070 cards… NVIDIA, can you hear us??? HELLO???

A TLB (translation lookaside buffer, which speeds up the translation of virtual to physical addresses) is a piece of hardware. It is designed into processor hardware. In the case of the GTX 1070, that design process likely happened around 2014, maybe early 2015. So unless you have means of time travel available to you …
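For a rough sense of scale, here is a back-of-the-envelope sketch; the entry count and page size are purely hypothetical illustration values, since NVIDIA does not publish the TLB parameters for these GPUs:

```latex
% TLB reach: how much memory the TLB can map at any one time.
% N_entries and the page size below are made-up illustration values.
\[
\text{TLB reach} = N_{\text{entries}} \times \text{page size},
\qquad \text{e.g. } 2048 \times 64\,\text{KiB} = 128\,\text{MiB}.
\]
% A DAG of several GiB read in an essentially random pattern touches far
% more pages than that, so most accesses incur a translation miss.
```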


You know, the AMD R9 380 from 2013 or so will soon hash more than the GTX 1070!
The R9 290X already hashes more!

I couldn't care less, but it is apparent that you do care. So I will quote myself:

Generally speaking, just use the hardware and software that best suits your use case.

+1 On some specific drivers.

Current hardware:

my 1060s are way better in W per MH/s than my 1070s

it's insane:

1070 - 4.8 W per MH/s
1060 - 3.47 W per MH/s

They didn't have the courtesy to make a driver for the 1660 Ti for Windows 8.1, but they made one for the 1660 Ti on Windows 7. :(

Can you please advise whether this defect concerning TLB capacity in the (10xx) Pascal architecture has been carried over to the (16xx) Turing architecture, or has it been resolved in Turing going forward?

A design limitation is not a defect. The designers of processors with virtual address space capabilities have to decide on a TLB size before they build the chip. Just like a cache can only hold a limited amount of the data in the address space, a TLB can only track a limited number of the pages in the address space.
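If someone wants to probe this empirically rather than argue about it, a rough microbenchmark along the following lines could work. This is only a sketch: the 64 KiB stride, the buffer sizes, and the launch configuration are assumptions I picked for illustration, the real page size and TLB organization of these GPUs are not public, and caches and DRAM behavior also shape the measured curve.

```cpp
// Sketch of a TLB-pressure microbenchmark (not tied to any real GPU's
// undocumented TLB parameters). Each thread strides through the buffer with
// a large step, so the number of distinct pages touched grows with the
// buffer size; throughput typically drops once the set of touched pages
// exceeds what the TLB can map. The 64 KiB "page" stride is an assumption.
#include <cstdio>
#include <cuda_runtime.h>

#define CHECK(call)                                                        \
    do {                                                                   \
        cudaError_t err_ = (call);                                         \
        if (err_ != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error %s at line %d\n",                  \
                    cudaGetErrorString(err_), __LINE__);                   \
            return 1;                                                      \
        }                                                                  \
    } while (0)

__global__ void strided_reads(const unsigned int *buf, size_t n_words,
                              size_t stride_words, int iters,
                              unsigned int *sink)
{
    size_t idx = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int acc = 0;
    for (int i = 0; i < iters; ++i) {
        acc += buf[idx];
        idx += stride_words;
        if (idx >= n_words) idx -= n_words;   // wrap around the buffer
    }
    if (acc == 0xDEADBEEFu) *sink = acc;      // prevent dead-code elimination
}

int main()
{
    const size_t stride_bytes = 64 * 1024;    // assumed "page" stride
    const int    iters        = 2000;
    const int    threads      = 256;
    const int    blocks       = 64;

    unsigned int *sink;
    CHECK(cudaMalloc(&sink, sizeof(*sink)));

    // Sweep buffer sizes from 64 MiB upward (adjust the limit to your card).
    for (size_t mib = 64; mib <= 4096; mib *= 2) {
        size_t bytes   = mib * 1024 * 1024;
        size_t n_words = bytes / sizeof(unsigned int);
        unsigned int *buf;
        if (cudaMalloc(&buf, bytes) != cudaSuccess) {
            cudaGetLastError();               // clear out-of-memory error
            break;
        }
        CHECK(cudaMemset(buf, 1, bytes));

        cudaEvent_t start, stop;
        CHECK(cudaEventCreate(&start));
        CHECK(cudaEventCreate(&stop));

        size_t stride_words = stride_bytes / sizeof(unsigned int);
        strided_reads<<<blocks, threads>>>(buf, n_words, stride_words,
                                           iters, sink);   // warm-up run
        CHECK(cudaEventRecord(start));
        strided_reads<<<blocks, threads>>>(buf, n_words, stride_words,
                                           iters, sink);   // timed run
        CHECK(cudaEventRecord(stop));
        CHECK(cudaEventSynchronize(stop));

        float ms = 0.0f;
        CHECK(cudaEventElapsedTime(&ms, start, stop));
        double reads = (double)blocks * threads * iters;
        printf("%5zu MiB : %8.3f ms, %7.2f million strided reads/s\n",
               mib, ms, reads / (ms * 1e3));

        CHECK(cudaEventDestroy(start));
        CHECK(cudaEventDestroy(stop));
        CHECK(cudaFree(buf));
    }
    CHECK(cudaFree(sink));
    return 0;
}
```

Compile with nvcc and look at whether (and at which buffer size) the reads/s figure falls off; that knee, if it exists, is the kind of thing that can differ between GP10x parts and later architectures.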

I don't know the TLB size for Pascal or Turing; however, NVIDIA's official answer in #13 above strongly suggests that the TLB size was increased (relative to Pascal) in Volta (shipped 2017) and subsequent architectures, a group which includes Turing (first shipped 2018):

So these newer GPUs (Volta and beyond) will show much less performance sensitivity due to DAG size

My approach would be to simply give it a try or ask other users with Turing-based GPUs in a forum relevant to the particular use case.

The thing is, only two Pascal GPUs are affected by the hashrate drop: the 1070 and the 1080 Ti. Well, maybe the 1080 too.
The 1050 Ti is OK. The 1060 is OK. The P106 is OK. All later GPUs (1660 and beyond) are OK. Strange, isn't it?

Not at all strange. These cards are based on different GPUs. According to TechPowerUp's database, the GTX 1080 Ti uses GP102, the GTX 1070 and GTX 1080 use GP104, while the GTX 1060 uses GP106 and the GTX 1050 uses GP107. The different GPUs are listed with different die sizes, so these are clearly differently configured designs, but all based on the same basic architecture (Pascal).

I beg to differ on that. Firstly, it is not a design limitation; it is a flaw in the design with a resulting defect which impacts performance. Legal definition: a design defect means that the product was manufactured correctly, but that there is something in the way the product is designed that makes it dangerous/unusable to consumers. Since a design defect is a flaw in how the product is designed, the defect generally affects the entire product line, rather than just one particular item.

This defect impacts all 10xx series cards; I have seen this with 1060 and P106 mining as well as with my 1070. I never had this issue with AMD RX cards after AMD fixed it with the blockchain drivers. It appears to be inherent in the Pascal architecture. NVIDIA created the P106 mining cards, so why market these cards for a specific purpose without being transparent about a design limitation? In my book that is called misrepresentation and misleading claims: they knowingly designed a product with a limitation but failed to inform potential users of that limitation, which would directly impact the user's ability to make an informed choice about the product at the point of sale - i.e., to purchase a competitor like AMD that does not suffer from such issues.

There are only two outcomes: either NVIDIA knew about the issue (by way of a design limitation, or in my opinion a design flaw) but failed to be transparent and tell potential customers about the impact, which is misrepresentation/misleading, or they did not know, in which case this is an inherent defect in the product and it is not fit for purpose and of unsatisfactory quality. Either way, NVIDIA are liable.

I'm sure the majority of users like me would agree - had we been made aware of this at the time of purchase, and had NVIDIA been transparent, we would rather have opted to purchase AMD cards for longevity.


My P106 (mining-specific) and 1060 cards suffer the same hashrate drop of about 2 - 3 MH/s per card on Ethash. My 1070s are down by 3 - 6 MH/s per card.

Did the software version whose performance you are unhappy with now even exist when the Pascal family of GPUs was designed? From reading this thread, the answer appears to be a resounding no. In fact, from reading this thread it appears the software was changed quite some time after Pascal-architecture GPUs were available in the market. So it seems like you would want to complain to the software vendor for making those changes that had a negative impact on existing hardware people were already using.

I am curious: did NVIDIA promise any specific performance level for these Pascal-based GPUs with regard to the particular software version you are interested in?

Software version? We are talking about a hardware limitation by way of the physical properties of the card - a design flaw - especially when the company markets products specifically for use in mining, like the P106 mining cards. For your information: the release date of the Pascal architecture was April 5, 2016, and the Ethereum release using the Ethash algorithm was on 30 July 2015 (and the algorithm would pre-date the Ethereum release date as well), so your point is what?

Clearly you don't know what you are talking about… the issue became apparent with the increase in size of the DAG file. Isn't it strange that the same issue does not affect AMD RX cards, only NVIDIA…


This happened at what time? Who made this DAG file change? When you originally got your Pascal-based GPU it worked fine for Ethash, correct? And did your GPU hardware change since then, or was it the software configuration (DAG file) that changed?

It is not a software change; the DAG file increases linearly in size with each epoch, which is why it was made known that cards with 3 GB of memory would become unusable at some point, and we are currently getting closer to 4 GB. This growth of the DAG file over time has brought the design flaw in the NVIDIA product to light: the total on-chip TLB capacity of the Pascal GPU cannot handle the increase in DAG file size, unlike AMD… Did you even read the notes from the moderator, Robert_Crovella? Or are you just blindly cheerleading for NVIDIA?
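For reference, the growth rate is spelled out in the published Ethash specification; ignoring the small downward adjustment to a prime multiple of the mix size, the dataset grows by roughly 8 MiB per epoch:

```latex
% Constants from the public Ethash spec: DATASET_BYTES_INIT = 2^30,
% DATASET_BYTES_GROWTH = 2^23, EPOCH_LENGTH = 30000 blocks.
\[
\text{DAG size}(\text{epoch}) \approx 2^{30} + 2^{23} \cdot \text{epoch}\ \text{bytes},
\]
% so 3 GiB is exceeded around epoch 256 and 4 GiB around epoch 384.
```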

People like me, real-world users, feel the pain and frustration, especially when AMD fixed their issue through the release of blockchain drivers, but NVIDIA apparently could not be bothered to even inform users of the flaws in their products…


Let me ask you a simple question, njuffa: do you crypto-mine and use NVIDIA cards for mining? Because based on your replies, and your lack of understanding of DAG files etc., it would appear not, and therefore you should relegate yourself to not providing advice on the topic. If you do have some avenue into NVIDIA, I would be interested in a formal statement from them; if you can help with that, it would be appreciated…


I have read the note by Robert Crovella and I know what TLB capacity refers to. Performance of a workload will degrade once its working set exceeds the TLB capacity. That is true for all processors with TLBs. There are a number of design parameters for TLBs that a chip designer can consider, but once the hardware is built those are immutable (there may be some niche processor with a run-time configurable TLB, but I have never read of any).

Design parameters for processor components are evaluated through simulations based on workloads that exist at the time of the design. This usually predates shipments to the general public by a couple of years.

If you apply a workload with a particular configuration that came into existence years after a processor was designed, that is kind of difficult to incorporate into the design process :-)

FWIW, I have no need to cheer for anybody; I am just a happily retired software engineer who also happened to participate in the design of multiple processors (the AMD K6, K6-2, and Athlon processors, to be specific).

Thank you for your bio. I studied electrical engineering with a background in IT, and I currently work in IT (more specifically, IT security). I have been involved in crypto-mining since 2017. Now that introductions are out of the way, to your point: NVIDIA designed a product with limitations in a specific application, created products for a specific purpose like the P106 and P104 mining cards using the Pascal architecture at the end of 2017, marketed those products for mining (reduced graphics outputs, etc.), and failed to advise its customers of the limitations that would affect the performance and lifespan of the product, which would have allowed customers to make an informed choice at the point of sale. The software architecture and the mechanics of how the Ethash algorithm works are documented in a white paper available to the public, which would have detailed items like the DAG file increasing over time - it would appear that NVIDIA failed to consider this, unlike AMD?

Circling back to what I said, in legal terms that is misrepresentation/misleading. Either NVIDIA has a really poor design team compared to AMD, which would come down to negligence and not understanding the application they designed and marketed the product for, or this is a genuine manufacturing defect or flaw. Either way, compared to AMD RX cards, the product is NOT fit for purpose and NOT of satisfactory quality. If you live in the UK, as I do, the Consumer Rights Act covers consumers for such a breach of contract. From my personal point of view, had NVIDIA been transparent and made potential customers aware of this at the time of sale in 2017, I for one would have opted to go all in on AMD graphics cards rather than face the nightmare we are now left with from NVIDIA.
