Nvidia+CudaMiner vs. ATI+SGMiner - Very Different Reject Rates

edited March 2014 in Hardware support

I have both Nvidia and ATI cards and I am seeing a very large discrepancy between reject rates.

GTX680+CudaMiner @ 300 KH/s => 0.01% reject rate
GTX780Ti+CudaMiner @ 700 KH/s => 0.16% reject rate
R9 290X+SGMiner @ 913 KH/s => 8.0% reject rate

All of the above are on the mega node for higher difficulty. There are 0 HW errors being reported. So, contrary to what one might expect, the faster the hash rate, the higher the reject rate, which makes no sense at all. And while even a 0.16% reject rate is fairly trivial, the 8.0% reject rate with SGMiner most definitely is not.

The effective hash rate is thus:
780Ti: 700 * 0.9984 = 698.88
290X: 913 * 0.9200 = 839.96

This cuts the ATI card's advantage over the 780Ti by roughly a third (from 213 KH/s raw down to about 141 KH/s effective).
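
To make the comparison explicit, here is a minimal sketch of the effective-rate arithmetic in Python (the reject percentages are simply the figures quoted above):

    # Effective hash rate = raw hash rate * (1 - reject rate).
    def effective_rate(raw_khs, reject_rate):
        """Return the hash rate that actually earns accepted shares."""
        return raw_khs * (1.0 - reject_rate)

    cards = {
        "GTX 680":   (300, 0.0001),  # 0.01% rejects
        "GTX 780Ti": (700, 0.0016),  # 0.16% rejects
        "R9 290X":   (913, 0.0800),  # 8.0%  rejects
    }

    for name, (raw, rejects) in cards.items():
        print(f"{name}: {raw} KH/s raw -> {effective_rate(raw, rejects):.2f} KH/s effective")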

I'm pretty sure this isn't normal.
Is this:
1) a "feature" of how the pool distributes WUs and assigns difficulty
2) a major bug in SGMiner 4.1.153
3) a consequence of instability in computation time across different WUs that only manifests on ATI cards
4) Something else?

Comments

  • edited March 2014

    It almost certainly sounds to me like a config issue, especially since you mention that you are getting hardware errors.

    Also, you should not be using mega unless you have over 25 MH/s, as this will give you too high a share difficulty and you may not submit any shares during fast blocks.
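
    As a rough back-of-the-envelope illustration in Python (it assumes the common scrypt stratum convention of about 2**16 hashes per difficulty-1 share, so these are estimates, not pool measurements):

        # Expected seconds per share at a given hash rate and share difficulty,
        # assuming ~2**16 hashes on average per difficulty-1 scrypt share.
        LTC_BLOCK_TARGET_S = 150  # Litecoin's 2.5-minute target block time

        def seconds_per_share(hashrate_khs, share_difficulty):
            hashes_per_share = share_difficulty * 2**16
            return hashes_per_share / (hashrate_khs * 1000.0)

        for khs in (300, 700, 913):
            for diff in (128, 256, 1024):
                t = seconds_per_share(khs, diff)
                note = "  <- exceeds even the 150 s block target" if t > LTC_BLOCK_TARGET_S else ""
                print(f"{khs} KH/s @ diff {diff}: ~{t:.0f} s per share{note}")

    At difficulty 1024, a 300 KH/s card averages nearly four minutes per share, so a slow miner can easily go through a fast block without submitting anything.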

  • That was a typo - it should have said 0 hardware errors reported.
    Yes, I know mega says it is for 25 MH/s+, but there is no hardware currently available that can do that (pre-orderable, but not actually ownable right now unless you work for one of the manufacturers and have a prototype).

    Nevertheless, the Nvidia/CudaMiner configs show less than a 0.2% reject rate even on my 680, which only manages 300 KH/s, while my 913 KH/s 290X sees much higher reject rates. Switching from mega to ltc-eu didn't help much. Normal pools go up to 256 in difficulty, and mega is 128+, so in reality I end up with the same difficulty > 128 regardless of which one I connect to.

    Since the Nvidia setup on the same server doesn't get rejects at its lower raw hash rate, I can only assume there is something screwy going on with the hash-rate tuning on the server side: the Nvidias stay below the point where it starts to go wrong, while the 290X gets into the range where it messes up, possibly because the Nvidias are assigned slightly lower-difficulty WUs.

  • edited March 2014

    The 25 MH+ does not have to come from one device; it means a 25 MH/s+ farm, i.e. a bunch of GPUs all pointed at the same account. Mega diff also starts at 128, as stated here: "LTC High Hash server - variable difficulty of 128 - 1024 (BETTER FOR 25+ MH MINERS)"

    The fact that you yourself said the Nvidia doesn't get rejects disproves your theory of the server-side tuning being screwy. Have you tried any other mining software or pool servers and found the same results?
    My next question: are both cards on the same worker, and if so, what happens if you separate them? Next, have you actually tried any other configs for your 290?

  • My cards always get difficulty over 128 anyway, so I don't think mega vs. standard makes much difference in this case.

    I tried mining with a different pool, and sgminer still produces about 5-8% rejects, supposedly because the WU was unknown. This may well be an sgminer bug in 4.1.153 from Friday's git master branch.

  • I can confirm it is an sgminer 4.1.153 issue. I switched back to cgminer 3.7.2 and the rejects went down to 0%. The hash rate is down to 875 KH/s, but that is still about 35 KH/s better than the ~840 KH/s effective net rate (after rejects are accounted for) that sgminer yielded.
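
    Plugging the quoted figures into the same effective-rate arithmetic as above (Python; the numbers are just the ones reported in this thread):

        sgminer_effective = 913 * (1 - 0.08)  # ~839.96 KH/s after ~8% rejects
        cgminer_effective = 875 * (1 - 0.00)  # 875 KH/s with 0% rejects
        print(f"cgminer advantage: ~{cgminer_effective - sgminer_effective:.0f} KH/s")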
