HD 7990 shuts down PC when second GPU intensity is more than 12

edited January 2014 in Hardware support

Hi,
my HW is:

  • Asus M2NPV-VM motherboard
  • Athlon 64 x2 3200
  • 4GB RAM
  • Gigabyte HD 7990
  • Chieftec 1000W APS-1000C power supply

OS: Xubuntu

My problem is that when I run cgminer on the second GPU with -I 13, the machine shuts down, but on the first GPU -I 13 works perfectly.

Here are some tests:

Mining on the first GPU
--thread-concurrency 8192 -I 13 --worksize 256 -g 2 --gpu-memclock 1500 -d 0
[2014-01-14 18:23:51] GPU0                | (5s):551.4K (avg):590.5Kh/s | A:2230 R:0 HW:0 WU:587.6/m                   
[2014-01-14 18:23:51] GPU1                | (5s):0.000 (avg):0.000h/s | A:0 R:0 HW:0 WU:0.0/m   

Mining on the second GPU
--thread-concurrency 8192 -I 12 --worksize 256 -g 2 --gpu-memclock 1500 -d 1
[2014-01-14 18:29:56] GPU0                | (5s):0.000 (avg):5.317Kh/s | A:0 R:0 HW:0 WU:9.2/m                   
[2014-01-14 18:29:56] GPU1                | (5s):480.7K (avg):482.4Kh/s | A:512 R:0 HW:0 WU:422.5/m

Mining on both GPUs
--thread-concurrency 8192 -I 11 --worksize 256 -g 2 --gpu-memclock 1500
[2014-01-14 18:41:43] GPU0                | (5s):223.2K (avg):250.0Kh/s | A:0 R:0 HW:0 WU:215.4/m                   
[2014-01-14 18:41:43] GPU1                | (5s):240.2K (avg):263.2Kh/s | A:0 R:0 HW:0 WU:225.3/m


Configs that shut down the machine:
Both GPUs: --thread-concurrency 8192 -I 12 --worksize 256 -g 2 --gpu-memclock 1500
Second GPU: --thread-concurrency 8192 -I 13 --worksize 256 -g 2 --gpu-memclock 1500 -d 1

I also noticed that my CPU is at 100% when cgminer is running. I'm running cgminer with:
export DISPLAY=:0
export GPU_USE_SYNC_OBJECTS=1
export GPU_MAX_ALLOC_PERCENT=100

I also tried to mine on the same HW on Win7 and ended up with the same results, so I'm suspecting some HW problem.

Any suggestion would be appreciated.
Regards
Brumela


Comments

  • Are you using cgminer 2.7.2 or 3.7.2? The older versions have a CPU bug that pegs the CPU at 100%. If you are using an older version, I would try upgrading to a newer version of cgminer.

  • I have a similar problem: I have a 7850 and a 7950, and when I use one of these in my computer, the computer shuts down when I > 13.
    I suspect my undersized 500W power supply. Tomorrow I'll receive a 750W power supply.

    I will get back here to share the results after replacing the powersupply.

  • Hey, well I had the same problem!! I also use the HD 7990, but with a 1000W PSU.

    Try this:
    Set the clock limit to 1400
    and undervolt to 1.09.

    You can download Afterburner to set these up.

  • The 7990 can have as much as 6GB on-board, and the intensity value loosely defines the point at which your OpenCL kernel hits a 2GB limit if the compiler is defaulting to 32-bit pointers. With the SDK you're using, you can force 64-bit pointers by defining the environment variable ("set [/x] GPU_FORCE_64BIT_PTR=1" in NT, or similarly in your 'nix shell).
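
    In a Linux shell that would just be one more export alongside the ones already in the OP (a sketch; whether the SDK actually honours the variable depends on the driver/SDK version):

    export GPU_USE_SYNC_OBJECTS=1
    export GPU_MAX_ALLOC_PERCENT=100
    export GPU_FORCE_64BIT_PTR=1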

    Recent changes in the Radeon driver create a CrossFire-like linkage between compatible GPUs via the PCIe bus; no CrossFire connector is needed. This is possible (in my experience) even when a x1 -> x16 bus extender is used to connect one of the cards (which it appears you may be doing). You may want to check for such a linkage in CCC in the 'hardware' display and disable CrossFire entirely or via a program profile.

    It also seems strange that you're having cgminer set the gpu-engine frequency, and your system RAM seems less than adequate for mining scrypt (especially if it's DDR2).

    Just some observations; hope you find them helpful. ~3gg

  • edited January 2014

    @miner961
    Thanks for the 1.09 voltage tip. I managed to get better results (around 850 kH/s) with:
    --gpu-memclock 1500,1400 --gpu-vddc 1.09 -d 0,1 -I 13,11 -g 2

    This also works, but crashes after a few minutes:
    --gpu-memclock 1500,1400 --gpu-vddc 1.09 -d 0,1 -I 13,12 -g 2


    From my tests I can conclude that if I use the -I 13 and -g 2 parameters together on the second GPU I get a crash within a second, but if I use just one of them, mining works, though with a lower hash rate than expected.

  • @3gghead
    1. I will try GPU_FORCE_64BIT_PTR=1 ...
    2. I tried to disable CrossFire in Win7, but GPU-Z always showed that CrossFire is enabled. I have one HD 7990 in the x16 slot. Is it possible to disable CrossFire on an HD 7990?
    3. Yes, I have DDR2 RAM. Are you suggesting 4GB RAM or even DDR3?

  • edited January 2014

    If there is a way to physically disable AMD's new 'crossfire-over-the-bus' entirely (aside from reverting to an older driver/SDK, if possible), I have not been able to find it. Using CCC I've created a 'game profile' for cgminer.exe and disabled CrossFire in that; perhaps you could determine how effective this is by creating a similar profile for gpu-z.exe to see what it reports. In my experience, the display-connected card will offload as much work as it can to linked GPUs, to the point of heat-death.

    Regarding the DDR2 RAM: no doubt you're aware that DDR2 covers a range (voltage, clock speed, bus speed, etc.) of DIMMs and, of course, DDR2 sockets/memory buses are incompatible with DDR3 RAM types. However, if you're using slower DDR2 RAM than your mobo can support, it couldn't hurt to upgrade to the best DIMMs that it can use. For the Asus M2NPV-VM, it seems that would be PC2-6400 4-4-4 or 5-5-5 (DDR2C/D respectively), preferably in 'dual-channel' mode (bought as paired DIMMs). It's something worth considering if you're currently using 400MHz DIMMs; it would double your throughput (6400MB/s, hence the name) and decrease latency. 4-4-4 might give you a small performance increase over 5-5-5@800MHz, but your mobo may not be able to take advantage of it, plus it might be hard to find and pricey. You'd surely want to check your mobo's hardware compatibility list to better your chance of success.

    Your OP doesn't mention whether or not you're using the on-board GPU, but it seems more likely that you could use that as your primary display while mining on the AMD cards (though you'd want to add "--gpu-platform 0" to your cgminer command line). In that setup, you should be able to mine at "-i 20 -g 1" with no lag on the desktop. It's working for me, so that's why I mentioned RAM as a potential bottleneck; well, that and that your GPUs each have a 6GB/s data-rate via 2 x 2 x 384-bit PCIe bus connections (but not so much through a PCIe x1 slot). Possibly a hint that you might dodge the 'auto-link' issue using a PCI -> PCI-E x1 -> PCI-E x16 adapter chain, which I've not tried myself yet. The GPUs would be on separate buses, but the driver, while doubtful, may create a DMA bus-bridge and link the cards.

    Definitely NOT suggesting the use of DDR3 RAM; just that your system isn't "balanced."  Kind of like shoe-horning a 351 Cleveland engine into a Datsun 240Z chassis; it's gonna put some stress along the drive-shaft.  Those GPUs are just insanely powerful; forget mining and take over the stock market.  In any case, just sharin' my thoughts; have fun with the 7990s.

  • 3gghead, thank you for that great explanation.

  • You're overloading your PSU - get a kill-a-watt and see for yourself. OC'd RAM makes a huge difference in consumption.
    There's no use in swapping DDR2 for super-fast DDR2/DDR3; only RAM size can matter. A TC of 2xxxx-3xxxx is recommended for a 7950/7970 to get a decent hash rate. Underclock/undervolt your GPU core while keeping mem speed at 1250 or 1000 or even 900 (timings get tighter with lower clocks - so you'll be losing like 5-10% mem bandwidth compared to 1500MHz, but things will run way cooler and consume much less - my 7950 draws almost as much as an R270 but gives way more hashes).
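
    A rough cgminer sketch of that advice (the 1.09V figure is the one mentioned earlier in the thread; the 1000 core clock and 1250 memclock are just illustrative values for "underclock the core, keep mem at 1250", not tested settings):

    --gpu-engine 1000 --gpu-memclock 1250 --gpu-vddc 1.09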

  • And try not to keep the Molex/PCIe power connectors of both cards on the same PSU cable chain ....

  • You don't say what the GPU core clock is. I'd try setting the GPU core clock to a conservative value, just to be sure that the problem doesn't come from this - like adding --gpu-engine 980.

  • Also - I don't get why you mine with 2 threads/GPU - this doesn't seem to work well on anything over a 7850. 2x8192 = 16384 effective TC, which is somewhat small (do you get HW errors? if so - increase the TC).

    The two reasons to do -g 2, AFAIK, are: using the card for the desktop (hm, you don't use 3 of them at once, do you?) and mining on a 512MB HD5750 (black magic, really - 170 kH/s; without such a trick, system memory gets used and a ridiculous 40 kH/s is the result).
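
    If the cards aren't driving a desktop, the alternative would be dropping -g 2 and raising the TC instead, e.g. (24576 is just an illustrative value in the 2xxxx range mentioned above, not a tested setting):

    --thread-concurrency 24576 -I 13 --worksize 256 -g 1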

  • @brumela,

    Here's where you can get the kill-a-watt device that @xminer mentioned.

    If you want to check your power consumption at the wall outlet, you can use this:

    http://www.homedepot.com/p/P3-International-Kill-A-Watt-EZ-Meter-P4460/202196388?keyword=killowatt+ez

    Take the capacity of your PC power supply, say 750W, divide by 0.9, and write it down.  Then start over, divide by 0.8, and write it down.  In this example, this works out to 833 and 937.  If the draw from the wall outlet exceeds either of these numbers (customized for your unit), you may be overloading the power supply, depending on how efficient the power supply is.
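
    Applied to the 1000W Chieftec in the OP, the same arithmetic looks like this (just a sketch of the calculation at 90% and 80% efficiency):

    echo "1000 / 0.9" | bc -l    # ~1111W at the wall
    echo "1000 / 0.8" | bc -l    # ~1250W at the wall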

    I've found that using separate commands of cgminer along with device numbers and also trying to control the gpu engine speed leads to conflicts between the cards.  Use one instance of cgminer to run all three cards.  The best way to do that is with a config file rather than command line parameters.  You could try the configuration file shown in this thread, which talks about heat management, gpu engine management, and fan management.

    https://forum.give-me-coins.com/discussion/666/heat-death-how-to-kill-your-gpu-in-less-than-a-year#Item_7

    First, try to get all your cards running simultaneously in their stock configuration without overclocking or messing with the voltages.  For my cards not driving a monitor, I use TC = 0 and intensity = 19.  For my card driving a monitor, I use TC = 8192 and intensity from 12 - 15 depending on how much I'm using the computer.

    You may wish to load MSI Afterburner or ASUS GPU Tweak to reset the cards to their default conditions.  Er, scratch that.  Just saw you're using xubuntu.  Try running the aticonfig --help command in a terminal window.  You could pipe the output to a text file and then view it.  Look for the command to reset the clocks to default.  Read what the following commands do before executing.  But, it should be the following.  You may have to stick sudo in front but I'm not sure.

    aticonfig --adapter=all --odgc
    This will show what gpu's are recognized and their current clocks.  You can also issue commands for each card by doing --adapter=0 etc. and plugging in the adapter number.

    aticonfig --adapter=all --od-enable
    Enable overdrive commands.

    aticonfig --adapter=all --odrd
    Restore default clocks.

    aticonfig --adapter=all --odcc
    Commit the clocks.

    aticonfig --adapter=all --odgc
    Check the changes.

    Reboot.  Now all cards should be in their stock configuration.  Use the procedures in the "heat death" thread to save a config file.  Then tweak the file and set your scripts to call only ONE cgminer instance for all the GPUs.  Alter the config file I provided by initially setting all intensity numbers to 12 and all gpu-engine numbers to 0.  Adjust for the number of cards in your system.  Make sure you know which config file slot controls which card.  This is not always obvious.

    Once you get all three cards running in the stock configuration, then tweak the config file to set the intensity for the card driving the monitor, if any, to 12 (or up to 15 if the PC is lightly used), set the intensity of the other cards to 19, and play with the gpu-engine parameters.
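
    For reference, a minimal sketch of what such a config file could look like for the two GPUs on the 7990 (the pool URL and credentials are placeholders, and the values are just the starting points described above, not tuned settings):

    {
      "pools": [
        {
          "url": "stratum+tcp://your.pool:3333",
          "user": "your.worker",
          "pass": "x"
        }
      ],
      "intensity": "12,19",
      "worksize": "256,256",
      "thread-concurrency": "8192,0",
      "gpu-engine": "0,0"
    }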

    You can monitor the cards once they're running by running these commands in separate terminal windows.

    watch aticonfig --adapter=all --odgc
    watch aticonfig --adapter=all --odgt

    Hope this helps.

    Sincerely,

    Ron

  • I'm guessing that the problem might be that you are using -g 2 and not setting a TC. You don't know what cgminer uses as the TC.


    If you want to know what you're doing, set a TC.

    People with high results on a 7990 simply use:
    -I 13 -g 2 -w 256 --thread-concurrency 8192

    This should work.

    (Then, optimise gpu engine, memclock and voltage.)

  • edited January 2014

    I've had this problem when the power supply isn't up to delivering mining power full-time. What other peripherals are attached to the board? It's hard to believe a 1000W PSU isn't enough, but the insides may not be up to the job...

  • Hard to tell exactly, but based on this spec sheet:

    http://www.gigabyte.com/products/product-page.aspx?pid=4592#sp

    It looks like the GPU cards use 300W each.  So two of them would be 600W.  You would THINK that the 400W remaining would be enough for the balance of the system.

    Ron

  • edited January 2014
    Ron,
    You are forgetting the 75 watts from the PCI-E slots.  They can use up to 375 watts apiece.

    I would say that there is a very real possibility you are maxing that PSU out.  The 7990s are dual-GPU cards, so you are essentially running 4 GPUs in the machine.  Depending on the quality of the PSU, you may not be able to provide a stable 1000 watts.  I believe each card pulls 375 watts at reference speeds.  So if your CPU is pegged at 100% like you state and you bring both 7990s up, you are likely drawing too much from your PSU.


    This seems pretty common on this forum.  People build great rigs with multiple high-end cards but forget about their PSU.  The PSU is one of the most important parts of any computer.  A good, efficient, stable power supply is pretty essential.  I know the best brands and most efficient PSUs seem expensive, but they are worth it in the long run when you are running a rig like this.

    A cheap PSU will give you unstable power at full load, and use more energy doing it.