Computer Case to house 3-4 graphics cards??

edited December 2013 in Hardware support

I am planning to get four 280X cards and start mining.  I wonder what motherboard & case would take 4 cards.  Since I live in a small apartment room, I need a quiet case to hide all the cards & noise. Will an ATX mid-tower or full tower be able to hold all 4 cards?

Comments

  • Hi Danielng,

    I'm going to give you "A" solution.  But, I'm afraid it doesn't meet your small and quiet requirement.  I think small and quiet contradicts wanting to run 4 double-width graphics cards that may be chewing up 1000 W of power.  You need lots of real estate on the motherboard, lots of real estate in the case, and lots of air flow.

    I like the Antec P280 case.

    http://www.antec.com/product.php?id=704504&pid=3

    It's NOT small.  In fact, it's big.  This is definitely full tower.

    To accommodate 4 double width cards, you'll need a case that accommodates at least 8 expansion cards.  This one will handle 9.  So, you have room for some usb ports on the bottom slot or something.

    You can search newegg.com or frys.com, etc. for AMD-based motherboards with 4 PCI-E slots.  However, the last slot is usually right at the edge of the board, so the double-width card hangs over the edge, which is why you need a big case.  Also, the PCI-E slots must be spaced at least double width apart for the cards to fit.  They usually are.

    Check the power connectors on those cards.  They usually need 1 PCI-E cable from the power supply for older cards and 2 cables for newer faster cards.  That means a big beefy power supply with up to 8 PCI-E cables.  I like Corsair, although there are many good brands.  You're probably looking at 1000W - 1200W.  That also means you're going to have to be able to pull 10 A - 12 A or so from the wall outlet without tripping a breaker.
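
    If you want to sanity-check the breaker math, here is a minimal sketch in Python.  The 120 V outlet and the 85% efficiency are my assumptions, not figures from this thread; use 230 V where that applies.

        # Minimal sketch: outlet current for a given wall-side draw.
        # Assumes a 120 V circuit (an assumption, adjust for your region).
        def wall_amps(wall_watts, volts=120):
            return wall_watts / volts

        # A 1000 W or 1200 W PSU near full load at an assumed 85% efficiency:
        for psu_watts in (1000, 1200):
            wall_watts = psu_watts / 0.85
            print(f"{psu_watts} W PSU -> ~{wall_amps(wall_watts):.1f} A from the wall")
        # 1000 W PSU -> ~9.8 A; 1200 W PSU -> ~11.8 A (the 10 A - 12 A above)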

    I recommend this to monitor your power usage.

    http://www.homedepot.com/p/P3-International-Kill-A-Watt-EZ-Meter-P4460/202196388

    Your power supply should be able to provide 80% - 90% of what you see on the meter to the pc.

    In other words, if you're drawing 450 W from the wall, then your pc can only use about 360 W - 405 W of that.

    If your pc isn't getting enough power, it will probably shut down or crash.  The harder you tax the system, the more likely this is to happen if the power supply is too small.

    To find out what meter reading means you're getting close to the max of your power supply, take the power supply rating and divide by .8.  For example, if the power supply is rated at 1000 W, 1000 W / .8 = 1250 W from the wall.  That's if the power supply is 80% efficient.  If it's 90% efficient, you'd divide by .9: 1000 W / .9 = 1111 W from the wall.  So, in this example, if the reading on the meter gets anywhere near 1100 W, you're probably very close to overtaxing the power supply.
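
    The same arithmetic as a tiny Python sketch, just restating the math above:

        # Wall-side draw when the PSU delivers its full rated output:
        # rating / efficiency, exactly as worked out above.
        def max_wall_draw(psu_rating_watts, efficiency):
            return psu_rating_watts / efficiency

        print(max_wall_draw(1000, 0.8))  # 1250.0 W from the wall at 80% efficiency
        print(max_wall_draw(1000, 0.9))  # ~1111.1 W from the wall at 90% efficiency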

    Air flow will be the lifeblood of your system.  You have to keep things cool.  I recommend removing the case fans that Antec provides and replacing them with Corsair AF 120 or 140 Performance Edition fans, which have better bearings and move air at higher velocity.  I recommend installing 2 intake fans in front, and removing the stock back / top fan(s) and installing 2 exhaust fans in their place.

    Be sure you keep a close eye on the temperatures.  These things are running at maximum load, and if anything fails, things could go downhill rapidly.  Make sure your case fans are working and your intake and exhaust ports are clean.  Be sure to check them regularly.

    I use SpeedFan to monitor CPU temperature.  While the CPU isn't affected much by mining, an increase in its temperature could indicate an air flow problem.

    http://www.almico.com/speedfan.php

    I use GPU-Z to monitor the gpu temps.

    http://www.techpowerup.com/gpuz/

    I'm not trying to discourage you from your project.  You may find a better solution.  What I've proposed will be hard to hide.  It will generate lots of heat.  It's not really noisy, but it's not silent.  But, these are just some things you need to think about.

    Good luck.

    Sincerely,

    Ron



  • Have you ever checked how much noise you hear from the next room when your current GPU is at max fan speed?

    The thing is, 4 cards on a normal board (you can't really use risers inside a case...) will generate a shitload of heat.
    (Is there even a board that supports FOUR?  I have only heard of three so far, taking into account that every card takes up two slots nowadays.)

    Either they will permanently overheat, which gives you bad kH/s, or they will die off far earlier than usual - which gives you a bad balance overall.
    Also, check http://www.reddit.com/r/litecoinmining/comments/1dvit7/the_powered_riser_problem_is_there_a_hardware/
    You will want molex risers.

    Coming back to my first question: if the answer is no, try a rig like the one explained here:
    http://www.cryptobadger.com/2013/04/build-a-litecoin-mining-rig-hardware/
    Lower temps = lower fan speed = lower volume overall. Of course there's no noise reduction from a case when there is no case at all :-)

  • Arogtar,

    I started to say that, of course, you can get boards with 4 double-spaced PCI-E slots.  Both the MSI 790FX-GD70 and MSI 890FXA-GD70 qualify.  However, on a quick tour through newegg's current listings, I couldn't find any with 4 PCI-E slots ALL on double-width spacing.  So, maybe they're much harder to find now.

    I have to look at the links you quoted in more detail as I've only scanned them.  They look interesting.  I do have a rig in a computer case running 3 cards.  I like the mechanical stability and protection that this provides, although I'm starting to see that there are other options.  GPU temps hover around 80 C for the cards with another card right next to them.  The last card, with no other one next to it runs about 70 C.  I'm hoping to add a 4th card and we'll see what happens.

    I have lots of air flow from high velocity case fans so the gpu's don't seem unhappy.  Whether I can run 4 at full blast in the case, I don't know yet.

    Like I said, I've only scanned the links you posted, but I don't see the big ruckus about power or about whether the slot can provide enough of it.  If you attach the aux power cables from the power supply for PCI-E to the card, usually one or two of them, then the slot shouldn't have to provide much power at all.  Can you elaborate on that if possible?

    Sincerely,

    Ron


  • edited December 2013

    Hi Ron,

    The PSU connector which powers your (and any other) MB was not designed for powering 3+ GPUs at 100% all the time. GPUs have separate connectors, yes, but the manufacturers only added them because you can only take 75 W via the PCIe (x16) slot, and your GPU easily exceeds that value.
    The 6-pin connectors give another 75 W, and the 8-pin 150 W. Still, the GPU will take what it can get from the MB.

    And if all the stuff on the MB ends up anywhere near ~350 W, you might end up with a burnt connector.
    And since that 350 W is a maximum value, in my eyes nowhere near "safe" on your casual bad-luck day, just do the math:
    3 x 75 W = 225 W... add some CPU etc. ... you're going to be hitting 250-275 W, maybe even 300 W. I'd say 275 W+ is already in the "red zone".
    Therefore: Molex risers ftw :D
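
    A minimal sketch of that budget.  The ~350 W ceiling is the figure quoted in this thread, and the CPU/misc overhead is my guess, so treat both as assumptions:

        # Rough 24-pin budget: each x16 slot may pull up to 75 W through the board.
        PCIE_X16_SLOT_W = 75
        LIMIT_24PIN_W = 350  # ceiling quoted in this thread, not a verified spec value

        def board_draw_w(num_gpus, cpu_misc_w=50):  # cpu_misc_w is a guess
            return num_gpus * PCIE_X16_SLOT_W + cpu_misc_w

        for n in (3, 4):
            print(f"{n} GPUs -> {board_draw_w(n)} W of {LIMIT_24PIN_W} W")
        # 3 GPUs -> 275 W (the "red zone"); 4 GPUs -> 350 W (right at the limit)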

    [edit]
    > "If you attach the aux power cables from the power supply for pci-e to
    the card, usually one or two of them, then the slot shouldn't have to
    provide much power at all."
    First: please attach all cables. The slots are there for a reason. Your GPU might throttle or stuff if it isn't fed ;)
    Second: In theory, if your card only draws 225w you MIGHT be fine because it could potentially draw all the power from the 6 & 6+2 connectors. This would require that the power is drawn from those first, and from the MB last.

    I have several problems with that:
    1) I don't know if it works that way
    2) I can't seem to find anyone via Google who has tested this
    3) I wouldn't feel safe about my rig without someone doing a test with proper laboratory instruments :)
    4) If your GPU needs more than 225 W, you're f***ed either way :D
    [/edit]

    [edit2]
    > "of course you can get cards with 4 double spaced pci-e slots.  Both the MSI 790FX-GD70 and MSI 890FXA-GD70 qualify"
    After getting my eyes on a more detailed image I can just say:
    I don't think the card will go there without "softly opening" the bottom of your case.
    [/edit2]

    Greetings
    Oliver

  • This is what I ended up choosing, and though it might not be what you are looking for, I actually think small and silent is not possible with the HW you want to run. Too much heat to dissipate.

    Gigabyte Z87X-OC

    Reasons

    4 x PCI-E x16-length slots (x8/x4/x4/x4 electrical), double spaced

    Additional PCI-E power plug (6-pin) on the MoBo to feed the 4 PCI-E slots. No need for powered risers.

    DIP switches to turn individual PCI-E slots on/off. No need to remove or unplug any cables during testing.

    Voltage measuring headers on MoBo, cables included.

    A myriad of other more or less useful OC switches and features.

    For a case I recommend AeroCool Strike-X Air.

    Not really a case, more like an open test bed, but with 10 expansion card slots and an open construction it's perfect for what I needed. Not the greatest build quality, but it just sits on the table; it doesn't have to withstand a crash test.

    With 2 riser cables I installed 4 x 7950 (Sapphire Vapor-X). The MoBo comes with an expansion card bracket, which I attached to the rear of the case to support the 2 raised cards. The horizontal support bracket attached at the back of the case made a perfect support for the raised cards at the front. Had to drill a few holes and get a few screws, that's all.

    I removed the top 20 cm fan and lid, but you could squeeze in all 4 cards with 2 riser cables and maintain sufficient cooling while having them all sit at the same level. I don't recommend it, though.

    I live in the tropics with average daily temps over 30 C (86 F). All 4 cards run comfortably below 65 C with their fans rarely reaching 3000 rpm. I pull about 630 kH/s per card. I don't use AC in that room, only a 54 W portable fan, albeit directed at the cards.

    A Seasonic X-Series 650 W PSU is able to run 3 cards rock stable. I'm using 2 of them to run all 4, but will get a single 1250 W unit.

    Pentium G3420 STD

    CoolerMaster Hyper 212 EVO CPU cooler

    2 x 8 GB PC3-19200 (2400 MHz, CL10) GEIL Trident @ 1.65 V

    120GB SSD Kingston V300

    On the next rig I'd get cheaper RAM and a cheaper 120 GB SSD, but would not skimp on the MoBo. It proved invaluable in testing and setup. Trying to run 2 x 650 W PSUs was a mistake: these Seasonic models have load-sensing circuits, and the 2nd would not "fire on all cylinders" because I could not load all the relevant circuits (its 24-pin MoBo plug goes unconnected). Wanted to save on a UPS but it didn't work out. Lessons learned.

    JC

  • Hi Arogtar (or Oliver),

    Thanks for the info you posted.  I did a little more research on the power stuff.  I'm certainly not a PC power expert.  However, I did find that the PCI spec has very specific requirements for power usage and sequencing.  There are specific timing requirements that dictate when and if a card can draw more power, and where it can get it from.  Those depend on the system having one unified, spec-compliant power supply, on all cards being spec compliant, and on everyone playing by the rules.

    My conclusion is that I should be able to stuff my MB full of cards, and if the power supply has enough PCI-E cables to supply all the cards, everyone should be happy.  Having said that, I think this power supply has 6 of the 6+2 PCI-E cables.  So, I'm fine with 4 older cards that only use one PCI-E power cable each, and I could even throw in a couple of newer cards that take two power cables, but I don't have enough power capacity for 4 newer cards with dual power cables.
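
    A trivial Python sketch of that cable-count check; the six-cable figure is this power supply's, and the per-card cable counts are the ones described above:

        # Do the cards' PCI-E aux cable needs fit the PSU's cable count?
        def fits(cables_per_card, psu_pcie_cables=6):
            return sum(cables_per_card) <= psu_pcie_cables

        print(fits([1, 1, 1, 1]))  # True:  4 older single-cable cards
        print(fits([1, 1, 2, 2]))  # True:  2 older cards plus 2 newer dual-cable cards
        print(fits([2, 2, 2, 2]))  # False: 4 newer dual-cable cards would need 8 cables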

    Anyway, it seems to me that if all players are following the spec, then no matter how full the chassis, it should be impossible to overload the system, assuming all the aux power cables are connected and are coming from the same power supply.

    Regarding having to modify a case to fit all the cards in: many, if not most, tower cases only support 7 expansion slots.  You have to get a case with space for 8, and preferably 9, expansion slots for this to work.  Those are rarer.

    I still have to check out some of the riser and more open structure options.

    Sincerely,

    Ron


  • edited December 2013

    Hi Ron,

    I'd be careful with that ;)

    How much power does each PCI-e slot draw? According to the PCI-E data sheet available online, a x16 PCIE slot can draw up to 75 W, while a x1 slot can draw up to 25 W. The key here is "draw up to" that number. I'm going to go out on a limb and say that the average scrypt/SHA-256 miner is right at that limit with each card, as a high-end 7950 or 7970 will draw 200+ watts per card. If you add in a CPU, HDD, and other typical usages of a motherboard, you're looking at trying to pull 300+ watts through the 24-pin connector, which is what causes it to burn up.
    How much power total do I need to provide to my PCI-e slots to avoid catastrophe? According to this site, the maximum amount of power that can be drawn from a 24-pin connector is 355 watts. All one would have to do is add up what a typical draw is prior to adding GPUs, and you could understand how quickly a hashing system + 4 GPUs would exceed that limit.
    Is using powered risers even enough to power 4 of the most state-of-the-art graphics cards? The reality is that the only time we see motherboards burn up is when 4 or more cards are used and powered risers are not used at all. This suggests that, to be safe, you should use powered risers/EVGA Power Boost on the 3rd card (as a safety measure), and that they are an absolute requirement for the 4th card and any additional cards.


    According to all the info collected so far, three x16 connections are the absolute maximum you should be running on your PCI-e slots without using risers.

    In theory, though, you could just go with 1 GPU on the board itself and e.g. 6 x1-to-x16 risers.
    They wouldn't even need to be powered, since the x1 slots can supply a maximum of 25 W.

    On the other hand, I must admit: I don't know if it works that way.
    Will a x16 card connected via a x1 riser to a x16 slot try to pull 75 W over the x1 riser? I don't know that.
    If the answer is yes, you are probably killing at least your mainboard faster than you can count to three.
    So my advice is to play it safe: invest 10-15$ more per PCI-e slot and use molex risers.
    No use in burnt hardware for anyone!
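
    To make that per-slot model explicit, here is a sketch under my assumptions: an unpowered riser passes the slot's power budget through, a molex riser feeds the card from the PSU directly, and a card on a x1 riser really respects the 25 W cap (which is exactly the open question above):

        # Max wattage each slot/riser type lets a card pull THROUGH the board
        # (assumed model, not a tested result).
        SLOT_CAP_W = {"x16": 75, "x1_riser": 25, "molex_riser": 0}

        def board_slot_draw_w(slots):
            return sum(SLOT_CAP_W[s] for s in slots)

        print(board_slot_draw_w(["x16"] * 3))                    # 225 W: the 3-card maximum
        print(board_slot_draw_w(["x16"] * 4))                    # 300 W: why a 4th card is scary
        print(board_slot_draw_w(["x16"] + ["molex_riser"] * 3))  # 75 W: molex risers offload the board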

    Regards,
    Oliver

  • Hi Oliver,

    Thanks for this additional info.  I'm going to have to review it in more detail later.  I'm not trying to be controversial, but I don't see why there would be any problem.  The motherboard, cards, and power supply are all designed to work as one coherent unified system, through the specs.  I don't have to know how much power they can draw; the cards already know.  A properly running and cabled system should be able to run every slot at maximum power, and every aux power cable from the power supply at max power, continuously.  If this is not the case, I definitely need to do more research.

    If risers are being used that don't connect all the circuit card edge connectors, or that disrupt the power supply's load sensing and control circuits, then I can see a problem.  I could be wrong, but I don't think the PCI power spec is written to exclude the motherboard from being used to its full capacity.  I think each slot has maximum power limits, and they should all be able to run simultaneously.  I could be wrong, though, and there is always more to learn.  8-)

    Sincerely,

    Ron


  • PS, I think I see a flaw in the above analysis.  Everything in the system that has its own power cable coming from the power supply will not be pulling power through the 24-pin motherboard connector.  So, hard drives, optical drives, GPU aux power, etc. all pull power directly from the power supply.  The only power pulled through the 24-pin motherboard connector is power that goes through the motherboard: power for the motherboard itself, the RAM, the CPU, sometimes the fans, and power going to the card slots.  I still believe that it should be completely impossible to overload a properly functioning system with an adequately beefy power supply using spec-compliant components.

    Now, it may very well be possible to over HEAT a properly functioning system, and it may very well be possible to overload defective components, such as improperly crimped cable assemblies, etc.

    Sincerely,

    Ron


  • edited December 2013

    Hi,

    > The motherboard, cards, and power supply are all designed to work as one coherent unified system, through the specs.
    I don't think any of these boards was designed for four PCI-e x16 slots drawing 100% power permanently.

    Most people with burns reported that after like 14-60 days, so it takes some time for the damage to occur.

    Also, consider that the cards usually limit the draw by their sheer size: 3 cards x 2 slots = no more slots on most boards.
    Yes, some boards might take up to 4 - that might still be within range, although I don't think for 24/7.
    But at the point where you use risers... cough... I think you're outside the component designers' expectations ;)

    I just found a post that states something about the CPU plug, though - the 4-pin seems to be designed for 192 W, and the 8-pin for 384 W... so I don't think the CPU takes any power from the 24-pin connector at all.

    I've also found something else:
    > All sizes of ×16 cards are initially 25 W; like ×1 cards, half-height cards are limited to this number while full-height cards may increase their power after configuration. They can use up to 75 W (3.3 V/3 A + 12 V/5.5 A), though the specification demands that the higher-power configuration be used for graphics cards only, while cards of other purposes are to remain at 25 W.
    In combination with the separate CPU powering, this basically means that four GPUs should actually be safe - which speaks for your hypothesis of "trust the standard!" :D

    Of course this is still only valid if things work that way and there is really no power drawn from that 24-pin connector in any other way.
    How much does the mainboard itself take? I've found sources that state up to 45 W.
    How many watts does the mainboard lose on the way? 90% efficiency? 95? 85? This would also increase the load on the 24-pin plug.
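
    Pulling those numbers together in a sketch: the 45 W board draw, 75 W per slot, and 355 W ceiling are this thread's figures, the CPU is assumed to be fed entirely by its own plug, and the conversion efficiency is my guess:

        # 24-pin load with the CPU on its own 4/8-pin plug: slots + board only.
        SLOT_W, BOARD_W, LIMIT_24PIN_W = 75, 45, 355  # thread's figures, not verified

        def load_24pin_w(num_gpus, efficiency=1.0):
            delivered = num_gpus * SLOT_W + BOARD_W
            return delivered / efficiency  # conversion losses raise the input draw

        for eff in (1.0, 0.90):
            print(eff, "->", round(load_24pin_w(4, eff)), "W of", LIMIT_24PIN_W)
        # 1.0  -> 345 W: just under the ceiling ("four should be safe")
        # 0.90 -> 383 W: losses on the board could push it over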

    Cheers
    Oliver
