Compatibility question: SuperMicro x9SRH-7TF and Xeon E5-2665?



Hi All,

Total UnRAID newbie here, so excuse any silly questions...

 

I'm seriously thinking about doing my first UnRAID build, for combined NAS & other home-server uses.

I've also never built a standard PC before (although otherwise have experience in embedded systems), so it will be a learning experience on many levels.

 

I've been reading up gradually on both HW & SW matters, and wasn't planning on submitting a proposed build for a while yet.

 

I'll post a much more organized post with general questions and desired features eventually, but in brief: I want an ATX/uATX motherboard that supports a single-socket Xeon, ECC memory, 10-12 SATA drives, and 2+ PCIe slots.

I'm focused on SuperMicro boards for now because they seem to have a good rep, a wide choice of server-oriented boards, and the cost is very little above other vendors' for the same features.

 

So why the post?

Looks like I can buy a new SuperMicro X9SRH-7TF fairly cheap. It's an older design (it doesn't even have USB3 ports), but the attraction is the onboard 10GBase-T ports. I don't need them immediately, and certainly didn't expect to be able to include them in the budget, but they would be nice to play with further on, and the price now would be very little above a more modern SuperMicro board without 10GBase-T.

The snag turned out to be finding reasonable CPU options. This mobo has a single LGA2011-1 socket, and supports the Xeon E5-2600/1600 and E5-2600/1600 v2 families... Many of them have been End-Of-Lifed by Intel, and I could find almost none of the low-end models for sale new -- generally, one of the $200-$300 low-end 4-core versions would have been my choice. The higher-end ones are easier to find currently, but are typically >$1000.

 

I'd therefore given up on that mobo, and gone back to thinking of a SuperMicro X10SLL-F + 8-port PCIe SATA controller combo.

 

I've now come across the option to get a new, boxed Xeon E5-2665 (20M Cache, 2.40 GHz, 4 cores / 8 threads) for $200 delivered (MSRP was ~$1400 when new). Normally, this kind of CPU would be total overkill and I wouldn't think of it, but if it works, certainly for the price, maybe the mobo's still relevant.

 

So now for the questions:

1) Is that CPU price much too good to be true? I've seen it now from a couple of unrelated-looking sources, and earlier this year there was talk about a lot of used multi-core E5s coming onto the market for cheap.

 

2) Would the X9SRH-7TF be well supported by current UnRAID 6.x? It's a C602J chipset, and I've seen other boards with it mentioned in builds.

 

3) Specifically, is the Intel X540 10GBase-T controller well supported under UnRAID? It would be a bummer to find out later that it isn't :(

 

4) The Intel spec page for the E5-2665 says "Scalability: 2S Only". I couldn't find out exactly what this means. Normally, I'd understand it as "can be used in single-socket or dual-socket systems, but not more". However, the "Scalability" spec for other CPUs in the family is "1s", "2s" or "2s only".

"1s" obviously means single-socket only. But what's the difference between "2s" and "2s only"?

 

Maybe "2s" means "can be used in single- or dual-socket systems" and "2s only" means "can ONLY be used in dual-socket motherboards", in which case it wouldn't work. Anyone know for sure?

 

5) Any general disadvantages to this mobo/CPU combo? I'm sure it'll be a bit more power-hungry than others, but hopefully not outrageously so.

 

 


It's a little disappointing -- 48 views but no replies after several days...

 

Anyway, I've figured out that there are discussions of this in the forum, but I missed them by searching for the specific mobo & CPU. Had I searched for the Xeon E5-2670 rather than the 2665, there'd have been a lot of hits...

So basically, it's the tradeoff I was expecting: Cheaper than much weaker mobo/CPU combos because it's a dead end, CPU-upgrade-wise --

Not a problem for me (vs. the weaker-to-begin-with E3-1241 V3 I'd otherwise be using on a SuperMicro X10SLL-F mobo).

I'll probably need to add a PCIe USB3 controller to the X9SRH-7TF, but that's it.

 

The only immediate question is the added power usage of the E5. I'd like to figure out how much more power the E5 (and, if possible, the X9 mobo) draws when idling, which will be a lot of the time. I'm concerned not just about direct electricity costs, but about noise -- more heat would seem to imply beefier fans and so more noise.

 

The Intel datasheets only give TDPs (115W vs. 80W), which certainly won't be representative. I looked, but there doesn't seem to be a resource for idle power dissipation info. Anyone know numbers for the E5-2665 and/or the E3-1241 V3? Rough expectations?

 


Another question:

 

According to CPU-World, the earlier steppings of the CPU I'm interested in, the LGA2011-1 Xeon E5-2665, have the same non-working VT-d implementation as the E5-2670, as discussed in the long Good Deals E5-2670 thread.

 

I apparently need the C2 (QBVF, SR0L1) stepping.

 

This is the candidate new, boxed version, but the listing doesn't mention the stepping; I contacted the seller, and they don't have any tech info on what they're selling beyond the Intel part number, BX80621E52665.

Is it safe to assume that if indeed it's a new, retail-boxed SKU, the CPU won't have this bug, given the Intel Ark specs simply say VT-d is supported for the chip, without mentioning exceptions?

 

If not, anyone know how to contact Intel to find out? It would seem kinda odd if Intel expected customers to just accept randomly getting the version with or without the bug, given some customers care a lot about virtualization, and this is a $1400-MSRP product...

 

Thanks for any ideas.


... Is it safe to assume that if indeed it's a new, retail-boxed SKU, the CPU won't have this bug, given the Intel Ark specs

 

Yes, if it's a retail box it's safe to assume it meets the specifications noted on the Ark site.

 

 

...some customers care a lot about virtualization, and this is a $1400-MSRP product...

 

True -- and I'm sure it's not an issue for chips purchased through the retail channel. Whether or not an e-bay seller is accurately describing a unit that's selling for FAR less than the original price is not something Intel is going to warrant. However, Beach Audio is a fairly well-known reseller on e-bay, and I suspect these chips are as described -- i.e. "sealed original retail box" ... so you should be fine.

 


... also, they list all of the specs as shown on the Ark site in describing what they're selling => so I'd think if there was any issue with the CPU that e-bay's "buyer protection" would cover you in getting a refund.  Just be sure you install it and confirm the performance as soon as you receive it.

 


Thanks for the replies!

 

I would never assume anything, but I would think there must be a way to tell the stepping of the CPU from the markings on it or the packaging. Sending Intel an email asking them is not a bad idea either.

I'm sure there's a way to tell from either the box itself or the CPU markings (I expect Intel can tell from the CPU's serial number, which might be on the box as well), but I don't have access to those until I buy it.

I finally found an appropriate Intel support page for pre-sales questions, so I sent off the question, and we'll see what they say.

 


... Is it safe to assume that if indeed it's a new, retail-boxed SKU, the CPU won't have this bug, given the Intel Ark specs

 

Yes, if it's a retail box it's safe to assume it meets the specifications noted on the Ark site.

That's what I was hoping. As a noob to PC builds, that's why I very much wanted a boxed CPU (the last personal computer I messed with at the component level was my Commodore PET, in the late 1980s).

 

Whether or not an e-bay seller is accurately describing a unit that's selling for FAR less than the original price is not something Intel is going to warrant. However, Beach Audio is a fairly well-known reseller on e-bay, and I suspect these chips are as described -- i.e. "sealed original retail box" ... so you should be fine.

... also, they list all of the specs as shown on the Ark site in describing what they're selling => so I'd think if there was any issue with the CPU that e-bay's "buyer protection" would cover you in getting a refund.  Just be sure you install it and confirm the performance as soon as you receive it.

For sure. That's another reason for getting a boxed version -- there's no question as to what the sold item actually is.

  • 4 weeks later...

No, I haven't disappeared… I've been doing research on build components, and would now appreciate feedback on the proposed build.

Apologies if this is a bit long... As a build & UnRAID noob, I figure too many questions is better than too few  :)

 

The good news:

Intel Support got back to me, and confirmed that the boxed CPU is guaranteed to be the stepping with VT-d -- earlier steppings were sold only as tray versions to OEMs…

 

…The bad news is, all three sources for the boxed CPU ran out within a day after my last post, and there aren't currently any boxed E5s at reasonable prices (<$600) at all.

I suspect it'll be a long wait until additional ones appear, given they were EOLed 1.5 years ago.

 

After reading most of the (extremely useful) thread on the E5-2670, I've decided on a used E5-2665 & memory from Natex.us, given the recommendations here.

They're specifically selling the good stepping (incidentally, the 2670s are pretty much sold out everywhere).

 

Also, in the meantime the deal on the Supermicro X9SRH-7TF mobo is gone, so I've been looking at alternatives.

------------

Background intended use/needs:

-- Array: Centralized file serving for a household.

-- Mac time-machine backup

-- Background backup to cloud (only some of the data)

-- Media serving

-- Running dockers

-- Running VMs

 

Desired Features/reqs:

-- High degree of expandability: start with 5 HDs / 16TB + one parity drive + one cache SSD, go up to 10-12 HDs / 40-50TB + two parity drives + 2-3 SSD cache pool

-- Hot swap on all spinning drives

-- Ability to upgrade gradually (add a disk at a time, more memory, more gigabit Ethernet for link aggregation and/or SMB3 multichannel, eventually 10GBase-T)

-- Eventually add a PCIe 16x/8x video card for VM passthrough

-- Fairly quiet when disks not being accessed (machine will be right next to workdesk in my study)

 

Proposed build, with  questions below:

-- Case: Sharkoon T9 Value mid-tower ATX (1) 

-- Hot-swap cages: 2x iStarUSA BPN-DE350SS 5-in-3 trayless cages. One installed initially.

-- Motherboard: Asus Z9PA-D8/iKVM, (Almost) ATX (2)

-- CPU: 2x Xeon E5-2665 ("v1"). Similar to the famous E5-2670.

-- Memory: 32GB, 4x 8GB ECC RDIMMs, Samsung M393B1K70DHO-CKO.

 

-- PSU: One of the EVGA SuperNOVA G2 line (3)

-- Cooling:

---- CPU coolers: 2x Noctua NH-U9S 

---- Case cooling: (4)

 

(1) I was considering the just-announced SilverStone CS380 for its hot-swap ability, but its smaller sibling the DS380 has lots of cooling issues with apparently the same drive cage, and actual availability keeps getting delayed. No reviews are out yet, certainly no longer-term ones, so I decided not to wait for it. My only concern with the T9 is that it may be a bit noisy… I'll wait and see, and hopefully will be able to mitigate via quieter fans if needed.

 

(2) I wasn't originally considering a dual-CPU mobo -- I don't really need it -- but the Z9PA-D8 only costs me $5 more shipped vs. the Z9PA-U8 single-CPU version. It allows running more VMs in parallel while still doing media/file serving. Added cost over a single-CPU setup is ~$140.

 

(3) I need help with PSU sizing.

I tried 4-5 different power budget calculators, and got results all over the place, 350W-1000W.

I need a PSU that can eventually handle 10-12 HDs, 2 SSDs, a 10GBase-T NIC, and 1-2 low-end GPUs (30-40W each, for passthrough to VMs). The Xeons are 115W TDP each.

Asus manual for the mobo says minimum is 500W, though it gives no details on what that assumes.

I suspect that 550W would be OK but borderline power-wise; however, this mobo needs 2x EPS 8-pin power inputs in addition to the 24-pin ATX one, and it looks like only >=750W PSUs have those.

Is a 750W PSU reasonable, or overkill? I understand running a PSU at a low % of max capacity is inefficient; I live in a climate that requires A/C more than half the year, so unnecessary server heat also incurs extra A/C load. Also, more heat ==> more noise to cool.

Thoughts?

 

(4) Case fans: Initially I plan on using:

-- One of the case's included intake fans (120mm);

-- The fan that comes with the 5-in-3 cage (80mm), pulling air through the cage into the case (not controllable via mobo, AFAICS)

-- Case's included exhaust fan (120mm).

Does that sound reasonable to start with, especially as there'll initially only be 5 disks & no GPU?

Afterwards, I'll replace/augment with more fans if necessary due to cooling/noise issues.

 

All feedback welcome (-:  I have more questions on HBA and drive choice I'll post separately.

 

 


 

(3) I need help with PSU sizing.

I tried 4-5 different power budget calculators, and got results all over the place, 350W-1000W.

Wattage is irrelevant. What you need to do is add up the worst-case current draw in amps for all the components and make sure the 12V line (the only one that really matters) can supply that current with headroom to spare.

 

Most of the info is online, but when in doubt go big, not small. In your case, whatever calculator offered up a 350W supply (which will have at most 18A available on the 12V line) is completely out to lunch, as the draw of the hard drives alone during powerup is ~24A.

 

Assume 2-3 amps per hard drive. The mobo is an amp or two. Add-on cards are ~1A each. Cooling fans will all have their rating on the label: cheap fans ~0.25A, high-end server (high static pressure) fans ~1.5A.

 

CPU up to ~10A at full load.

 

Ideally you want a single rail on the 12V line. Multi-rail can also work, but you have to think about where the loads are on each rail and connect appropriately.
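Summing those rule-of-thumb amp figures for the full build discussed in this thread gives a quick sanity check. This is only a sketch: the per-component values are the rough estimates quoted above (not measurements), and the 25% headroom factor is my own assumption.

```python
# Rough worst-case 12V-rail current budget (e.g. all drives spinning up
# at once), using the rule-of-thumb per-component amps from this post.
def psu_12v_budget(hdds, cpus, addon_cards, fans,
                   amps_per_hdd=3.0,    # spin-up draw, upper estimate
                   amps_per_cpu=10.0,   # full load
                   amps_mobo=2.0,
                   amps_per_card=1.0,
                   amps_per_fan=0.25,   # cheap fans; server fans ~1.5A
                   headroom=1.25):      # assumed 25% margin
    total = (hdds * amps_per_hdd
             + cpus * amps_per_cpu
             + amps_mobo
             + addon_cards * amps_per_card
             + fans * amps_per_fan)
    return total, total * headroom

# Eventual config discussed here: 12 HDDs, 1 CPU, 2 add-on cards, 4 fans
draw, with_margin = psu_12v_budget(hdds=12, cpus=1, addon_cards=2, fans=4)
print(f"worst-case 12V draw ~{draw:.0f}A, size the rail for ~{with_margin:.0f}A")
```

By this estimate the eventual build peaks around 51A on the 12V rail, comfortably within a quality 750W unit's single 12V rail (typically ~60A).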

  • 2 months later...
2 minutes ago, wrestler said:

Did you get this built and how is it working?  I am interested in building the same.

The Z9PA-D8 mainboard was no longer available (at least in Europe) just when I was about to pull the trigger. After some intensive research, I settled on the ASRock Rack EPC602D8A as a replacement; I couldn't find any reasonably priced dual-CPU E5 (LGA2011-1) motherboards in the ATX form factor.

 

The ASRock is somewhat similar to the single-CPU version of the Asus, but is slightly better for my purposes: 12 native SATA ports rather than 6, and 7 PCIe slots rather than 5. It also has an internal USB3 header, so it supports 4 native USB3 ports rather than 2. It does have IPMI support.

 

What took longest to research was which CPU cooler would work... Both the ASRock & Asus use the narrow-ILM version of the Intel LGA2011-1 socket, and in addition have almost no space between the CPU and DIMM sockets. They were designed for commercial-server-type passive heatsinks with forced-duct cooling, or for extremely noisy small-diameter, high-RPM fan coolers (only available from brands I'd never heard of); almost no common consumer/prosumer coolers will work here.

 

I really wanted a decently quiet solution. After a lot of back and forth with Noctua, SuperMicro (re their SNK-P0050AP4 cooler, which isn't really low-RPM), and ASRock (previously with Asus as well), I settled on the Noctua NH-U9DX i4.

Even so, the cooler obstructs access to the two innermost DIMM slots; if I ever have RAM problems and need to replace those two, or even just do a test-via-DIMM-swap, the cooler will need to be removed, thermal paste reapplied, etc.

 

BTW, if you end up using the Asus Z9PA-D8 and are interested in using a Noctua cooler, to save you the 3 weeks (!) of correspondence I had with them on fan choice/placement: if you want airflow towards the rear of the case (I/O port panel), the only suitable Noctua cooler is the NH-U9S (assuming no height issue, which there shouldn't be with an ATX case); also, the fan location on one of the coolers would need to be reversed. Let me know if relevant, and I'll PM you or post the detailed drawing they sent me.

Incidentally, I'm very impressed with Noctua's presales support -- they spent hours obtaining accurate photos of the mainboard, making drawings of various coolers superimposed, etc.

 

I only received the final parts of the build a few days ago, so it's not done, but I did get UnRAID to boot with no issues on the very first try -- just mainboard & power supply, not assembled in the case yet. I'll report back in a few days once it's all assembled.

The only issue is that I'm getting warmer idle CPU temps than I'd like (48-50C); however, that could be because it's only using the CPU cooler fans, no case fans, so I'll hold off worrying until assembly is complete.

 

Final build list, excluding the array data disks:

-- Case: Sharkoon T9 Value mid-tower ATX <== My first impressions are very favorable. Very well made, and unbelievable that it only cost 51 Euros. (*)

-- Hot-swap cages: 2x iStarUSA BPN-DE350SS 5-in-3 trayless. One installed initially, the 2nd will go in once it's needed.

-- Mainboard: ASRock EPC602D8A, ATX

-- CPU: 1x Xeon E5-2665 ("v1").

-- Memory: 64GB, 8x 8GB ECC RDIMMs, Samsung M393B1K70DHO-CKO.

 

-- PSU:  EVGA SuperNOVA 750 G2 (750W)

-- Cooling:

---- CPU coolers: 1x Noctua NH-U9DX i4 (using both fans)

---- Case cooling: Initially (to be modified as necessary):

        Exhaust fan: 120mm fan that comes with case

        Intake fans: 2x 120mm fans that come with case

                            1x 80mm fan (fixed speed) on hot-swap cage

-- Cache drive: 1x 2.5" Samsung EVO 850 500GB SSD

 

----------

(*) One caveat about this case: All the 5.25" bays have metal tabs between them, so if you want to install a hot-swap cage that doesn't have channels accommodating such tabs, you'll need to flatten the tabs first; I used this G-clamp.

29 minutes ago, GreenDolphin said:

I really wanted a decently quiet solution.

 

For both quiet and SMALL did you consider a water-cooling solution?    These have a VERY tiny footprint on the CPU itself -- although you do, of course, have to find a spot for the radiator/cooling fan elsewhere in the case.

 

19 minutes ago, garycase said:

 

For both quiet and SMALL did you consider a water-cooling solution?    These have a VERY tiny footprint on the CPU itself -- although you do, of course, have to find a spot for the radiator/cooling fan elsewhere in the case.

 

Not really. I'm not familiar with watercooling in any detail, but:

-- I don't see any inherent reason a watercooled solution would be quieter than an aircooled one. The same heat wattage needs to be dissipated, after all (actually, the watercooled system adds some wattage for the pump, though that's probably pretty low), so the radiator fan has the same potential noise issues as aircooling fans.

-- While I haven't run any thermal calculations :), I doubt the overall thermal load is large. The CPU's TDP is 115W; while there are multiple disks, given the way UnRAID's array works, only one disk is written to at a time. I won't have any PCIe cards to begin with, and the graphics card I intend to eventually add for an OS X VM is only 15W max.

-- The system won't be highly stressed for hours at a time, like an overclocked gaming rig;

-- An ATX case should IMO have enough internal volume that decent fannage will do the job;

-- Given how uncommon narrow-ILM LGA2011 is to begin with, I suspect suitable watercooling components for the CPU will be even harder to find than aftermarket air coolers.

-- I very much dislike the failure modes of a liquid cooled system... Esp. in combination with a 24/7 server.

And there do seem to be a lot of accidents/failures. For a first build (well, since the late 1970s)? Probably not a good idea.

-- I doubt it's an accident that watercooling isn't commonly used in large server installations.

-- Water cooling has the reputation of being very expensive (anything over ~$100 for the entire cooling system is  "very expensive" in the context of this build);

-- Given the added system complexity & cost, it doesn't make sense to me to even think of watercooling unless I can't tweak the aircooled system to fix the issues (and I'm pretty sure the only issue I'm likely to have is noise due to the cheap-ish included fans; actual cooling performance should be sufficiently tweakable via the BIOS).


Liquid cooling has dropped a lot in price the last few years -- you can get a very nice system for well under $100

 

e.g. https://www.newegg.com/Product/Product.aspx?Item=N82E16835203017

 

or   https://www.newegg.com/Product/Product.aspx?Item=N82E16835181010

(this one requires a bracket adapter kit that's ~ $10)

 

And these are generally MUCH quieter than air-cooled solutions.   While it's true they need to dissipate the same amount of heat, they use larger fans than the air cooled units, which can run at lower rpm's for the same airflow -- which translates to quieter operation.    For example, both of these units use 120mm fans, compared to the 92mm fans on the Noctua cooler you bought => so the fans have 70% more airflow capability at a given rpm (assuming equally efficient blades).
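For reference, the ~70% figure above can be reproduced from the swept-area ratio of the two fan sizes alone. This is the simplification the post itself makes ("assuming equally efficient blades") -- real airflow also depends on hub size and blade geometry.

```python
# At the same rpm and similar blade design, airflow scales roughly with
# the fan's swept area, i.e. with diameter squared.
d_large, d_small = 120, 92  # fan diameters in mm
area_ratio = (d_large / d_small) ** 2  # ~1.70
print(f"~{(area_ratio - 1) * 100:.0f}% more airflow capability at a given rpm")
```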

 

15 minutes ago, garycase said:

Liquid cooling has dropped a lot in price the last few years -- you can get a very nice system for well under $100

 

e.g. https://www.newegg.com/Product/Product.aspx?Item=N82E16835203017

 

or   https://www.newegg.com/Product/Product.aspx?Item=N82E16835181010

(this one requires a bracket adapter kit that's ~ $10)

 

And these are generally MUCH quieter than air-cooled solutions.   While it's true they need to dissipate the same amount of heat, they use larger fans than the air cooled units, which can run at lower rpm's for the same airflow -- which translates to quieter operation.    For example, both of these units use 120mm fans, compared to the 92mm fans on the Noctua cooler you bought => so the fans have 70% more airflow capability at a given rpm (assuming equally efficient blades).

 

Interesting, thanks... Definitely cheaper than it used to be.

The below is strictly hypothetical for now, since I already have the air cooler. However, I'm always interested in learning new stuff :)

 

1) Neither of the above, AFAICS, fits my specific CPU socket (LGA2011-1 Narrow ILM). Both fit only the LGA2011 Square ILM (the Corsair with an adapter), which is mechanically different from what I have. I checked their instructions to be sure.

 

2) That's the minor issue.

More significantly, I still don't see how it improves either the case's overall cooling capacity or the noise:

My existing (free) case exhaust fan is also 120mm, and controlled by the motherboard just like these watercoolers' fans would be. Given the Noctua fans are silent (I can't hear them, at all, from 30cm away at 900RPM), there's no noise advantage.

Incidentally, spec-wise the Noctua fans are 18dBA vs. 30dBA for the Corsair and 35dBA for the Intel.

 

3) There's no other exhaust area on the case, so unless I drill an additional exhaust grille somewhere else on it for the watercooler's radiator fan, it can only replace the stock case fan, not augment it.

That is, to the extent the waterblock moves more heat off the CPU than the Noctua heatsink + 2x 92mm fans (see (5) below), that heat still needs to leave the case; removing it would come at the expense of removing other heat generated in the case, since the overall case heat-removal capacity is the same.

 

4) Ergo, the only scenario where these watercooling solutions would have an advantage is if the Noctua cooler can't deal with the CPU heat (*), while the case's total heat is otherwise within the bounds removable by a 120mm fan.

 

5) The fact that watercooling in general has much better cooling potential than air isn't relevant; it's far from a given that these specific water blocks can remove more heat than this specific (quite large) Noctua heatsink.

I note neither one gives any specs in that regard (BTUs removed per minute). Without that, who knows what the actual performance is?

 

6) The damage if it leaks is serious, comes with no warning, and the MTBF is a third of the aircooled system's (not trivial given unattended 24/7 operation).

If a fan fails, on either a watercooled or aircooled solution, at least the CPU and many other electronic components have thermal protection shutdown.

 

To really make use of a watercooled solution with my case, if there were a serious heat problem, I'm fairly certain I'd need to mount the radiator & fans outside the case.

 

Anyway, thanks for the ideas!

 

(*) Doubtful IMHO -- this model has been out for 2.5 years, was specifically designed for Xeon cooling, is spec'd for CPUs up to 140W TDP, and is used by many UnRAID and other home-server builds. If there were issues with it, they would be common knowledge by now.


The Noctua spec at 18db is with the low noise adapter (LNA) and for one fan running.    With two fans I suspect it's a bit louder -- but still very silent as long as you're using the LNA.    Both of the liquid cooling systems can also run the fans at lower speeds -- the Intel shows 21db at 800rpm; the Corsair site doesn't specify the noise level at lower rpms.

 

But I agree that if your Noctua is working well and is quieter than you can hear there's certainly no reason to switch :D

 

15 minutes ago, garycase said:

The Noctua spec at 18db is with the low noise adapter (LNA) and for one fan running.    With two fans I suspect it's a bit louder -- but still very silent as long as you're using the LNA.    Both of the liquid cooling systems can also run the fans at lower speeds -- the Intel shows 21db at 800rpm; the Corsair site doesn't specify the noise level at lower rpms.

 

But I agree that if your Noctua is working well and is quieter than you can hear there's certainly no reason to switch :D

 

 

Sure, this entire discussion is hypothetical... But half the reason I decided (after many years) to DIY a system, rather than buying a Synology or QNAP appliance, is so that I'd have an excuse to do stuff hands-on and learn new things. Thanks for indulging me :D

 

Just a minor correction -- according to Noctua's specs, that's ~18 dBA each without the LNA, and ~13 dBA with; since the dB scale is logarithmic, two fans together without the LNA would yield ~20.6 dBA (see here), and that's the maximum noise level, at 1600RPM. Noctuas are apparently pretty amazing -- every review I've read shows them both quieter and cooler than virtually anything else.
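For the curious, the logarithmic addition above works like this. A per-fan level of 17.6 dBA is assumed here, which is what the quoted ~18/20.6 figures imply:

```python
import math

def db_sum(levels_db):
    # Incoherent noise sources combine by summing their underlying
    # acoustic powers, then converting the sum back to decibels.
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Two identical fans at ~17.6 dBA each (Noctua rounds this to ~18):
print(round(db_sum([17.6, 17.6]), 1))  # → 20.6
```

Note that doubling identical sources always adds ~3 dB, regardless of the starting level.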

As I mentioned above, so far I've only done testing outside the case; both with and without the LNA yielded the same results (~48-50C, 900RPM), and the same whether idling in the BIOS or running MemTest86 (I'm not sure I understood its display, but I think only 1 core of the 8 is used in the test).

I'll wait until I have the system fully assembled and all fans attached to see what temps I get (and sound levels ;-)) at various loads -- I'll definitely have questions at that point... I want to achieve as low baseline idling temps as reasonably possible -- that's where the machine will be spending most of its time.


Agree that Noctua makes some amazing fans and heatsinks -- I've used quite a few of their fans over the years.

 

MemTest86 can be configured to use all of the cores -- look at the options when you first start it. By default it uses only one core; you can configure it to use a different core on each pass, or to use all of the cores at once.

 

 

