
storage systems

As you probably know, Thunderbolt 2 is a speed bottleneck (< 20 Gbps), so even a simple PCIe Thunderbolt expansion box with an NVMe PCIe SSD card should easily max out the link even with a single SSD (e.g. Samsung 970 EVO).
That's true, and thank you for pointing that out. I should have been more specific with my ask, by specifying I'm interested in Thunderbolt 3-capable enclosures. I fear I'll be needing to replace my current seven-year-old Mac Pro with an iMac Pro in the near future and hope to future-proof my enclosure purchase.
 


Just happened to see this post. Does the comment about maintaining "an excellent power supply" refer to using a higher-quality power supply than the one that ships with a given drive, or to considering an uninterruptible power supply?
I'd say both, unfortunately. Based on the plastic enclosures and the external power supplies shipped with EasyStore and like drives, I'd have serious concerns about long-term viability if the drive is used a lot. There is no active ventilation and the drives are almost 100% enclosed in a shell with tiny holes.

On the power supply side, the external power supplies shipped with these units also do not fill me with the warm fuzzies. They're likely adequate for the warranty period, especially if you have clean power... But a UPS will certainly help, as most UPSes also address brownouts, spikes, etc., reducing the wear and tear on any power supply downstream from them. This is especially relevant in the countryside, where line power may sag and spike a lot as inductive loads come on and off (think AC compressors, cheap motors, and the like).

In college, I had the pleasure of enjoying 4-5 power cuts a night thanks to the university hanging 3 college dorm rooms plus the lounge, etc. off of one circuit breaker. Half a year later and the power supply in my DuoDock was toast. A UPS would certainly have helped. Thankfully, the university addressed that issue.

My current server has three 120 mm fans dedicated to just the HDDs, a UPS, and a generously sized power supply (600 W capacity for a 100 W load), so those drives should be happy running pretty much forever at 29°C.
 


Does anyone have experience with the OWC Thunderbay 4 Mini with 2.5" SATA SSDs?
I looked at Thunderbolt 3 storage with NVMe blades, and while the speeds can be great, the cost is high. I think I could live with the roughly 500MB/s speed of a SATA3 SSD if this OWC can sustain it in JBOD mode. OWC rates the speed of the case as "up to 1556MB/s", which must be only in a RAID mode.
 


Does anyone have experience with the OWC Thunderbay 4 Mini with 2.5" SATA SSDs?
I looked at Thunderbolt 3 storage with NVMe blades, and while the speeds can be great, the cost is high. I think I could live with the roughly 500MB/s speed of a SATA3 SSD if this OWC can sustain it in JBOD mode. OWC rates the speed of the case as "up to 1556MB/s", which must be only in a RAID mode.
I have an older Thunderbay and it's OK (3.5" drives, JBOD), but it lacks proper drive vibration isolation (you can hear it in another part of the house... subtle, but it's a metal housing and not isolated). Of course, you wouldn't need to worry about that issue with SSDs ;)

The OWC website has specs on this ThunderBay 4, which only does software RAID. It can take SATA 6 Gbps SSDs. As you stated, its throughput in the 1500 MB/s range is likely RAID 0.

Putting in four Crucial BX500 2TB SSDs, this ThunderBay 4 Mini would set you back around $1200, more if you went with Samsung 860 EVO SSDs.
 


Does anyone have experience with the OWC Thunderbay 4 Mini with 2.5" SATA SSDs?
I looked at Thunderbolt 3 storage with NVMe blades, and while the speeds can be great, the cost is high. I think I could live with the roughly 500MB/s speed of a SATA3 SSD if this OWC can sustain it in JBOD mode. OWC rates the speed of the case as "up to 1556MB/s", which must be only in a RAID mode.
I do have experience with an OWC ThunderBay 4 Mini but in the Thunderbolt 2 version, purchased in October of 2017. I purchased an exact duplicate a few months later for use in a relative's home. However, I've only used my ThunderBay 4 Minis in the provided SoftRAID's RAID5 configuration, so I can't speak to use in JBOD mode.

What I can speak to, from my experience with this product, is my record of reliability with it. Both units I purchased have been rock-solid with no issues whatsoever with either the enclosure or the four OWC Mercury Extreme Pro 6G SSD (500 GB each) drives. And, of course, the provided SoftRAID XT software has been excellent as well.

This is in contrast to an OWC ThunderBlade (2TB), which is Thunderbolt 3-connected. Since purchasing the ThunderBlade last February, I've needed to return it to OWC three times for replacement: once for an SSD (OWC Aura P12 500GB) that SoftRAID alerted me had failed, again for a drive that kept throwing errors in SoftRAID, and a third time for drives spontaneously unmounting. The unmounting of drives was remedied by installation of upgraded firmware, but that procedure could only be performed by OWC at their factory.

While an irritation, of course, OWC's customer service was excellent each time, and they remedied the malfunctions quickly and under warranty. There was no cost for anything, including shipping, and the repair turnaround time was speedy. So there's that to consider as well.
 



I would like to request opinions on the fastest Thunderbolt hardware RAID enclosures available, with or without SSD drives provided in the purchase. I would like the enclosure to handle the fastest SSD drives available, so if there are recommendations for that as well, I'd love to hear them.
A perfectly reasonable question, but to pick nits, it's probably worth noting that Thunderbolt 3 is essentially a PCIe 3 x4 connection, and this isn't sufficient to support "the fastest SSD drives available".

It's fast enough to support virtually any existing single M.2 NVMe SSD blade at full speed, because these devices are also based on a PCIe 3 x4 connection. But anything faster - a high-performance PCIe datacenter SSD, or a striped (RAID 0) array of multiple M.2 blades on a card - will be performance-limited by Thunderbolt. These types of things would need to be plugged directly into a sufficiently fast/wide PCIe slot to hit full speed (and so aren't really worth the cost if your plan is to put them in a Thunderbolt 3 enclosure).
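
To put rough numbers on that point, here's a back-of-the-envelope sketch; the per-lane PCIe figure and the ~22 Gbps Thunderbolt 3 PCIe-data budget are approximate published values, not measurements:

```python
# Back-of-the-envelope Thunderbolt 3 / PCIe 3.0 bandwidth check.
# All figures are approximate published values, not measurements.

GT_PER_S = 8e9            # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130      # PCIe 3.0 uses 128b/130b encoding

lane_MBps = GT_PER_S * ENCODING / 8 / 1e6   # ~985 MB/s per lane
x4_MBps = 4 * lane_MBps                     # ~3940 MB/s for a x4 device

# Thunderbolt 3 is a 40 Gbps link, but only about 22 Gbps of it is
# available to PCIe data (the rest is reserved for DisplayPort, etc.).
tb3_pcie_MBps = 22e9 / 8 / 1e6              # ~2750 MB/s ceiling

print(f"PCIe 3.0 x1 lane:          ~{lane_MBps:.0f} MB/s")
print(f"PCIe 3.0 x4 (NVMe blade):  ~{x4_MBps:.0f} MB/s")
print(f"Thunderbolt 3 PCIe budget: ~{tb3_pcie_MBps:.0f} MB/s")

# A single fast NVMe blade already brushes against the Thunderbolt ceiling;
# a striped pair (or a datacenter SSD) clearly exceeds it, so it only runs
# at full speed in a native PCIe slot.
```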
 


Does anyone have experience with the OWC Thunderbay 4 Mini with 2.5" SATA SSDs?
I looked at Thunderbolt 3 storage with NVMe blades, and while the speeds can be great, the cost is high. I think I could live with the roughly 500MB/s speed of a SATA3 SSD if this OWC can sustain it in JBOD mode. OWC rates the speed of the case as "up to 1556MB/s", which must be only in a RAID mode.
I have an older version of the Thunderbay 4 Mini (Thunderbolt 2), which I use with my Mac Mini Server Mid-2011 (Thunderbolt 1) in JBOD mode. It has worked flawlessly for several years. I have 4 SSDs installed, and it's blindingly fast compared to the internal disks. Here are the Blackmagic Disk Speed Test results with a 5GB file:

SSD - Samsung SSD 860 EVO 1TB
Write: 350 MB/s, Read: 380 MB/s
HD - WD (7200 rpm, SATA3)
Write: 53 MB/s, Read: 65 MB/s

Even compared to a modern HD, you'll love the Thunderbay with SSDs. I just wish I hadn't upgraded my Mac Mini Server Mid 2011 to internal SSDs (paid to have it done) and used the Thunderbay instead.
 


I just got a new 10TB hard drive (G-Technology) and will be certifying it with SoftRAID before it sees any significant use. That said, I just happened to fire up DriveDx. Should I be at all concerned that a drive with a current total Power On Time of 40 hours and a Power Cycle Count of 7 already has an Overall Health Rating/Overall Performance Rating of 93.4%?

For a drive that's practically brand-new, I must admit I was a bit shocked to see a number even that far off 100.
I’d definitely look at the underlying data (the “health indicators”), but I would not be too worried. Among other things, I’ve seen DriveDx highlight a higher-than-expected spin up time (the time it takes from applying power to the spindle motor to when it achieves stable rotational speed) as indicating an area of concern, when it seems that at least Seagate and Western Digital drives frequently spin up slowly on occasion even when new. (Technically it’s the drive manufacturers that define the expected values of these parameters, but it’s the software reading the SMART which aggregates them into an overall rating, and I suspect DriveDx gives more weight to these slight deviations than is really warranted.)

If an error rate or retry count indicator is showing problems on a new drive, I’d be more worried than about the spin-up time or others which are more of an “early warning” or “possibly slightly out of spec” flag.
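
If you want to look at those underlying indicators directly rather than the aggregated score, smartmontools will dump them. Here's a minimal sketch, assuming `smartctl` is installed (e.g. via Homebrew) and using a placeholder device node; external USB enclosures may also need an extra `-d` type option:

```python
# Minimal sketch: dump raw SMART attributes and flag the ones worth
# checking on a new drive. Assumes smartmontools is installed; the device
# node below is a placeholder -- substitute your own.
import subprocess

DEVICE = "/dev/disk2"   # hypothetical device node

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=False).stdout
print(out)

# Error/retry counters are the attributes that should worry you on a new
# drive; a slightly slow spin-up time usually isn't.
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
         "Offline_Uncorrectable", "UDMA_CRC_Error_Count")

for line in out.splitlines():
    if any(attr in line for attr in WATCH):
        print("check:", line.strip())
```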
 


But a UPS will certainly help, as most UPSes also address brownouts, spikes, etc., reducing the wear and tear on any power supply downstream from them. This is especially relevant in the countryside, where line power may sag and spike a lot as inductive loads come on and off (think AC compressors, cheap motors, and the like).
It's probably also worth noting that UPSes vary greatly in their ability to compensate for poor input power and in the quality of the power they produce. The cheapest UPS systems produce a square-wave output on battery, rather than the sine wave of line power, which is rough on some types of power supplies. I'd avoid these. Some systems pass line power through unchanged until the point at which the voltage drops enough to switch to battery power.

At home (in the countryside), I use UPS systems which actually condition the line power, generating a clean sine wave and compensating for fluctuations in voltage, rather than just passing it through. They are slightly more expensive, but worth it to me for the peace of mind.
 


That's true, and thank you for pointing that out. I should have been more specific with my ask, by specifying I'm interested in Thunderbolt 3-capable enclosures. I fear I'll be needing to replace my current seven-year-old Mac Pro with an iMac Pro in the near future and hope to future-proof my enclosure purchase.
It is a different way of framing the enclosure issue, but with a $2,000 budget and an iMac Pro in close proximity to each other, a Mac Pro may do. A Mac Pro plus some inexpensive four-M.2 SSD adapter cards comes in a bit over $2,000 - incrementally outside the price range, but it would 'uncork' the faster options in the future. However, you likely would lose some internal Apple boot SSD capacity in that swap, so the gap could blow significantly past $2k if you "even out" the other components besides 3rd-party storage.

The other "shift in focus" is to look for an external Thunderbolt 3 PCIe card enclosure instead of something with "drive bays". You could add an M.2 RAID card to that. It would likely be x8 (or higher) and saturate the Thunderbolt 3 link, but another enclosure in the future (and another Mac host system) could "uncork" that. (There appears to be high demand for macOS bootable M.2 RAID cards at the moment from the spike in new Mac Pro owners at the moment.)

As far as external Thunderbolt enclosures go, I think you should look for something that is currently capable and not necessarily long-term future-proof. There may be something new from the CES announcements, but many of the Thunderbolt 3 enclosures have the 2016-era 6000-series ("Alpine Ridge") controllers in them. The iMac Pro has it also. There is a newer 7000-series controller ("Titan Ridge", 2018 era) that should work better with future systems. Intel is about due (~2-2.5 year cycle) for a new Thunderbolt controller at some point this year. That may come too late to wait for, but it does suggest Alpine Ridge is solidly in 'older, more backward-looking' status at this point.

It would make sense for Apple to do a minor upgrade to the iMac Pro this spring. That would take care of the iMac Pro moving up on Thunderbolt controller, but Apple doesn't always do the sensible thing in terms of desktop "Pro" upgrade timing.

If looking for "maximum SSD speed" RAID 5, then I suspect a larger fraction of your budget will get wrapped up in the RAID controller cost than in capacity. There is a trade-off when you have a fixed budget and are looking for speed and capacity. Sitting on a single Thunderbolt controller will put a limiter on speed (bandwidth). Capacity costs are a major factor. It is better (faster) to have [files] on SSD than on hard drives – even if the SSD isn't operating at maximum speed, the relative speed is high. The 'ultimate' bandwidth SSDs often have a higher $/GB cost, so those may not be worth it if deploying on Thunderbolt v3. "Fast enough" and lower $/GB cost would get a much better capacity result. (RAID 5 tosses lots of data capacity at redundancy so "more affordable" capacity is likely a factor.)
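
To make that capacity/redundancy/cost trade-off concrete, here's a quick worked example; the drive count, size, and price below are placeholders for illustration, not quotes for any particular model:

```python
# Usable capacity and cost per usable TB for a few RAID layouts.
# Drive count, size, and price are illustrative placeholders, not quotes.

n_drives = 4
drive_tb = 2.0
drive_price = 250.0          # hypothetical $ per drive

layouts = {
    "RAID 0 (stripe)":  n_drives * drive_tb,        # all capacity, no redundancy
    "RAID 5 (parity)":  (n_drives - 1) * drive_tb,  # one drive's worth of parity
    "RAID 10 (mirror)": n_drives * drive_tb / 2,    # half the raw capacity
}

total_cost = n_drives * drive_price
for name, usable_tb in layouts.items():
    print(f"{name}: {usable_tb:.0f} TB usable, ${total_cost / usable_tb:.0f} per usable TB")
```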

SoftRAID has a Version 6 beta (I'm not in the beta) which may bring booting back. They'd have to get some cooperation from Apple to get into the firmware, as in the past (I'm not sure Apple allows that with today's more locked-down firmware). The descriptions say all of the capabilities are coming back with v6 - perhaps that means all the ones they have control over.
 


A perfectly reasonable question, but to pick nits, it's probably worth noting that Thunderbolt 3 is essentially a PCIe 3 x4 connection, and this isn't sufficient to support "the fastest SSD drives available".
It's fast enough to support virtually any existing single M.2 NVMe SSD blade at full speed, because these devices are also based on a PCIe 3 x4 connection. But anything faster - a high-performance PCIe datacenter SSD, or a striped (RAID 0) array of multiple M.2 blades on a card - will be performance-limited by Thunderbolt. These types of things would need to be plugged directly into a sufficiently fast/wide PCIe slot to hit full speed (and so aren't really worth the cost if your plan is to put them in a Thunderbolt 3 enclosure).
Note that [some] of the multiple-NVMe enclosures [e.g. OWC 4M2] only support one lane per blade, so you get x4 performance to the computer but only x1 per blade.

I didn't bother getting the fastest NVMe blades, as you'll never need that speed. Even so, NVMe over Thunderbolt is still faster than SATA.
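
A quick sanity check of what one lane per blade means in throughput terms, using the same approximate figures as the PCIe arithmetic earlier in the thread:

```python
# What "x1 per blade" means in practice (approximate figures).
lane_MBps = 985                 # ~PCIe 3.0 x1 after 128b/130b encoding
tb3_pcie_MBps = 2750            # ~22 Gbps PCIe budget over Thunderbolt 3

single_blade = lane_MBps        # each blade is capped here, regardless of
                                # its native (x4) speed
four_blades_raid0 = min(4 * lane_MBps, tb3_pcie_MBps)

print(f"one blade alone:    ~{single_blade} MB/s")
print(f"four blades RAID 0: ~{four_blades_raid0} MB/s (fills the x4 uplink)")
```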
 


A perfectly reasonable question, but to pick nits, it's probably worth noting that Thunderbolt 3 is essentially a PCIe 3 x4 connection, and this isn't sufficient to support "the fastest SSD drives available".
It's fast enough to support virtually any existing single M.2 NVMe SSD blade at full speed, because these devices are also based on a PCIe 3 x4 connection. But anything faster - a high-performance PCIe datacenter SSD, or a striped (RAID 0) array of multiple M.2 blades on a card - will be performance-limited by Thunderbolt. These types of things would need to be plugged directly into a sufficiently fast/wide PCIe slot to hit full speed (and so aren't really worth the cost if your plan is to put them in a Thunderbolt 3 enclosure).
Thank you very much, John W (and to Ric, who had tried to steer me to the same facts earlier). You've answered my question fully by pointing out there's no need for me to do the search for such an enclosure or drive in the first place.

When the time comes, I'll simply convert my existing Thunderbolt 3-enabled OWC ThunderBlade to JBOD, which will boot Catalina.
 



The takeaway (as it has been for several years) is that all things being equal, folk who want to host stuff at home should be buying HGST or Toshiba drives. SOHO systems are unlikely to feature as many parity checks / backups as Backblaze and, as such, cannot afford the "business as usual" 1-3% failure rate with Seagate products.

Also notable is the complete absence of WDC products in that data set, even though WDC now owns HGST. If you click around their web site, Seagate's 12TB products have apparently been particularly troublesome, so I'd avoid using those drives. Backblaze is apparently working with Seagate to replace their extant cohort.

So even with all the redundancies that Backblaze has, a 3+% failure rate is not acceptable. (Labor may have something to do with it, as HGST 12TB drives/brands literally require 8x fewer technician visits for drive replacement, never mind the other associated tasks like documenting, shipping, etc. All that maintenance does add up, no matter how cheap the initial drive.)
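
For reference, Backblaze computes its annualized failure rate (AFR) from drive-days and failure counts. Here's a rough sketch of that arithmetic; the fleet numbers below are made up for illustration, not figures from the report:

```python
# Backblaze-style annualized failure rate:
#   AFR = failures / (drive_days / 365) * 100
# The fleet numbers below are invented for illustration only.

def afr(failures: int, drive_days: int) -> float:
    """Annualized failure rate, in percent."""
    return failures / (drive_days / 365) * 100

fleet = {
    "hypothetical model A": (10, 1_200_000),    # (failures, drive-days)
    "hypothetical model B": (100, 1_200_000),
}

for model, (fails, days) in fleet.items():
    swaps_per_1000 = afr(fails, days) / 100 * 1000
    print(f"{model}: AFR ~{afr(fails, days):.2f}%, "
          f"~{swaps_per_1000:.0f} replacements per 1,000 drives per year")

# Even a few percentage points of AFR translate into many more drive swaps
# per year across a large fleet -- hence the labor argument above.
```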
 


I've found it hard to find the exact models in the Backblaze reports. I'm guessing that many models are superseded by the time of the reports.

Just checking Amazon, it looks like Western Digital is now branding Ultrastar drives as WD. I got no hit for HUH721212ALN604, instead getting HUH721212ALE604, and by vendors I'm unfamiliar with.
 


The takeaway (as it has been for several years) is that all things being equal, folk who want to host stuff at home should be buying HGST or Toshiba drives. SOHO systems are unlikely to feature as many parity checks / backups as Backblaze and, as such, cannot afford the "business as usual" 1-3% failure rate with Seagate products.

Also notable is the complete absence of WDC products in that data set, even though WDC now owns HGST. If you click around their web site, Seagate's 12TB products have apparently been particularly troublesome, so I'd avoid using those drives. Backblaze is apparently working with Seagate to replace their extant cohort.

So even with all the redundancies that Backblaze has, a 3+% failure rate is not acceptable. (Labor may have something to do with it, as HGST 12TB drives/brands literally require 8x fewer technician visits for drive replacement, never mind the other associated tasks like documenting, shipping, etc. All that maintenance does add up, no matter how cheap the initial drive.)
I agree with these conclusions, and when I had a choice, when I still worked with a medium-sized data center, I would try to purchase HGST drives for many-drives-per-shelf, many-shelves-per-rack applications.

But it should be pointed out that Backblaze's results are not strictly applicable to desktop, or small-enclosure (1 - 6 drives) applications. They utilize drives in high-density, custom designed shelves ("Pods") with up to 45 drives. And though those enclosures are designed to minimize vibrations, they're stacked in racks with lots of other enclosures, and the drives are desktop drives — that's the key to Backblaze's low-cost approach — and not designed for the higher acoustic noise (vibration), constantly-powered-on environment of a data center.

TL;DR: The same drives might have different relative failure rates in desktop systems/enclosures (that are kept clean) than in the Backblaze stats.
 



I've found it hard to find the exact models in the Backblaze reports. I'm guessing that many models are superseded by the time of the reports. Just checking Amazon, it looks like Western Digital is now branding Ultrastar drives as WD. I got no hit for HUH721212ALN604, instead getting HUH721212ALE604, and by vendors I'm unfamiliar with.
The "E" toward the end of the second part number represents a 512e ("Advanced Format," or "AF") 6 Gbit/s SATA interface that appears to the host system to have 512 byte sectors despite having larger (4096 byte) physical sectors; I believe the "N" is for drives with an actual 512 byte sector structure, though it might represent 4K "native" AF drives.
 


I've found it hard to find the exact models in the Backblaze reports. I'm guessing that many models are superseded by the time of the reports.

Just checking Amazon, it looks like Western Digital is now branding Ultrastar drives as WD. I got no hit for HUH721212ALN604, instead getting HUH721212ALE604, and by vendors I'm unfamiliar with.
The decoder ring: Ultrastar DC HC520 data sheet (PDF). See the end of the second page: “How to Read the Ultrastar Model Number.” The difference here seems to be “N6 = 4Kn SATA 6Gb/s, E6 = 512e SATA 6Gb/s.” Web-searching 4Kn vs. 512e is left as an exercise for the reader. (Or see Joe Gurman's reply that was posted as I typed.)
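
If you ever need to check which format a particular drive presents, the logical vs. physical sector sizes tell you directly. Here's a minimal, Linux-oriented sketch (the device name is a placeholder; on macOS, `diskutil info` reports at least the logical block size):

```python
# Report logical vs. physical sector size for a drive (Linux sysfs sketch;
# the device name is a placeholder). 512e drives report 512 logical / 4096
# physical bytes; 4Kn drives report 4096 / 4096.
from pathlib import Path

DEV = "sda"   # hypothetical device name

queue = Path(f"/sys/block/{DEV}/queue")
logical = int((queue / "logical_block_size").read_text())
physical = int((queue / "physical_block_size").read_text())

if logical == 512 and physical == 4096:
    fmt = "512e (Advanced Format with 512-byte emulation)"
elif logical == physical == 4096:
    fmt = "4Kn (4K native)"
elif logical == physical == 512:
    fmt = "512n (native 512-byte sectors)"
else:
    fmt = "unusual geometry"

print(f"/dev/{DEV}: logical={logical} physical={physical} -> {fmt}")
```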
 


But it should be pointed out that Backblaze's results are not strictly applicable to desktop, or small-enclosure (1 - 6 drives) applications. They utilize drives in high-density, custom designed shelves ("Pods") with up to 45 drives. And though those enclosures are designed to minimize vibrations, they're stacked in racks with lots of other enclosures, and the drives are desktop drives — that's the key to Backblaze's low-cost approach — and not designed for the higher acoustic noise (vibration), constantly-powered-on environment of a data center.
While I agree in principle, I'll also note that many consumer-grade desktop enclosures (especially external drive units from OEMs like Seagate, WDC, etc.) do a fabulous job of roasting enclosed hard drives in their own juices. After a multi-hour backup (which 12TB capacity basically ensures), I'd wager the drives in there are significantly hotter than the ones in the storage pods at Backblaze.

My spinners tend to fail in the backup enclosures, not the main server, even though they get far fewer hours of use per year than the server drives. I presume that has to do with turning them off and moving the enclosure off-site.

Inside the server, I keep my spinners below 31°C year-round; they rest on silicone grommets inside a large metal cage with plenty of air flow. The solid-state stuff, like the Optane P4801X or the LSI 2116 HBA chip, does tend to run hotter.
 


While I agree in principle, I'll also note that many consumer-grade desktop enclosures (especially external drive units from OEMs like Seagate, WDC, etc.) do a fabulous job of roasting enclosed hard drives in their own juices. After a multi-hour backup (which 12TB capacity basically ensures), I'd wager the drives in there are significantly hotter than the ones in the storage pods at Backblaze.

My spinners tend to fail in the backup enclosures, not the main server, even though they get far fewer hours of use per year than the server drives. I presume that has to do with turning them off and moving the enclosure off-site.

Inside the server, I keep my spinners below 31°C year-round; they rest on silicone grommets inside a large metal cage with plenty of air flow. The solid-state stuff, like the Optane P4801X or the LSI 2116 HBA chip, does tend to run hotter.
The thermal issues are ones Backblaze's Pods are unlikely to have, given the number of high-capacity fans in each one and the fact that they're in a server-room environment (though I don't know what temperature is maintained in them).
 


In the era of the 1.5TB Seagate drives, my favorite storage system accessory was a 140 mm desk fan used to cool drive enclosures and bare drives in 'toasters'. Forgetting the fan meant that any backup taking more than about half an hour would simply stop.

Even now I routinely use a fan for any first full Time Machine or clone backup for drives in a 'toaster'. The latter is especially useful for beta test systems which may be backed up/tested using older drives previously removed from service but still showing no SMART errors.
 


'Way back' in 2014, Backblaze ran an analysis on their data set to compare the longevity of drives to the temperature they were operating at. The first-order conclusion was that temperature didn't matter, at least for most drives. However, the temperature range was only 17-31°C, which suggests a heavily cooled data center, with the colder drives at the front of the storage pod and the hotter drives closer to the rear.

Here is a link to Storage Pod 6.0 with good pictures of the 60-drive 4U assembly. Note that the blowers pull air from the front of the unit towards the back and expel it once it's been pulled over the motherboard. There are no fans in the front of the unit and all air has to pass multiple rows of vertically-oriented hard drives. What is notable about this design is its simplicity and how it ensures that the maximum surface area of every drive is exposed to an even air flow.

There is a huge backplane at the bottom of that unit, with each drive being pulled by gravity into its SATA connector. To access a drive, the pod is pulled out of the rack (ball bearing slides help!), the lid is opened, and the offending drive is pulled and replaced. To my limited knowledge, no SOHO server (8-16 drive) cases offer a similar, simple design. Most SOHO server cases are designed around "hot swap" drive holders that allow the hard drives to be pulled and replaced quickly from the front of the unit - even though few of them ever fail.

Hot-swap designs typically feature much more constricted air spaces between drives to accommodate the mechanisms, guides, and so on. That in turn leads to a greater static pressure drop, necessitating stronger (and usually louder) fans to get good air flow. I have yet to find an 8-12-drive SOHO server with the simplicity of the Backblaze pod. Lian Li models like the Q26 come close, but the hard drive holders in there are not quite ideal for good air flow over the drives, and the case doesn't allow for a Flex-ATX motherboard.

I have been trying to keep my drives happy at below 31°C, which is the upper end of the Backblaze temperature range. Crummy air flow designs in cases other than the one I use now have driven my drives past 45°C during multi-hour scrubs. Now imagine what the drive temperature would be inside a typical external plastic case without forced cooling.
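
For anyone who wants to watch a similar temperature target without a dedicated monitoring app, here's a rough sketch using smartmontools. It assumes `smartctl` is installed; the device nodes are placeholders, and the column parsing is a best-effort assumption about the `smartctl -A` layout, which varies by drive:

```python
# Rough drive-temperature check via smartmontools. Assumes smartctl is
# installed; the device nodes below are placeholders, and the RAW_VALUE
# column position is an assumption about smartctl -A output.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # hypothetical device nodes
LIMIT_C = 31                         # the target discussed above

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True, check=False).stdout
    temps = [int(line.split()[9])            # RAW_VALUE column
             for line in out.splitlines()
             if "Temperature_Celsius" in line or "Airflow_Temperature_Cel" in line]
    if not temps:
        print(f"{dev}: no temperature attribute found")
        continue
    status = "OK" if temps[0] <= LIMIT_C else "warm -- check airflow"
    print(f"{dev}: {temps[0]} C ({status})")
```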

My hard drive toaster from OWC may have a known-defective design (it doesn't allow formatting via eSATA), but USB 3 is good enough, and each drive only sees it once – it's used to zero obsolete spinners before they are donated to a local medical center for reuse in research.
 

