What to do with spare SSDs

I have a TerraMaster D5-300 connected to a 2016 Mac Mini via Thunderbolt (dongle town), running Unraid. In the first two bays I have two Samsung 512GB SSDs, and the last three have 4TB WD drives. The SSDs are configured as a ‘pre-cache’ for the main array. I use the onboard 1Gbit Ethernet connection plus a second one via Thunderbolt, configured as LACP.

The idea is that writes hit the SSDs first and then the array syncs them out to the standard HDDs, and commonly used files get pre-cached on the SSDs for faster access.

Via LACP I can get 210-230 MB/s fairly easily. The array is used for Lightroom, Plex, and backups. For anything data-rate intensive, I write to the local SSD and move it off to the array when it’s done.

1 Like

As it so happens, I make SSDs for a living.

For a single-user environment, network-attached SSDs are a waste. Network attach works well when there’s a high duty-cycle workload; single-user workloads are mostly idle time. The latency over the network will wipe out most of the gains you would expect to see from an SSD. And even if you’re running 1Gb network connections, that’s only around 100 MB/s, which is HDD speed. A single SSD is going to be 5x faster than that, minimum. Network storage for single-user workloads is still best implemented with HDDs (and you can buy a 10TB HDD for about $100).
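To put rough numbers on that, here’s a back-of-the-envelope sketch in Python (the ~10% overhead and the drive speeds are assumptions for illustration, not measurements):

```python
# Rough throughput comparison: nominal link speed minus ~10% protocol overhead,
# next to ballpark sequential speeds for a single HDD and a single SATA SSD.

def usable_mb_per_s(link_gbps, overhead=0.10):
    """Convert a nominal link speed (Gbit/s) to usable MB/s after overhead."""
    return link_gbps * 1000 / 8 * (1 - overhead)

print(f"1 GbE : ~{usable_mb_per_s(1):.0f} MB/s")    # ~112 MB/s on the wire
print(f"10 GbE: ~{usable_mb_per_s(10):.0f} MB/s")   # ~1125 MB/s
print("HDD     : ~150-250 MB/s sequential (assumed)")
print("SATA SSD: ~500-550 MB/s sequential (assumed)")
```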

You want a direct-attach solution for your SSDs.

Flash memory is a consumable medium; it wears out with use. SSDs have a physical capacity which is often greater than the logical capacity. The extra memory is called “over-provisioning” and it’s used to improve write performance and increase endurance. For example, a drive with 256GB of raw memory that reports 240GB of logical capacity would be said to be “7% over-provisioned”. The “256GB” logical capacity point means these drives don’t have a lot of over-provisioning, which means they’re not high-endurance drives. It also means their write performance probably won’t be that stellar. And if they’re year-old, low-over-provisioned drives, then depending on how they were used they could have an appreciable amount of wear (meaning their remaining life is going to be shorter). If you poke around on the Internet, you can probably find a tool from the SSD manufacturer that you can use to see how worn out the drives are (it’ll be expressed as a percent).
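If it helps, the over-provisioning arithmetic as a tiny Python sketch (the capacities and the rated-TBW figure are example values, not specs for any particular drive):

```python
# Over-provisioning: the extra raw flash expressed as a fraction of the
# logical (user-visible) capacity. Example values only.

def over_provisioning_pct(raw_gb, logical_gb):
    return (raw_gb - logical_gb) / logical_gb * 100

print(f"{over_provisioning_pct(256, 240):.1f}%")   # ~6.7%, the "7%" class above
print(f"{over_provisioning_pct(512, 480):.1f}%")   # same ratio at a 512GB raw point

# Rough remaining write budget, given a rated TBW and the "percent life used"
# figure a vendor tool reports. Both inputs here are made up for illustration.
def remaining_tbw(rated_tbw, percent_used):
    return rated_tbw * (100 - percent_used) / 100

print(f"{remaining_tbw(150, 20):.0f} TB of writes left")   # 150 TBW drive, 20% used
```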

Assuming they all have about the same life-used percentage, I would take the drives and RAID them. RAID-5 if you can find a controller that supports it. An array of 7 drives would give you 1.5TB, tolerate a single drive failure, and deliver read and write performance that scales close to linearly with drive count (meaning it’ll be about 6 times faster than a single drive). Keep one drive as a spare. This will spread the write activity uniformly across the drives, which will also extend the endurance: if each drive in the array has 1 year of life left, 6 drives would give you 6 years at the same level of writing.

If you can’t do RAID-5, then shoot for RAID 1+0 (a mirrored 4-drive stripe set); it’ll give you 4x performance (if the controller is smart enough, it could actually give you 8x performance on reads) and 1TB total capacity with redundancy. Any external box with Thunderbolt should be able to keep up with the speed of the SSDs. A lot of them do RAID 0+1.
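The capacity and throughput arithmetic from the last two paragraphs, as a quick sketch (256GB drives and ~500 MB/s per drive are assumptions; real scaling is close to linear, not exactly linear):

```python
# RAID-5: one drive's worth of capacity goes to parity; reads/writes stripe
# across the remaining drives. RAID-10: half the drives mirror the other half.

def raid5(n, size_gb, drive_mb_s):
    usable_gb = (n - 1) * size_gb
    throughput = (n - 1) * drive_mb_s     # roughly linear with data-drive count
    return usable_gb, throughput

def raid10(n, size_gb, drive_mb_s):
    usable_gb = (n // 2) * size_gb
    write = (n // 2) * drive_mb_s         # writes scale with the stripe width
    read = n * drive_mb_s                 # reads *can* hit both mirror copies
    return usable_gb, write, read

print(raid5(7, 256, 500))    # (1536, 3000): ~1.5TB, ~6x a single drive
print(raid10(8, 256, 500))   # (1024, 2000, 4000): 1TB, 4x writes, up to 8x reads
```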

You want redundancy. SSDs are a heck of a lot more reliable than HDDs, but the more of them (of anything, for that matter) you are using, the greater the odds that one will fail. RAID is the best way to protect yourself from the inevitable. When a drive in a RAID fails, the array continues to function with no data loss, but in a “degraded” state (where the loss of an additional drive will result in data loss). Once you put in a new drive to replace the failure, the array becomes fault tolerant again. Theoretically, as long as you’re there to replace drives when they fail, the data on a RAID array will last forever, no matter how many drives fail (as long as they only fail one at a time).
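A toy way to see why the odds stack up as you add drives (the 1% annualized failure rate is purely an assumption for illustration):

```python
# Probability that at least one of n independent drives fails in a year,
# given an assumed annualized failure rate (AFR).

def p_any_failure(afr, n):
    return 1 - (1 - afr) ** n

for n in (1, 4, 8):
    print(f"{n} drive(s): {p_any_failure(0.01, n):.1%} chance of a failure per year")
# 1.0%, 3.9%, 7.7% -- small, but it grows with every drive you add
```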

Are you sure the drives are SATA and not NVMe? For one-year-old systems, I’d expect M.2 NVMe.

My home system has 10x SSDs: a 3.2TB RAID-5 (3x 1.6TB 12G SAS drives), a 1.6TB RAID-5 (5x 400GB 6G SATA drives), and two stand-alone 400GB SATA drives that I use as scratch disks. The RAIDs are handled by an LSI Logic 9260 PCIe Gen3 controller. The motherboard is a Sandy Bridge and it does a decent job of handling the stand-alone drives. The system is actually getting a bit “long in the tooth”; I built it about 6 years ago. But it still delivers quite respectable transfer rates to the RAIDs, up in the 1.5 GB/s range. Every time I think about upgrading, I ask myself “why?” and then I don’t. I could build something faster, but it’d only matter to the benchmarks; I wouldn’t notice…

Unfortunately, I can’t give you a lot of advice about Mac. But I can tell you almost anything you want to know about SSDs and storage arrays.

10 Likes

What a wealth of information! Thank you! Quite a setup you have too. I can confirm that they are SATA (these drives, actually). Any thoughts on them? Our IT professional installed them as boot drives in our existing towers last year. He’s upgrading them to 512GB because some of my office mates don’t understand the difference between the SSD boot drive and the 1TB storage drive.

It seems like it would be best to buy an enclosure like the one @karaelena is using and run it as a DAS in RAID 5 / 1+0.

Yes. Thoughts. Kingston is OK, but they’re not in the same league as Intel, Micron, Western Digital, or Toshiba (now “Kioxia”), which are companies with their own captive NAND supplies. These Kingston drives use a controller from Phison, which is well respected. But they are boot drives. Even with their 5-year warranty, they’re not designed for heavier-duty workloads; they’re really designed for a write-infrequently, read-often workload. So knowing that, it’s even more important that you RAID the drives: they’re at the lower end of the SSD reliability spectrum, designed to be inexpensive first, reliable second. Having said that, they’re still going to be more reliable than an HDD. And of course they’ll perform quite a bit faster.

Yes, a search for USB 3.1 or Thunderbolt DAS RAID enclosure will yield many results.

Note that USB 3.0 is only good for about 500 MB/s, whereas 3.1, like Thunderbolt, will go to 1 GB/s.
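For a feel of what those ceilings mean in practice, a quick sketch (the throughput figures are just the rough ones above, and the 100GB transfer size is arbitrary):

```python
# Time to move a large transfer at the rough interface ceilings quoted above.

INTERFACE_MB_S = {
    "USB 3.0 (5 Gbps)": 500,
    "USB 3.1 Gen 2 (10 Gbps)": 1000,
    "Thunderbolt (enclosure-limited)": 1000,
}

file_gb = 100  # e.g. a chunk of a Lightroom library
for name, mb_s in INTERFACE_MB_S.items():
    minutes = file_gb * 1000 / mb_s / 60
    print(f"{name}: ~{minutes:.1f} min for {file_gb} GB")
```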

4 Likes

Thanks for the advice! I’ll let everyone know how it goes! I need to get the drives before I do anything, so hopefully that happens soon.

Great thread. I’ve been looking around trying to build out an affordable NAS that I can use Lightroom off of (without ripping my hair out). It seems “affordable” doesn’t really apply, though.

I really need to upgrade the whole network. I’m using a MiFi for access at this point, which has a USB share option, but you lose charging capability. 🙂 An embedded 4G modem/router would be the way to go, but that’s even more money. So I’ve just set up an AC1900 as a bridge and have an old single-disk NAS connected through it as a temporary solution for an accessible file share.

I’m probably going to get hate for this, but I have a Synology NAS that I bought off eBay and it is a dream. Easy to set up, expandable, Time Machine backup, I can remote log in, remote access files, and I’m running a Plex server. Super easy to have different users and different permissions per user. It’s set up in RAID 5 and cloud backups go to Backblaze. I couldn’t recommend it enough. I think I would be able to use Lightroom off it if my Mac was hardwired.

ALL THAT SAID, you can set up a pretty simple network share with an old PC that will give you decent performance.

1 Like

I run a Unix server with a RAID 5 on spinning disks (enterprise-grade drives rated for RAID). All on switched gigabit. Even giant video files move quickly. For WiFi I have a mesh network that is pretty fast.

That is likely to be a shingled (SMR) drive, which I would recommend avoiding like the plague.

1 Like

I did HDD development for a couple of decades before switching to SSD. There’s nothing inherently wrong with SMR. The storage industry is always pushing for higher densities and lower costs (thank you Internet social networks, and HD and now 4K video). SMR is going to be all but inescapable. If the minimum capacities don’t mandate the higher recording density of SMR, they’ll start making the disk platters smaller to take out a few cents of aluminum, and still go to SMR.

1 Like

Where does one find a 10TB drive for $100?

Sorry, I may have “overstated my case” a bit with the 10TB number. But you can get 4TB for less than $100. And HDD densities double every 1.5-2 years, and when they do, the cost/bit usually drops by nearly half. So by late 2020, I’d expect to start to see 8TB or greater for the cost of current-generation 4TB drives.

That being said, when I started designing SSDs a few years back, we were getting almost $1/GB for our drives. Now, we’re looking at 10 cents. So while HDD prices might decline by 50% from one generation to the next, SSD prices have been falling even faster. You can get 1TB SSDs for $100 now. It wasn’t so long ago that one of those was good for $10K.
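If you want to play with the projection yourself, here is that rule of thumb as a sketch (the starting $/TB figures and the halving periods are assumptions, not data):

```python
# "Cost/bit roughly halves every density doubling (about every 2 years)"
# applied as a simple projection from assumed starting prices.

def projected_price_per_tb(price_now, years, halving_period_years=2.0):
    return price_now * 0.5 ** (years / halving_period_years)

print(f"HDD: ~${projected_price_per_tb(25, 2):.0f}/TB in two years")        # ~$25/TB now, 2-yr halving
print(f"SSD: ~${projected_price_per_tb(100, 2, 1.0):.0f}/TB in two years")  # ~$100/TB now, assumed faster 1-yr halving
```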

This topic was automatically closed 32 days after the last reply. New replies are no longer allowed.