sooo I already have a media server (NAS) set up. It's a Synology 415+ with 5+ TB of space. AND I'm going to have to upgrade that soon - running out of space… I have lots of data. The SSDs would be overkill for a media server, but it might be interesting to use them in tandem. I wonder if I could expand the storage and make a super fast portion for commonly used files…
I have a very similar setup and a very similar amount of data. If you experiment with segmentation for commonly used files I’d love to hear how it goes.
There’s no economical way to build an external device that would emulate DAS, and power/noise of a tower would be annoying (to me).
Does your Mac have native 10Gb or eSATA? Adding another high-speed NAS is really the only practical option I can think of.
Congrats on your windfall, however.
That’s kind of what I’m realizing…
I can add eSATA with a Thunderbolt 3-to-eSATA adapter; no 10Gb networking. I do have a 4-drive DAS with eSATA kicking around. That could be interesting. Do you know of any way to pair two 2.5" drives in a 3.5" bay?
Tons of options there. Amazon shows 630 results. As there are no moving parts, however, I’ve just laid them flat in the bays. If I were going to move the device, I’d probably just use velcro rather than buying expensive adapters.
hmm, I tried an Amazon search and only found this one. Since I have 8 drives but the DAS only has 4 spaces for 3.5" drives, they would need to combine two drives onto one SATA connection. The one linked above is a little pricey to buy 4 of. I could just buy two of these!
definitely a possibility. Is it possible to take advantage of the performance of the SSDs with something like that? I have reasonable transfer speeds with my NAS now, but it's too slow to work from or pull things back and forth. 10Gb networking would solve some of that bottleneck (also my iMac is on wifi right now - working on running a Cat 6 cable to it)
The short answer is no, but what are you doing that requires (or simply could take advantage of) that kind of performance anyway?
Processing TB of 4k video is about the only common use case for “home-gamers”…
I do a lot of photo and video editing, most of it 2K rendering out to 1080p. My computer only has 512GB of internal storage, so usually what happens is I have to offload everything to the NAS, which for 10GB takes about 30 mins (wifi is the bottleneck there). And when I go to work on a project, I need to prepare by pulling what I need well before I need it. So having some kind of DAS with SSD performance would be a game changer. I have one of the 256GB drives in a USB3 enclosure and it has been a dream to work with. If I had my way I would buy a 2TB SSD DAS, but I can't spend that much on it (this is all hobby related). So inheriting 2TB of SSDs is awesome, but having it as 8 drives is somewhat of a conundrum.
Sounds like it's worth investing in a Thunderbolt- or 10Gb-Ethernet-attached NAS device then. Won't be cheap, but it's about the only practical option. USB3-connected won't really give you much better performance than the single drive you already have.
I’ve used a USB3 SSD for similar video work with my MBP, so I get where you are coming from.
I figured that's where I'd end up. When I bought the iMac I upgraded to a 512GB SSD because I didn't want the spinning garbage Fusion Drive. I now wish I had upgraded to a 1TB or 2TB SSD - it would have been worth every penny. I'm not at all set up for 10Gb networking, so I guess I'll stick with adding an Ethernet run to the iMac for now. That should help a lot.
Depending on use case, the SSD may be very sub-optimal (which is why I went with the tiered storage [i.e. Fusion Drive]), since I was doing a lot of read/write to the disk (database), which is very unhealthy for an SSD. The lifespan of an SSD is surprisingly poor, particularly in that use case. I was also surprised at how hot our SSDs got in an enterprise RAID use case - hot enough that I actually went back to 15,000 RPM spinning storage, as it ran cooler.
It's not so much a performance issue as an accessibility issue. There's no easy way to open the iMac if the drive fails, and I've had enough spinning drives fail that there was no way I would want one in a $4000 sealed box (although, in my limited experience, I have never had an issue with an SSD). That said, your argument makes it even more apparent why I need more DAS storage that I can work off of, saving all those read/write cycles on my main drive.
One more option I just thought of - if you’re happy with the performance of USB connected drives, why not just use them individually as needed? Find a USB-SATA adapter that meets your needs, and drop drives in as needed.
I’ve been using spare drives connected to bare (i.e. removed from enclosure) adapters for years. I keep my (~400GB) iPhoto/Photos library on one, my primary backup on another, and I just plug them in as needed. The adapter may eventually get damaged but (again) I’ve been doing this for many years. I used to have a piece of velcro on my MBP lid cover, but I do so little video editing these days, I just lay it on the surface behind the machine when I need to connect.
I've replaced drives in 27" iMacs numerous times. It wasn't for the faint of heart… The stupid vertical sync cable at the top is sooooo easy to break… And I had all the special tools (the screen sucker, etc.)
It really is unfortunate that it's not more easily accessible. It's just not worth the risk.
@eflyguy The only issue I have with this is that they are 256GB drives. I feel like I would be swapping between all 8 drives all the time. That said, maybe a couple of these 4-drive things will do the trick
I’ve replaced/upgraded the HD in my ex’s iMac twice, it’s easy enough but not a task for the average consumer. I was recently happy to discover I could replace the failing battery in my MBP as well.
Unfortunately, they keep "improving" the designs to make it more challenging, even for more technically competent owners.
I have a TerraMaster D5-300 connected to a 2016 Mac Mini via Thunderbolt (Dongle Town) running Unraid. In the first two bays I have two Samsung 512GB SSDs, and the last three have 4TB WD drives. The SSDs are configured as a "pre-cache" for the main array. I use the onboard 1Gbit Ethernet connection plus a second one via Thunderbolt, configured as LACP.
The idea is that when you write to it, you hit the SSD first, and then the array syncs out to the standard HDDs. It also pre-caches commonly used files on the SSD for faster access.
Via the LACP I can get 210-230 MB/s fairly easily. The array is used for Lightroom, Plex, and backups. For anything data-rate intensive, I write to the local SSD and move it off to the array when done.
As it so happens, I make SSDs for a living.
For a single-user environment, network-attached SSDs are a waste. Network attach works well when there's a high duty-cycle workload; single-user workloads are mostly idle time. The latency over the network will wipe out most of the gains you would expect to see from an SSD. And even if you're running 1Gb network connections, that's only around 100MB/s - HDD speed. A single SSD is going to be 5x faster than that, minimum. Network storage for single-user workloads is still best implemented with HDDs (and you can buy a 10TB HDD for about $100).
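The bandwidth gap above can be sketched with rough, assumed numbers (125 MB/s is the theoretical ceiling of 1Gb Ethernet; ~550 MB/s is a typical SATA SSD sequential read - your drives may differ):

```python
# Illustrative numbers only - real throughput varies with hardware and workload.
GBE_THEORETICAL_MB_S = 1000 / 8   # 1Gb Ethernet: 125 MB/s before protocol overhead
GBE_REALISTIC_MB_S = 100          # ~100 MB/s is a common real-world ceiling
SATA_SSD_MB_S = 550               # typical SATA 6Gb/s SSD sequential read
HDD_MB_S = 150                    # typical 7200 RPM HDD sequential read

speedup = SATA_SSD_MB_S / GBE_REALISTIC_MB_S
print(f"A single SATA SSD is ~{speedup:.1f}x faster than a saturated 1GbE link")
```

So even a perfectly tuned 1GbE NAS caps the SSDs at roughly HDD speed, which is the point being made here.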
You want a direct-attach solution for your SSDs.
Flash memory is a consumable medium; it wears out with use. SSDs have a physical capacity which is often greater than the logical capacity. The extra memory is called "over provisioning", and it's used to improve write performance and increase endurance. For example, a drive with 256GB of raw memory that reports 240GB of logical capacity would be said to be "7% over provisioned".

The fact that your drives report 256GB of logical capacity means they don't have a lot of over provisioning, which means they're not high-endurance drives. It also means their write performance probably won't be that stellar. And if they're year-old, low-over-provisioned drives, then depending on how they were used they could have an appreciable amount of wear (meaning their remaining life is going to be shorter). If you poke around on the Internet, you can probably find a tool from the SSD manufacturer that will show you how worn out the drives are (it'll be expressed as a percent).
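The over-provisioning arithmetic works out like this (a quick sketch; the 256GB-raw/240GB-logical figures are from the example above):

```python
def over_provisioning_pct(raw_gb: float, logical_gb: float) -> float:
    """Extra physical capacity, expressed as a percent of logical capacity."""
    return (raw_gb - logical_gb) / logical_gb * 100

# 256GB of raw flash exposing 240GB logical -> ~7% over provisioned
print(f"{over_provisioning_pct(256, 240):.1f}%")   # 6.7%

# 256GB of raw flash exposing the full 256GB -> no extra over provisioning
print(f"{over_provisioning_pct(256, 256):.1f}%")   # 0.0%
```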
Assuming they all have about the same life-used percentage, I would take the drives and RAID them - RAID-5, if you can find a controller that supports it. An array of 7 drives would give you 1.5TB, tolerate a single drive failure, and deliver read/write performance that scales close to linearly with drive count (meaning it'll be about 6 times faster than a single drive). Keep one drive as a spare. This will spread the write activity uniformly across the drives, which also extends endurance: if each drive in the array has 1 year of life left, 6 drives would give you 6 years at the same level of writing.
If you can't do RAID-5, then shoot for RAID 1+0 (a mirrored 4-drive stripe set). It'll give you 4x performance (if the controller is smart enough, it could actually give you 8x performance on reads) and 1TB total capacity with redundancy. Any external box with Thunderbolt should be able to keep up with the speed of the SSDs, and a lot of them do RAID 0+1.
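The capacity figures for both layouts can be checked with a quick sketch (assuming 256GB drives, as in this thread):

```python
def raid5_usable_gb(n_drives: int, drive_gb: int) -> int:
    # One drive's worth of capacity is consumed by distributed parity.
    return (n_drives - 1) * drive_gb

def raid10_usable_gb(n_drives: int, drive_gb: int) -> int:
    # Half the drives mirror the other half (n_drives must be even).
    return (n_drives // 2) * drive_gb

print(raid5_usable_gb(7, 256))    # 1536 GB, i.e. ~1.5TB, tolerates one failure
print(raid10_usable_gb(8, 256))   # 1024 GB, i.e. 1TB, one failure per mirror pair
```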
You want redundancy. SSDs are a heck of a lot more reliable than HDDs, but the more of them (anything, for that matter) you are using, the greater the odds one will fail. RAID is the best way to protect yourself from the inevitable. When a drive in a RAID fails, the array continues to function with no data loss, but in a "degraded" state (where the loss of an additional drive will result in data loss). Once you put in a new drive to replace the failure, the array becomes fault tolerant again. Theoretically, as long as you're there to replace drives when they fail, the data on a RAID array will last forever, no matter how many drives fail (as long as they only fail one at a time).
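The "more drives, higher odds of a failure" point is easy to quantify. A sketch with an assumed 1% per-drive annual failure rate (real AFRs vary by model, age, and workload), treating drive failures as independent:

```python
def p_any_failure(n_drives: int, annual_failure_rate: float = 0.01) -> float:
    # Probability that at least one of n independent drives fails in a year.
    return 1 - (1 - annual_failure_rate) ** n_drives

print(f"1 drive:  {p_any_failure(1):.1%}")   # 1.0%
print(f"8 drives: {p_any_failure(8):.1%}")   # 7.7%
```

So an 8-drive pool is nearly 8x as likely to see a failure in a given year as a single drive, which is exactly why the redundancy matters.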
Are you sure the drives are SATA and not NVMe? For one-year-old systems, I'd expect M.2 NVMe.
My home system has 10 SSDs: a 3.2TB RAID-5 (3x 1.6TB 12G SAS drives), a 1.6TB RAID-5 (5x 400GB 6G SATA drives), and two stand-alone 400GB SATA drives that I use as scratch disks. The RAIDs are handled by an LSI Logic 9260 PCIe Gen3 controller. The motherboard is a Sandy Bridge, and it does a decent job of handling the stand-alone drives. The system is actually getting a bit "long in the tooth" - I built it about 6 years ago. But it still delivers quite respectable transfer rates to the RAIDs, up in the 1.5GB/s range. Every time I think about upgrading, I ask myself "why?" and then I don't. I could build something faster, but it'd only matter to the benchmarks; I wouldn't notice…
Unfortunately, I can’t give you a lot of advice about Mac. But I can tell you almost anything you want to know about SSDs and storage arrays.