
SSD, Fusion and flash drives

Add this one: I delegated ordering two new Samsung NVMe 1TB SSDs as cache drives for our new Synology to my co-worker, who handles the business Amazon account. I thought I'd been clear (NVMe, Samsung, 1TB), but Amazon search for those phrases tossed her a set of SATA M.2 SSDs, which don't meet the Synology specification. Fortunately, I was able to "blame" Amazon search results for the incorrect purchase, and return shipping was free.
I've learned that when specifying something for someone else to order, send them a link directly to the product. Don't know if that would have worked in your case, but I've found it has eliminated 98% of search/selection errors for me. I still get people clicking on other product links that appear on the product page on most sites, but far less often than if I just specify an item search. Cheers.
 


Ric Ford

MacInTouch
Here's a 1TB Samsung 970 EVO NVMe SSD in a 10Gbps Sabrent USB-C NVMe enclosure, connected to the 2017 iMac 5K's USB-C port... I can't see SMART data for this enclosure, either with macOS 10.12 Sierra or macOS 10.14 Mojave (I tried DriveDX, Disk Sensei, SoftRAID and SMART Utility). Trim doesn't seem to be enabled, either.
I cannot see SMART data when I move the SSD to a Fledging Shell enclosure, either. This seems to be an issue with the enclosure/controller/driver, because Samsung 970 EVO specs list both SMART and Trim as being supported.
 


I cannot see SMART data when I move the SSD to a Fledging Shell enclosure, either. This seems to be an issue with the enclosure/controller/driver, because Samsung 970 EVO specs list both SMART and Trim as being supported.
Apparently, only a subset of a storage device's parameters are available over USB. A few months ago, I moved four spinning hard drives from a USB enclosure to a Thunder3 QuadX Thunderbolt 3 enclosure. Although it's "aware" of the existence of the Thunderbolt bus, my Mac now talks (more or less) directly to the hard drives' SATA interface. One of the benefits of this is that APFS works absolutely wonderfully. Over USB, APFS was a masochistic nightmare. Besides that, the raw speed over Thunderbolt 3 is just amazing and, of course, SMART works just fine.

The Thunder3 QuadX has a second Thunderbolt 3 port. Both are powered at 20W. I plugged a Fledging NVMe enclosure holding a 2TB Sabrent SSD into the second port. So now, wow! 32 TB of spinners and 2 TB of SSD (with TRIM) over one bus. It's like having a new machine. And it just works!

Maybe not exactly correct, but I look at it as connecting to individual devices over a Thunderbolt bus, whereas with USB you are connecting to generic "USB storage". USB is definitely cheaper and has its place, but it's nothing like Thunderbolt, which is billed as extending the PCIe bus, a claim that appears true to me.
 


Ric Ford

MacInTouch
Apparently, only a subset of a storage device's parameters are available over USB....
While that may be true, I'm used to seeing full SMART data over USB (with the SAT SMART driver installed and SATA devices in the USB enclosures). Something's different and broken with the USB-NVMe enclosures/controller/software.

Thunderbolt 3 has lots of advantages over USB, especially in performance, but it's awfully expensive by comparison, and there have been reliability problems with it, too, as well as compatibility problems with older systems (e.g. requiring a powered dock and adapter).

If you have the money and compatible ports, and portability isn't an issue, and you aren't experiencing data integrity issues, then Thunderbolt 3 should be great. :-)
 


While that may be true, I'm used to seeing full SMART data over USB (with the SAT SMART driver installed and SATA devices in the USB enclosures). Something's different and broken with the USB-NVMe enclosures/controller/software.
I'd never heard of SAT SMART and had to search for it. Pretty interesting and might be useful on an old Snow Leopard machine I keep running. Thanks for that. The GitHub development site does point to a list of compatible/incompatible enclosures (www.smartmontools.org), which doesn't claim to be exhaustive. I see four Sabrent and no Fledging enclosures there. None appear to be the ones you are talking about, so maybe yours are just incompatible, not broken.
 


... Thunderbolt 3 has lots of advantages over USB, especially in performance, but it's awfully expensive by comparison, and there have been reliability problems with it, too, as well as compatibility problems with older systems (e.g. requiring a powered dock and adapter). If you have the money and compatible ports, and portability isn't an issue, and you aren't experiencing data integrity issues, then Thunderbolt 3 should be great. :-)
Oh, it certainly is great. I recently purchased a 1TB NVMe SSD in a Thunderbolt 3 enclosure from macsales.com. I connected it to my 2017 5K 27" iMac's Thunderbolt 3 port, and the throughput is utterly amazing. Using Blackmagic's Speed Test, I get around 1600 MB/s write speeds and 2300 MB/s read speeds, which makes the iMac's internal 7200-rpm hard drive / 32GB SSD "fusion drive" laughable. You can well imagine which is my boot drive!
 


Ric Ford

MacInTouch
I recently purchased a 1TB NVMe SSD in a Thunderbolt 3 enclosure from macsales.com. I connected it to my 2017 5K 27" iMac's Thunderbolt 3 port, and the throughput is utterly amazing. Using Blackmagic's Speed Test, I get around 1600 MB/s write speeds and 2300 MB/s read speeds, which makes the iMac's internal 7200-rpm hard drive / 32GB SSD "fusion drive" laughable. You can well imagine which is my boot drive!
Here are some Samsung X5 benchmarks and Samsung 970 EVO tests for comparison.

I'm still a little concerned about longevity and thermal throttling when using a portable Thunderbolt 3 SSD for intensive applications (e.g. as the boot drive). Alternatives, such as the Thunderbolt 3 RAID enclosures and Thunderbolt 3 PCIe enclosures, might provide better cooling, longevity, and even performance, depending on their details. In addition, self-powered Thunderbolt 3 enclosures should be compatible with older Macs via Apple's Thunderbolt 3-to-Thunderbolt 2 [Mini DisplayPort] adapter (with more limited performance).
 


Here are some Samsung X5 benchmarks and Samsung 970 EVO tests for comparison.
I'm still a little concerned about longevity and thermal throttling when using a portable Thunderbolt 3 SSD for intensive applications (e.g. as the boot drive). Alternatives, such as the Thunderbolt 3 RAID enclosures and Thunderbolt 3 PCIe enclosures, might provide better cooling, longevity, and even performance, depending on their details. In addition, self-powered Thunderbolt 3 enclosures should be compatible with older Macs via Apple's Thunderbolt 3-to-Thunderbolt 2 [Mini DisplayPort] adapter (with more limited performance).
Yes, longevity remains the question. As for thermal throttling: I haven't seen it. Yet. Not only is it my Mojave boot drive, but I also run Linux VMs from the drive at the same time and have not seen any appreciable throttling. Then again, I haven't really measured it in-depth yet. I do know that the macsales/OWC drive does not have a fan.
 


Ric Ford

MacInTouch
Yes, longevity remains the question. As for thermal throttling: I haven't seen it. Yet. Not only is it my Mojave boot drive, but I also run Linux VMs from the drive at the same time and have not seen any appreciable throttling. Then again, I haven't really measured it in-depth yet. I do know that the macsales/OWC drive does not have a fan.
You should be able to see temperatures in the SMART data (something I couldn't do with the USB-C NVMe enclosures I tried, which weren't providing SMART data).

It might be interesting to run a couple of quick benchmarks after using the drive heavily and having it heated up, to see if they differ from results with a "cold" SSD.
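A crude way to do that without special tools is to time a large sequential write, once "cold" and once right after a sustained workload. Here's a rough Python sketch; the TEST_FILE path is a made-up placeholder for wherever the external SSD mounts, and Blackmagic's numbers will of course be more rigorous:

# Crude sequential-write timing to compare a "cold" SSD against one
# that has just been working hard. TEST_FILE is a hypothetical path;
# point it at the drive being tested.
import os, time

TEST_FILE = "/Volumes/ExternalSSD/speedtest.bin"  # hypothetical mount point
CHUNK = os.urandom(8 * 1024 * 1024)   # incompressible data, generated once
TOTAL = 2 * 1024**3                   # write 2 GB in all

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // len(CHUNK)):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())              # make sure it really hit the drive
elapsed = time.time() - start
print(f"{TOTAL / elapsed / 1e6:.0f} MB/s")
os.remove(TEST_FILE)

(Random data is used because all-zero writes can flatter controllers that compress on the fly.)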
 


Ric Ford

MacInTouch
I cannot see SMART data when I move the SSD to a Fledging Shell enclosure, either. This seems to be an issue with the enclosure/controller/driver, because Samsung 970 EVO specs list both SMART and Trim as being supported.
This seems to describe the problem with SMART data and NVMe-USB bridges/controllers:
smartmontools said:
USB devices and smartmontools
To access USB storage devices, the operating system sends SCSI commands through the USB transport to the device. If the USB device is actually a (S)ATA or NVMe drive in an USB enclosure, the firmware of its USB bridge chip translates these commands into the corresponding ATA or NVMe commands. This works straightforward for read and write commands, but not for SMART commands.

To access SMART functionality, smartmontools must be able to send native ATA or NVMe commands directly to the drive. For USB devices, at least the following conditions must be met:
  • The USB bridge provides an ATA or NVMe pass-through command.
  • This command is supported by smartmontools.
  • The operating system provides a SCSI pass-through I/O-control which works through its USB-layer.
  • SCSI support is implemented in the operating system interface of smartmontools.
Many recent USB to SATA bridges support the pass-through commands from the SAT (SCSI/ATA Translation, ANSI INCITS 431-2007) standard. Other USB bridges provide vendor specific pass-through commands. The NVMe SCSI Translation Reference does not yet specify a pass-through command for NVMe.
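To make that translation gap concrete, here's a rough Python sketch of the 12-byte SAT "ATA PASS-THROUGH" CDB that a SMART tool must get through the USB bridge just to issue ATA SMART READ DATA. The opcode and register values are the well-known ones from the SAT and ATA specs, but the framing is illustrative only; actually sending the CDB requires an OS-specific SCSI pass-through call, which is omitted here and is exactly the layer that's unstandardized for NVMe bridges.

# Illustrative sketch (not a working SMART reader): building the
# 12-byte SAT "ATA PASS-THROUGH (12)" CDB wrapping ATA SMART READ DATA.
# Field layout per the SAT spec; issuing it needs an OS-level
# SCSI pass-through interface, which is omitted here.

ATA_PASS_THROUGH_12 = 0xA1   # SCSI operation code defined by SAT
SMART_CMD = 0xB0             # ATA SMART command
SMART_READ_DATA = 0xD0       # SMART subcommand, goes in the Features field
SMART_LBA_MID, SMART_LBA_HIGH = 0x4F, 0xC2  # signature SMART commands require

def sat_smart_read_data_cdb() -> bytes:
    cdb = bytearray(12)
    cdb[0] = ATA_PASS_THROUGH_12
    cdb[1] = 4 << 1            # protocol 4: PIO Data-In
    cdb[2] = 0x0E              # T_DIR=in, BYT_BLOK=1, T_LENGTH=sector count
    cdb[3] = SMART_READ_DATA   # Features register
    cdb[4] = 1                 # Sector count: one 512-byte SMART data page
    cdb[6] = SMART_LBA_MID
    cdb[7] = SMART_LBA_HIGH
    cdb[9] = SMART_CMD         # ATA command register
    return bytes(cdb)

print(sat_smart_read_data_cdb().hex())

A SATA drive in a USB enclosure only needs its bridge chip to pass this through, which is what SAT defines; an NVMe drive in a USB enclosure would need an equivalent NVMe pass-through, which, per the above, isn't standardized.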
 


You should be able to see temperatures in the SMART data (something I couldn't do with the USB-C NVMe enclosures I tried, which weren't providing SMART data). It might be interesting to run a couple of quick benchmarks after using the drive heavily and having it heated up, to see if they differ from results with a "cold" SSD.
I may well do that in the near future, Ric. I'll provide data once I do, if you're interested.
 


Problem with SSD and OWC enclosure:

I recently upgraded my Early 2015 MacBook Air 13-inch with a Samsung 970 EVO 500GB NVMe M.2 SSD. It required this Sintech adapter. So far, so good. It has been working fine since January of this year.

My second step was to purchase the OWC enclosure to be able to utilize the 256GB Apple-branded SSD that was shipped with the MacBook Air. If my memory is correct, it worked properly exactly once. After a power-down and restart, I immediately got error messages that the drive was no longer bootable, that it had errors, and that I should copy any data that I could from it and erase and start over.

I tried Disk Utility, which indicated that it could not repair the disk, that it had a "disk full error", and that the disk had been converted to read-only. I did as suggested and used Disk Utility to reformat. I put some files on it and, surprise, after a power cycle, it was the same set of error messages all over again.

I had a hunch that this was some sort of incompatibility between HFS and the formatting scheme being used by Mojave. I also thought that perhaps Apple would at least make the new [macOS] compatible with Windows FAT, so I formatted again using exFAT, and at least that works.

Has anyone seen this sort of problem, and is there a better solution? Many thanks to the experts here at MacInTouch.
 


Ric Ford

MacInTouch
My second step was to purchase the OWC enclosure to be able to utilize the 256GB Apple-branded SSD that was shipped with the MacBook Air. If my memory is correct, it worked properly exactly once. After a power-down and restart, I immediately got error messages that the drive was no longer bootable, that it had errors, and that I should copy any data that I could from it and erase and start over.... Has anyone seen this sort of problem, and is there a better solution? Many thanks to the experts here at MacInTouch.
This sounds like something you should check with OWC Support. (Please let us know what you find out.)
 


Oh, it certainly is great. I recently purchased a 1TB NVMe SSD in a Thunderbolt 3 enclosure from macsales.com. I connected it to my 2017 5K 27" iMac's Thunderbolt 3 port, and the throughput is utterly amazing. Using Blackmagic's Speed Test, I get around 1600 MB/s write speeds and 2300 MB/s read speeds, which makes the iMac's internal 7200-rpm hard drive / 32GB SSD "fusion drive" laughable. You can well imagine which is my boot drive!
It is for this reason I am not so quick to dismiss recommending a 27" iMac with a Fusion Drive in particular circumstances. As long as we are easily able to boot from an external drive of reasonable speed, the internal Fusion Drive can be used for user data or as archival storage. Should Apple take away the ability to easily boot from an external volume, as it seems they are doing with T2-equipped Macs, I see no reason for their existence.
 


It is for this reason I am not so quick to dismiss recommending a 27" iMac with a Fusion Drive in particular circumstances. As long as we are easily able to boot from an external drive of reasonable speed, the internal Fusion Drive can be used for user data or as archival storage. Should Apple take away the ability to easily boot from an external volume, as it seems they are doing with T2-equipped Macs, I see no reason for their existence.
I have a 2019 MacBook Pro, which has the T2 chip, and I boot from external drives all the time. Initially you cannot, but after interrupting normal startup with Command-R you can change that.

If they ever do away completely with booting from an external drive, a lot of people are going to be pissed.
 



This sounds like something you should check with OWC Support. (Please let us know what you find out.)
I spoke with technical support at OWC, and this is not a problem that they had heard of. They did concede (quoting approximately) that large files, especially the incompressible data videographers typically generate, could mess up the directory structure.

I did have virtual hard disk files for VirtualBox on the drive. Is that the type of file they are referring to, I wonder?

Also, as I searched around on the OWC site I found this technical bulletin:
Relevant Part Number: OWCMAU3ENPRPCI

Please note that the SSDs which Apple supplied with some of its computers are not compatible with the OWC Envoy Pro (OWCMAU3ENPRPCI). They work only when installed in the Mac model(s) they were designed for. At the time of this writing the only machines potentially affected will be the MacBook Air 7,2 | MacBook Pro 11,4 | MacBook Pro 11,5 | MacBookPro12,1 — these computers ship with SSDs that may not be compatible with any OWC Envoy Pro (PCIe) enclosure.
Now that the original Apple SSD is no longer in the MacBook Air, I have no way, that I know of, to determine if it is one of the SSDs that "may not be compatible".
 


They did concede (quoting approximately) that large files, especially the incompressible data videographers typically generate, could mess up the directory structure.
I'm generalizing here without trying to look up the specifics of your SSD. The SSDs OWC sold when I bought one were based on the SandForce controller, which compressed files "on the fly" as they were written... There's quite a variety of files that can't be compressed because they already are compressed: video, JPEG, PDF; even the guts of a Microsoft Word .docx is a ZIP archive.

Sending a huge already compressed video file "through" a "compressing" drive controller that's processing other reads/writes simultaneously does seem a possible source of problems.

I've copied VM files around, and they are big. I've always presumed they're somewhat analogous to applications on Mac that are packages which present as folders. Maybe someone else with real knowledge can clarify.
 


... Also, as I searched around on the OWC site I found this technical bulletin:
Relevant Part Number: OWCMAU3ENPRPCI
Please note that the SSDs which Apple supplied with some of its computers are not compatible with the OWC Envoy Pro (OWCMAU3ENPRPCI). They work only when installed in the Mac model(s) they were designed for. At the time of this writing the only machines potentially affected will be the MacBook Air 7,2 | MacBook Pro 11,4 | MacBook Pro 11,5 | MacBookPro12,1 — these computers ship with SSDs that may not be compatible with any OWC Envoy Pro (PCIe) enclosure.
And just going to their site now, they still will sell you one for those models!
 


I spoke with technical support at OWC, and this is not a problem that they had heard of. They did concede that, quoting approximately, large files, especially incompressible data generally from videographers could mess up the directory structure.
I did have virtual hard disk files for VirtualBox on the drive. Is that the type of file they are referring to, I wonder?
Typically, no. As later posters in this thread have mentioned, large files that are already compressed defeat the compression-speedup capabilities of the SandForce controllers. But VM disk image files are typically not compressed, and are typically highly compressible. I don't think that's the issue you're seeing.
 


... MacBook Air 7,2
Now that the original Apple SSD is no longer in the MacBook Air, I have no way, that I know of, to determine if it is one of the SSDs that "may not be compatible".
Actually, you don't need the Apple SSD in the MacBook Air. Just go to "About This Mac" in the Apple menu, click "System Report", and look at the "Model Identifier: #,#" number; it depends not on the SSD but on the computer model.
 


... You could get an external SSD but at what cost? A backup drive needs high capacity; at least as large as the Fusion drive you're cloning. What we need is an SSD/hard disk drive combo in an enclosure that could be mounted as an external Fusion drive. Does such a thing even exist?
"Out of the box"? No, but it could be cobbled together. First, there are multiple-drive enclosures. Three examples Akitio NT U31C or OWC Mercury Elite Pro Duo Mini or Thunderbay 4.

Fill one bay with an SSD and the other with a hard disk drive. They'd need to be in a mode where they present as independent disks (may need to avoid enclosures where only one of the bays is bootable when in independent disk mode — that may knock out some of the examples above).

Second, there is a command-line diskutil APFS subcommand, e.g.

diskutil apfs create device /dev/disk4 /dev/disk5 newFusionDisk

which should create a new Fusion disk, if those are recognized as two different kinds of drives (e.g., disk4 is an SSD and disk5 is a hard disk drive). ('apfs create device' is a compound of 'createContainer' and 'create volume'.) Again, the enclosure needs to present the SSD as an SSD.

There is no GUI interface for doing this. There is probably not one coming in the future, either.

A backup and a "production" (fast) bootable drive are really two different roles. If your primary drive died and you can get anything up and running at any speed in 3-5 minutes, that is better than nothing. Lots of large-scale disaster recovery operations run at diminished capacity. That isn't necessarily a flawed state for a given budget.

I wouldn't be surprised if "home grown" Fusion drives with better SSD:hard disk drive ratios didn't do better than the stuff that Apple sells. If the SSD is sub-10% caching and you have this widespread colocation of metadata and user data, then a bigger cache probably works better. Apple is controlling costs with their Fusion drive configs, not focusing on performance. So if you make the external Fusion drive have twice as much capacity (so it can hold more snapshots or volumes), then it's probably a reasonable idea to make the SSD bigger as you make the hard disk drive bigger. Having high-performance 'fail over', though, is going to cost substantively more.
 


Ric Ford

MacInTouch
Both portable SSDs and large-capacity, fast thumb drives offer a way to back up securely when away from home, without the Cloud.
I no longer use "thumb" drives, because real SSD prices have dropped so much, while real SSDs are far more reliable (with built-in error-correction missing from "thumb" drives) and far faster as well. And nowadays they're very compact, too (e.g. Samsung T5).
 


I no longer use "thumb" drives, because real SSD prices have dropped so much
With 2.5" SATA SSDs so very cheap, I've been using them with USB-SATA adapters, much as I previously used thumb drives. But I do have a thumb drive attached to my keychain, give them to friends and families as a way to distribute photos, and use them for keeping my collection of bootable Linux ISOs and macOS installers.
 


Ric Ford

MacInTouch
But I do have a thumb drive attached to my keychain, give them to friends and family as a way to distribute photos, and use them for keeping my collection of bootable Linux ISOs and macOS installers.
I got a call a while back from a friend. His wife had been keeping files on a thumb drive (photos, I think) with no other backup. She'd been doing this for a long time - on the same thumb drive. The thumb drive had just died... And, no, we couldn't recover any files from it.
 


David, your description of the difference between Trim and "garbage collection" is clear, and one of the best I've read. What isn't clear is what you mean by logical blocks being overwritten triggering garbage collection in the absence of Trim.
It is my understanding that opening a file is just a read operation. But that if the file is edited and saved back to an SSD, the drive controller may write the new save to a different memory location and "know" to mark the prior location for collection. Is that what you mean?
It remains my understanding that, in the absence of Trim, deleting a file in a computer's OS does not mark the file's memory location on an SSD as available to clean. We certainly wouldn't want the SSD controller removing files just because they haven't been (e.g.) accessed in months. I have a couple of Android devices with Google programs that offer to do just that, to improve performance and open space, and it's actually scary.
I think I wrote about this a while ago, but I can't find the link, so I'll try to recall everything.

Any storage device, seen from the outside (e.g. over the SATA interface) consists of a sequence of logical blocks. Each logical block is a fixed size (typically 512 bytes, but some devices may use 4K or 8K logical blocks, in order to align with the flash memory's page size - see below). The storage presents itself as a linear sequence of logical blocks. The logical blocks are numbered, and the number is used to identify the physical location in the storage media where the data resides.

On a hard drive, each logical block directly corresponds to a physical location on a platter (a particular cylinder, head and sector). The mapping may be complicated but is (more or less) fixed. Every time you read/write a given logical block, the drive hardware reads/writes the same physical location. In practice, it's a little more complicated, because drive firmware may relocate a logical block to a new physical location if it thinks the old location has failed or is expected to fail soon, but overall these mappings don't change.

On an SSD, things don't work this way. There is a mapping from logical blocks to physical storage, but that mapping is constantly changing, due to the nature of how flash memory works.

Flash memory is organized in pages and blocks. A page is the smallest unit that you can write to (typically 4K or 8K bytes, but may be as low as 512 bytes). A block is the smallest unit that you can erase and will consist of a lot of pages (typically 32-128 pages plus some housekeeping overhead, yielding a typical block size between 16K and 512K bytes). For the purpose of this post, I'm going to assume a 512 byte logical block, a 4K page and a 512K flash block (128 pages).

In an SSD like this, when you write a 512 byte logical block, the SSD controller must find an unused (4K) page and write that logical block to it. If you're writing multiple logical blocks at once, the controller will probably combine them together (e.g. 8 of them at a time, so it can fill entire pages) in order to be more efficient. If you're not writing data in multiples of the page size, then the controller may find a page with free space (e.g. one where not all of its 8 logical blocks are used) and merge its content with your new logical blocks. But it can't just write your data to a page that already contains data - it must read the whole page into RAM, merge your new data, and then write the merged page back to a different page of flash memory (updating the logical-to-physical mapping tables, so no data is lost). Once this is done, the page that those other logical blocks came from is no longer used by anything - it is garbage and is marked as such, so it can later be erased.

But the controller can't just erase the page; it can only erase whole blocks (128 pages in our example). So it will just keep track of which pages are garbage (and which logical blocks within each page do or do not contain valid data). When an entire block (128 pages) is left with nothing but garbage (or garbage and empty pages), the block is safe to erase, and the controller will eventually do so, making all of its pages available for writing again.

As you can see, every time you write to an SSD, whether you're writing to a new logical block or repeatedly overwriting the same logical block, you are actually writing data to new/different pages in the flash memory, and other pages are marked as garbage. And the controller's mapping from logical block number to physical location in the flash memory changes each time - not just for the logical block you're writing, but for any other logical blocks that share pages with it.
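If it helps to see that remapping as code, here's a deliberately tiny Python model of the idea. It assumes, for simplicity, one logical block per page (real controllers pack several together and merge), and the class and names are made up purely for illustration:

# Toy flash-translation-layer model. Simplifying assumption: one
# logical block per page (real controllers pack several and merge).
PAGES_PER_BLOCK = 128

class ToyFTL:
    def __init__(self, num_blocks):
        self.mapping = {}  # logical block number -> (block, page)
        self.state = [["empty"] * PAGES_PER_BLOCK for _ in range(num_blocks)]

    def _find_empty_page(self):
        for b, pages in enumerate(self.state):
            for p, s in enumerate(pages):
                if s == "empty":
                    return b, p
        raise RuntimeError("no empty pages left: garbage collection needed")

    def write(self, lbn):
        # An overwrite never touches the old physical location: the data
        # lands in a fresh page, and the old page becomes garbage.
        if lbn in self.mapping:
            old_b, old_p = self.mapping[lbn]
            self.state[old_b][old_p] = "garbage"
        b, p = self._find_empty_page()
        self.state[b][p] = "valid"
        self.mapping[lbn] = (b, p)

ftl = ToyFTL(num_blocks=4)
for _ in range(3):
    ftl.write(7)       # "overwriting" logical block 7 three times...
print(ftl.mapping[7])  # ...leaves it at its third physical location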

If your usage (and the SSD controller's firmware) is perfect, then this system might be all you need. But in actual practice, you're randomly writing data all the time, so you find lots and lots of blocks of flash memory where there are some blank pages, some in-use pages and some garbage pages. Eventually, you run out of blank pages. Once this happens, the SSD controller needs to erase something, but there are no all-garbage blocks to erase.

This is where garbage collection comes in. The SSD controller needs to rearrange the pages with good data in order to create blocks it can erase.

One (probably naive) algorithm might be for the controller to read all the valid pages (up to 512K bytes) into RAM, erase the flash block, then write the valid pages back, leaving the formerly-garbage pages blank and ready for writing.
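In terms of the toy ToyFTL model sketched above, that naive algorithm might look something like this (again, purely illustrative):

# Naive garbage collection for the ToyFTL sketch above: pick the block
# with the most garbage, remember which logical blocks still live there,
# erase the whole block, then rewrite the survivors into fresh pages.
def collect(ftl):
    victim = max(range(len(ftl.state)),
                 key=lambda b: ftl.state[b].count("garbage"))
    survivors = [lbn for lbn, (b, _) in ftl.mapping.items() if b == victim]
    ftl.state[victim] = ["empty"] * PAGES_PER_BLOCK  # erase (whole block only)
    for lbn in survivors:
        del ftl.mapping[lbn]  # drop the stale mapping...
        ftl.write(lbn)        # ...and write the data back somewhere fresh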

A better algorithm might do this in the background when the SSD is idle, in order to avoid making your OS hang (which is what seems to happen if an SSD actually runs out of free pages and is forced to garbage-collect at that point).

Other algorithms may track usage statistics and proactively relocate logical blocks and pages to different locations in a way to maximize the ability to free blocks in the background.

Actual SSDs use pretty complicated algorithms for garbage collection in order to deal with issues of wear leveling and write amplification, both of which can shorten the life of an SSD and degrade performance if they are not dealt with properly.

And now here's where Trim comes in.

Most operating systems do not try to erase logical disk blocks when you delete a file. They just delete directory entries and record that the logical blocks are free. This is a good thing (unless you require secure erasure), because it's a lot faster. But seen from the SSD controller's view, the logical blocks are not free - they're still holding data, and they will continue to hold that data until the logical block is overwritten with new data — which means the flash memory's blocks and pages may contain non-garbage data corresponding to files that have been deleted. We'd like to treat those blocks as garbage, so they can be collected, but we can't, because [the SSD] has no way to know that they correspond to deleted files.

Trim is an API where the operating system can tell the SSD that certain logical blocks no longer contain valid data. Typically, these are blocks corresponding to deleted files, but it could be for any kind of deleted data (like a partition you just deleted or erased). When the OS calls Trim for a logical block, the SSD controller immediately marks the underlying storage (a piece of a page) as garbage. So when garbage collection is performed on that page or block, the controller won't try to preserve that chunk of data. This allows the garbage collection algorithm to erase more blocks and not waste time preserving the content of deleted files.
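Tacked onto the toy model from earlier, Trim is then just a third entry point: the OS reports a logical block as dead, and the controller marks its page as garbage immediately instead of waiting for an overwrite. A sketch:

# Trim in the ToyFTL sketch: the OS says "this logical block no longer
# holds valid data," so its page becomes garbage right away, and the
# collector never wastes time copying deleted-file data around.
def trim(ftl, lbn):
    if lbn in ftl.mapping:
        b, p = ftl.mapping.pop(lbn)
        ftl.state[b][p] = "garbage"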
 


I think I wrote about this a while ago, but I can't find the link, so I'll try to recall everything.
Again, thanks, David — where I get left behind in the forest of blocks, pages, etc. is this, from a 2010 article explaining SSDs that I believe is still applicable:
Enterprise Storage said:
Fixing SSD Performance Degradation, Part 1
Without TRIM, the SSD controller does not know when a page can be erased.
... The only indication that the controller has is when it writes a modified block to a clean block from the block pool. It then knows that the “old” block has pages that can be erased.
I think the latter comment applies to, e.g., editing a file that's already on the SSD, which, when saved back to the SSD, is written to a "clean block." That differs from the "delete" function communicated by Trim.

This is now, I think, less important than when Trim was new and Apple didn't provide for its use on third-party SSDs. Still, we have several Mac Minis at work, all booting from external USB 3-connected SSDs, all without the benefit of Trim. And I've started using SATA SSDs with USB-SATA connectors in place of thumb drives, again without the benefit of Trim. Those drives are so cheap, I don't lose any sleep worrying they'll fill and "just stop", as some of the first SSDs available to consumers in 2008 did. 128 GB for $22 is one thing, $595 for 80 GB quite another.
 


If you have Trim enabled, then those files will become garbage (and subject to collection at some point in the future) as soon as you empty the trash.
My 2017 iMac shipped with Mojave 10.14.3. Is Trim enabled by default, or do I have to do something to enable it?
 


My 2017 iMac shipped with Mojave 10.14.3. Is Trim enabled by default, or do I have to do something to enable it?
Everything I’ve read says that Apple always enables Trim on its OEM SSDs, and it has nothing at all to do with what macOS you run. If you have external SSDs or replaced the internal with a 3rd-party SSD, then consult with the manufacturer to see what they recommend.

I recently read that enabling Trim may not be optimal for the long term, but I haven’t been able to locate what I saw. I’ll be back if and when I do.
 


I recently read that enabling Trim may not be optimal for the long term, but I haven’t been able to locate what I saw. I’ll be back if and when I do.
I've read the same thing - that some SSD makers claim that enabling Trim is bad for their device.

This makes no sense to me. Trim is simply an API where the OS tells the SSD controller about logical blocks that are no longer in use. If the SSD controller's design is such that this information isn't helpful, then it is free to ignore the command. If the SSD controller is doing bad things in response to the command, then its firmware has critical bugs that need to be fixed. Telling people that they simply shouldn't use Trim is a weasel response that is trying to blame customers for their own faulty product.
 



... Fill one bay with an SSD and the other with a hard disk drive. They'd need to be in a mode where they present as independent disks (may need to avoid enclosures where only one of the bays is bootable when in independent disk mode — that may knock out some of the examples above). Second, there is a command-line diskutil APFS subcommand, e.g.
diskutil apfs create device /dev/disk4 /dev/disk5 newFusionDisk
which should create a new Fusion disk...
I have an iMac with a fusion drive made up of a 128GB SSD plus a 2TB 7200-RPM hard drive. I suspect that one could replace the standard SATA hard drive with an SSD and improve performance. As others have pointed out, it is far easier and less costly to buy a 2TB Samsung T5 and run it from the USB-C port as the boot drive. I would then use the fusion drive as the backup via Carbon Copy Cloner. The T5 is so small, one can hide it on the iMac stand.
 



I have an iMac with a fusion drive made up of a 128GB SSD plus a 2TB 7200-RPM hard drive. I suspect that one could replace the standard SATA hard drive with an SSD and improve performance. As others have pointed out, it is far easier and less costly to buy a 2TB Samsung T5 and run it from the USB-C port as the boot drive....
Same path for even a hard disk drive-only iMac. The 500GB version of the T5 linked above has a 'street price' of $89. Apple wants to charge $100 to slide in a measly 32GB (or 64GB) SSD for the base-level Fusion drive. For anyone whose working space is 425GB or less, it is cheaper to buy the external SSD and just boot that way.

The street price for the Samsung EVO 2TB SSD is about the same as the T5. So, if Apple had just used a SATA SSD in the first place, they would have ended up at the same cost for their Fusion setup (even with some margins on top). Apple's PCIe blade SSDs and T2 SSD solutions are faster, but the SATA SSDs are now a more viable solution that they are avoiding for entry models. Since the SATA SSDs are capped on bandwidth (simple SATA is probably not moving forward any time in the immediate future), the focus is more on $/GB affordability. The prices for SATA SSDs are around the same zone that HDDs were in after the Thai floods (and Apple managed to do entry models in that era just fine at similar price points to now).

For an iMac, Apple wants to charge $600+ for 1TB and $700+ for 2TB versus the less than $300 "street price" for 2TB here. For sub-3TB clone backup drives, yes, an external SSD is probably better than a "build your own" external Fusion drive for a "fast" clone, as long as they are highly active backups (hooked up on a regular basis). For more archival, offline (powered down) backups, an APFS hard disk drive backup is better, even if slower.

However, for capacities over 3TB, things get more into a grey area. In 2020, that may change with the next round of even more affordable (lower $/GB) SATA SSDs. Apple's Fusion drives stop at 3TB, so not much long-term impact there. For over 3TB capacity drives, though, HFS+ isn't going away any time soon.

The bigger "painted into a corner" problem for Apple is that, if they had cost-effective SSD in products alongside their highly marked-up ones, they'd have deeper comparison issues.
 


An Apple-standard SSD should automatically have Trim enabled. You can confirm this by running the System Information app and looking for the information about your SSD. It will say if Trim is enabled or not.
Thanks! It says Trim is supported (TRIM support: yes), but there is no specific language to say whether it is enabled or disabled.

Is Apple... using supported to mean enabled?
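(For anyone who wants to script that check rather than open System Information, something along these lines should work; it assumes the drive shows up under system_profiler's SATA or NVMe sections and that they include a "TRIM Support:" line, which is the same field System Information displays.)

# Rough check of the "TRIM Support" field via system_profiler.
# Assumption: the drive appears under the SATA or NVMe data types.
import subprocess

for data_type in ("SPSerialATADataType", "SPNVMeDataType"):
    report = subprocess.run(["system_profiler", data_type],
                            capture_output=True, text=True).stdout
    for line in report.splitlines():
        if "TRIM Support" in line:
            print(data_type, "->", line.strip())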
 



Same path for even a hard disk drive-only iMac. The 500GB version of the T5 linked above has a 'street price' of $89. Apple wants to charge $100 to slide in a measly 32GB (or 64GB) SSD for the base-level Fusion drive. For anyone whose working space is 425GB or less, it is cheaper to buy the external SSD and just boot that way....
I know it's a bit costly, relatively speaking anyway, but what I purchased from OWC is their 1TB Envoy Pro EX Thunderbolt 3 external drive with an M.2 NVMe SSD inside (cost is around $280). Throughput, when hung off the back of one of the TB3 ports on my 2017 5K iMac, is approx. 2300 MB/sec reads and 2470 MB/sec writes. The performance when using the drive as the boot drive is phenomenal.
 


Ric, just in case anyone requires it, I have a set of free "TRIM Tools" I crafted using some already-written Terminal commands I found on the Web, wrapped up in three Automator workflows. Designed for internally installed third-party SSDs, they will turn TRIM on or off and report the status of TRIM. They, along with a "read me", may be found here.
 


Same path for even a hard disk drive-only iMac. The 500GB version of the T5 linked above has a 'street price' of $89. Apple wants to charge $100 to slide in a measly 32GB (or 64GB) SSD for the base-level Fusion drive. For anyone whose working space is 425GB or less, it is cheaper to buy the external SSD and just boot that way....
I have an older OWC Envoy external Thunderbolt 3 SSD that operates somewhere in the 1000-2000 MB/s range and does quite well. Their newer Envoy Pros are even faster, so I cannot see buying anything more than the minimum SSD on a new machine and then using an external Thunderbolt SSD.
 


I've read the same thing - that some SSD makers claim that enabling Trim is bad for their device. This makes no sense to me. Trim is simply an API where the OS tells the SSD controller about logical blocks that are no longer in use.
The API isn't quite that simple. Trim hasn't been monolithic.
Wikipedia said:
Trim (computing)
A drawback of the original ATA TRIM command is that it was defined as a non-queueable command and therefore could not easily be mixed with a normal workload of queued read and write operations. SATA 3.1 introduced a queued TRIM command to remedy this.
There are, at this point, three variations of Trim.
Wikipedia said:
Trim (computing)
There are different types of TRIM defined by SATA Words 69 and 169 returned from an ATA IDENTIFY DEVICE command:
  • Non-deterministic TRIM: Each read command to the Logical block address (LBA) after a TRIM may return different data.
  • Deterministic TRIM (DRAT): All read commands to the LBA after a TRIM shall return the same data, or become determinate.
  • Deterministic Read Zero after TRIM (RZAT): All read commands to the LBA after a TRIM shall return zero.
Casually, lots of folks treat Trim as if it is the last of the three. (Some "Trim working" tests will do a read for zeros after the call to see if it is 'working' - that Trim will relatively quickly scrub the "old" block data out and replace it with zeroes. If there are multiple read/write requests coming in and a minimally capable SSD controller, that means kicking parallel queues out of the "swimming pool" so that it can single-track the work. Note that it is only the LBA that has to be zero, so an advanced controller can just send back zeros if the block mapping is on the "to be garbage-collected" list. The 'old' data would still be in the NAND block, even though the user level would look like zeroes, which is why writing zeroes isn't necessarily a secure way to erase an SSD.)
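For the curious, those variations are advertised as individual bits in the drive's IDENTIFY DEVICE data, which is where tools like smartmontools read them from. Here's a rough decoding sketch; the word/bit positions are my reading of the ACS spec, so treat them as assumptions to verify:

# Sketch: decoding TRIM-related capability bits from a 512-byte ATA
# IDENTIFY DEVICE buffer (256 little-endian 16-bit words). Word/bit
# positions per my reading of the ACS spec; treat as assumptions.
import struct

def trim_capabilities(identify_data: bytes) -> dict:
    words = struct.unpack("<256H", identify_data)
    return {
        "trim_supported": bool(words[169] & 1),  # DATA SET MANAGEMENT (TRIM)
        "drat": bool(words[69] & (1 << 14)),     # deterministic read after TRIM
        "rzat": bool(words[69] & (1 << 5)),      # reads return zeros after TRIM
    }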

In the context of wanting maximum performance with a relatively high read/write queue I/O depth, the original 'bad' Trim pragmatically demanded that the depth be pushed down to zero until the drive had finished at least a portion of the Trim 'work' (impact on controller internal state).
If the SSD controller's design is such that this information isn't helpful, then it is free to ignore the command. If the SSD controller is doing bad things in response to the command, then its firmware has critical bugs that need to be fixed.
The problem was that you couldn't completely ignore it. The way the original Trim was formed, file system folks could expect Trim to clean up race conditions for them. Trim was a big serial lock on the SSD. Once folks wrote code expecting that, you could end up with issues.
Telling people that they simply shouldn't use Trim is a weasel response that is trying to blame customers for their own faulty product.
I don't think there was a vendor who said never use Trim in any context. It may have been SandForce, who said something to the effect of don't use Trim if you want max performance. That was also in the context of other SSD vendors proclaiming that Trim has to be used or else the "sky will fall." That, too, was a limitation of their SSD controllers (bigger locking around Trim was needed, if they did not do something more sophisticated). There were controllers with really bad garbage collectors that basically needed the Trim metadata to do a reasonable job. Trim should be additional, "cherry on top" data for garbage collection... not the whole mechanism.

So part of "bad" versus "good" was really a battle over queued vs non-queued Trim. Once queued Trim arrived, the whole "bad" argument faded.

P.S. I think some of Apple's "our Trim is the only good Trim" notion was grounded in some of these controller-versus-standard quarrels. At this point, the standards have evolved, and the controllers have had multiple iterations on the stable standards, so that position is much weaker now. Enabling trimforce with a new, modern SSD controller is probably very low risk. But Trim has changed, and if you go back far in time with a relatively old drive built to very old standards, then it is a higher risk.
 

