I understand that Ethernet is probably easier/better for connecting a whole bunch of users to one NAS, but the irony is that 10GbE connections will likely require a Thunderbolt adapter, since only special-order 2018 Mac Minis have the option of built-in 10GbE ports.

2-meter Thunderbolt 3 cables are readily available:
And long Thunderbolt 2 cables are available, too:
(Thunderbolt 2 optical cables can be used with Thunderbolt 3 Macs via Apple's Thunderbolt 3-Thunderbolt 2 adapter.)
Corning adapter and cable available on Amazon, $215 for 18 ft. Max speed: Thunderbolt 2 20Gb/s bi-directional. There's also a 100 ft version for $529. There aren't a lot of reviews, but the proportion of reported product failures is discouraging.

I know there are use cases for extremely fast internal, external, and network storage. Some of those needs, at least for managing "business"-type files are reduced by "intelligent" strategies. Our main work Synology backs up to the one in the vault daily; the backup takes only minutes a day, even over "slow" 1GbE, because it's an incremental transfer. Backing up to the one connected via the Internet is slower but still pretty fast; it's automated and happens in the background "after hours" to be sure all the day's work is transmitted.
 


Ric Ford (MacInTouch)
SSDs theoretically can run a lot faster than their spinning ancestors, but that speed is usually cost-prohibitive: SSDs optimized for fast writes (Intel Optane P4801x series, etc.) run about $2.50/GB, well above the usual price points for regular SSDs.
The Samsung 970 EVO (Plus) is blindingly fast at $230/TB (i.e. $0.23/GB), so your pricing example is an order of magnitude off the norm.

You may have missed the earlier testing I did of various data transfer methods, in which GigE was very mediocre in performance (but I did not have 10GigE equipment on multiple computers/devices to test).

I also posted more benchmarks of directly attached storage, including tests of the Samsung X5 compact Thunderbolt 3 SSD. I'm pretty certain that 10GbE isn't going to give you performance like this, no matter how "saturated" it is:
Let's test a 1TB Samsung X5 Thunderbolt 3 SSD, straight out of the package, connected to a 2018 MacBook Pro:

AJA System Test
  • Write: 2093 MB/sec
  • Read: 2681 MB/sec
Blackmagic Disk Speed Test:
  • Write: 2033 MB/s
  • Read: 2627 MB/s
 




And long Thunderbolt 2 cables are available, too:
Corning said:
Thunderbolt™ Optical Cables
Thunderbolt™ Optical Cables by Corning connect computers and devices at incredible speed and over longer distances. They’re thin, light and remarkably tough — Optical Cables by Corning can be bent, squeezed, and tangled.
Available in 5.5m, 10m, 30m, and 60m lengths.
Corning adapter and cable available on Amazon, $215 for 18 ft. Max speed: Thunderbolt 2 20Gb/s bi-directional. There's also a 100 ft version for $529.
And just for reference, here's the shocking price for the 60m Corning Optical Thunderbolt 2 cable: $799.95!
 


I'm in the process of selecting another Synology NAS for work, and the model that seems to best match our small office environment is the relatively new DS1019+. It's a five-disk NAS, with two slots for NVMe cache (read or write or read/write), but the reviews have lamented it is only 1-Gigabit Ethernet and offers no upgrade path to 10Gbit. It does come with two 1Gbit RJ45 ports, which can be used for link aggregation for more speed, though from what I've read, that isn't easy to set up and wouldn't much matter in the real world when the connected computers are themselves limited to 1Gbit.
We have a couple of DS1517+ devices, which sound similar to the DS1019. They are 5-bay and have the cache slots (which we've never used). They have four 1Gbit Ethernet ports, but I found the aggregation to be fiddly to set up and made basically no difference to performance.

We've had them about a year now, and they have settled down nicely - they just sit there and tick away. The problem is performance - frankly it's just poor. Indexing, which is virtually impossible to shut off, cripples the machine, but even when fully indexed, it's not even close to the performance of our macOS Server machines. This is despite tweaking a lot of little things (SMB etc) which are known to cause slowdowns.

It would be interesting to know if the 10Gbit ports would make a difference, but I'd be surprised. I just don't think they're made for performance, although I can't be critical of their reliability and stability. They're rock solid.
 


The Samsung 970 EVO (Plus) is blindingly fast at $230/TB (i.e. $0.23/GB), so your pricing example is an order of magnitude off the norm.
... It all depends on the application. I don't have a benchmark for the Samsung EVO 970 using the same tool as I do for my Optane stick. However, the folks at userbenchmark.com note that 4K random write performance is 144MB/s (250GB module), 145MB/s (500GB module), 144MB/s (1TB module), and 147MB/s (2TB module). Per my test above, the Optane module is 6x faster than that at the same block size (and priced accordingly by Intel). Some of the EVO random read speeds recorded are truly shocking, down in the 50MB/s range.

Whether to care about this difference is likely a question of how the drive is being used. If there is a lot of sequential writing going on, then random writes / reads are likely not so important (it's all sequential).

A general-purpose server and computer likely would benefit from very good random-write performance, because it improves responsiveness regardless of the file type being written. Additionally, the Intel Optane P4801x series features inherent power-loss protection while the Samsung does not. That is important for data integrity in my FreeNAS.
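To put those 4K figures in more familiar terms, throughput at a fixed block size converts directly to IOPS. A small illustrative sketch using the numbers quoted above (the 6x multiplier is the comparison from this post, not a measured Optane figure):

```python
# Convert 4K random-write throughput (MB/s) to IOPS, to make the
# Optane-vs-EVO comparison concrete. Purely arithmetic, no I/O performed.

BLOCK = 4096  # 4 KiB random I/O block size

def iops(mb_per_sec: float, block: int = BLOCK) -> int:
    """I/O operations per second implied by a throughput at a block size."""
    return round(mb_per_sec * 1_000_000 / block)

print(iops(144))      # EVO-class 4K random write -> 35156 IOPS
print(iops(144 * 6))  # roughly 6x, Optane-class  -> 210938 IOPS
```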
I also posted more benchmarks of directly attached storage, including tests of the Samsung X5 compact Thunderbolt 3 SSD. I'm pretty certain that 10GbE isn't going to give you performance like this, no matter how "saturated" it is:
FWIW, I'm not arguing that Ethernet is better than Thunderbolt 2+ re potential transfer speeds - just the overhead difference will settle that issue in no time. It's more a question of what tool is more appropriate for your application. I use eSATA for my DAS, because the enclosures are cheap (Oyen Digital Mobius 5), feature adequate performance, and if I want hardware RAID, I can have that, too.

Saturating a 10GbE link means transferring ~1000MB/s. It's hard to do, just as it's hard to saturate a Thunderbolt 3 cable - though if you hang too many things off a single Thunderbolt bus (DAS, 4K monitor, etc.), you might notice slowdowns. I "spread the love" and use multiple Thunderbolt cables for high-throughput services like 10GbE connections, 4K monitors, and the like.
 




I'm in the process of selecting another Synology NAS for work, and the model that seems to best match our small office environment is the relatively new DS1019+. It's a five-disk NAS, with two slots for NVMe cache (read or write or read/write), but the reviews have lamented it is only 1-Gigabit Ethernet and offers no upgrade path to 10Gbit. It does come with two 1Gbit RJ45 ports, which can be used for link aggregation for more speed, though from what I've read, that isn't easy to set up and wouldn't much matter in the real world when the connected computers are themselves limited to 1Gbit.
In my experience, the maximum sustained speed you'll get out of a Gigabit Ethernet connection is about 120-128MB/s. How well link aggregation (LACP) works depends a lot on the protocols in use, the switches, etc. For SMB/AFP, there is no advantage for a single computer, as each network process is single-threaded and will only choose one path. When multiple computers access a NAS, sometimes you get lucky (each computer uses a parallel data path) and sometimes you don't (both connections use the same cable). It all depends on the hardware, configuration, and so on.
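That ceiling falls out of simple arithmetic. A back-of-the-envelope sketch (the frame-overhead figures are standard values for a 1500-byte MTU over TCP/IPv4, assumed here, not from the post):

```python
# Why ~118-125 MB/s is the practical ceiling for one gigabit Ethernet link.
# Figures assume standard (non-jumbo) Ethernet frames carrying TCP/IPv4.

LINK_BITS_PER_SEC = 1_000_000_000  # 1 Gbit/s line rate

PAYLOAD = 1460                    # TCP payload (MSS) per frame
ETH_OVERHEAD = 14 + 4 + 8 + 12    # header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20          # IPv4 + TCP headers (no options)

wire_bytes = PAYLOAD + IP_TCP_HEADERS + ETH_OVERHEAD
efficiency = PAYLOAD / wire_bytes

raw_MBps = LINK_BITS_PER_SEC / 8 / 1_000_000  # 125 MB/s line rate
payload_MBps = raw_MBps * efficiency

print(f"Line rate:        {raw_MBps:.0f} MB/s")    # 125 MB/s
print(f"TCP payload rate: {payload_MBps:.1f} MB/s")
```

The payload rate works out to roughly 118-119 MB/s, which matches the real-world numbers reported above once protocol chatter is accounted for.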

Given how low the cost of quality 10GbE hardware has become, I'd jump to that instead. For example, Mikrotik has a whole bunch of switches now that have two or more SFP+ ports (each of which is good for 10GbE). Depending on your situation, either drop in a direct-attached twinax cable (the cheapest option) or an optical transceiver (about $30 each, used - remember to use the right optical light pipe between these transceivers). The beauty of optical is that it's inexpensive, offers great performance, and is largely immune to the distance limitations that constrain Thunderbolt and copper Ethernet cables.

I'll put in another plug for a quality FreeNAS rig if you're looking for performance and data integrity. For example, a great SOHO motherboard is the Supermicro X10SDV-2C-7TP4F, which offers a 16-channel LSI 2116 HBA (for up to 16 SATA3 or SAS2 drives), two PCIe 3.0 x8 slots, a single PCIe 3.0 x4 slot for the likes of the P4801x, a low-power CPU (25W TDP), two SFP+ ports, and awesome performance. The board is about $450 - memory and so on will likely double the cost. Add a professional case off eBay (about $300 for a CSE-836), and you have an expandable system with redundant power supplies, a kicking motherboard, and oodles of expansion capabilities. Naturally, you can also go smaller.

Used Supermicro servers come on eBay all the time and can be much cheaper, with the only downside being that their CPUs tend to be much more power-hungry than the new embedded rig I referenced. Best of all, FreeNAS does a great job supporting both AFP and SMB (now including Time Machine support on SMB as well). The only downside is the learning curve to get the system installed, set up, and so on. That's where the community really shines.
Over my years of using computers, I've paid considerable amounts to "future-proof" for technology options that unfortunately never seem to distill from vapor. Or I find, as in my adoption of Thunderbolt 1, that my expensive technology is effectively (or even absolutely) obsoleted.
It's sometimes hard to differentiate between the "must" vs. the "want". For example, I'd love to have inexpensive, external RAID arrays for my backups that feature hardware RAID chips, 5+ hot-swappable, tool-less drives, and Thunderbolt 3 connections... Unlikely to happen anytime soon! Instead, pretty much every external DAS that runs Thunderbolt 3 relies on SoftRAID to do the RAID (which I reject) and is expensive to boot.

I know, some people like SoftRAID better... I like a solution that "just works" whether I plug it in today or in 10 years. I recently had that experience when I connected an old 128MB phase-change optical drive to my "modern" Mac with a combination of SCSI, FireWire, etc. It just worked. Keep it all in the Apple universe re drivers, and usually things are backwards compatible.

Similarly, the eSATA connection to my DAS is rock solid, allows me to diagnose SMART error issues, and is fast enough to keep up with my 10GbE connection to the server. The same goes for Apple AirPorts: Apple Airport base stations may no longer be the fastest rig in town but they just work.

While Apple may have discontinued making them, they still support them, the hardware / software is stable, and the lower performance is an acceptable tradeoff vs. dealing with the vagaries of unstable hardware / software like I have experienced with some other vendors. Sometimes, it's better not to be on the bleeding edge - let the other users be the beta testers.
 


Apple Airport base stations may no longer be the fastest rig in town but they just work.
While Apple may have discontinued making them, they still support them, the hardware / software is stable, and the lower performance is an acceptable tradeoff vs. dealing with the vagaries of unstable hardware / software like I have experienced with some other vendors.
I wish they "just worked" for me. I have a last-gen 3TB Time Capsule connected to an Arris DOCSIS 3.1 cable modem (endorsed by, but not "contaminated by", Comcast; i.e., it doesn't contain a router that Comcast can employ for its own purposes). The AirPort Extreme does DHCP/NAT for all devices on my LAN, and the wireless network is extended by a last-generation "pizza box" AirPort Express running in bridge mode. Periodically, one or the other Apple device does something wonky, and I find myself connected to the internet via someone else's neighborhood xfinitywifi SSID. Fortunately, my IoT Nest "CPU" (NestGuard, which I think uses proprietary protocols to "talk" to its sensors but WiFi to talk to my Internet access point) can fall back to a cellular connection to the internet when that happens, and the NestGuard informs me of such changes via a change in a prominent LED on its surface, as well as an automated email and a local voice message.

I loved the simplicity of configuring Apple's AirPort devices, and I'm still puzzled and angered by Apple's decision to exit that market. Monitors, printers, and hard drives became low-margin commodity items, so I accepted Apple's abandoning them, but so much of the consumer router market (mesh networking products possibly excepted) is dominated by absolute junk that I still don't understand why Apple bowed out.
 


I wish they "just worked" for me. I have a last-gen 3TB Time Capsule connected to an Arris DOCSIS 3.1 cable modem (endorsed by, but not "contaminated by", Comcast; i.e., it doesn't contain a router that Comcast can employ for its own purposes). The AirPort Extreme does DHCP/NAT for all devices on my LAN, and the wireless network is extended by a last-generation "pizza box" AirPort Express running in bridge mode. Periodically, one or the other Apple device does something wonky, and I find myself connected to the internet via someone else's neighborhood xfinitywifi SSID.
The only thing I can recommend is trying out another AirPort Extreme 6th-gen as an extender instead of a 5th-gen AirPort Express. The Express had a pretty marginal antenna in my experience, which made it an OK music repeater / print server for cheap USB printers but not something I would depend on to transfer lots of data.

I am also not a fan of how the AirPort Express fused it all into one box, exposing the WiFi electronics inside to potentially-damaging heat. The 6th gen AirPort Extreme has a fan (which is quite hidden and rarely comes on) for that purpose.
 


I am also not a fan of how the AirPort Express fused it all into one box, exposing the WiFi electronics inside to potentially-damaging heat. The 6th gen AirPort Extreme has a fan (which is quite hidden and rarely comes on) for that purpose.
Of course, if I'm to follow your suggestions, that would mean I'd need to find a last gen Extreme in the marketplace…

There are a few other things I can do to improve signals in my home; e.g., move the Extreme from its current position on the floor in a corner under my desk, and connect it to my iMac via Ethernet rather than WiFi (the latter won't help much, given that it's only 3 feet away from the iMac).

And, stimulated by your suggestion, I just looked at BestBuy and Amazon. On Amazon, the Extremes are available from one "marketplace" seller, and at prices reminiscent of original Shelby Cobras (living off the "they just work" patina). At BestBuy, "geek squad refurbished" Extremes are just below the original Apple retail price!
 


Of course, if I'm to follow your suggestions, that would mean I'd need to find a last gen Extreme in the marketplace…
Try eBay. There are a lot of highly-rated resellers there that will sell you a "fully-functional", refurbished one for less than $100. That's the route I would choose. All of my recent 6th-gen Extremes have been refurbs without any notable issues; I just updated them to 7.9.1.
 


One of our neighbors (I don't know who) has an open WiFi network called "dlink" with a signal that comes in so strong, our Macs often pick it up as the strongest accessible network. We have used it intentionally a couple of times when our WiFi was out, but usually we only notice it when we can't connect to the printer or our other Macs, and I check which WiFi network we're on. This has been going on for at least a few years, and I doubt the neighbor knows their network is open.
 


One of our neighbors (I don't know who) has an open WiFi network called "dlink" with a signal that comes in so strong, our Macs often pick it up as the strongest accessible.
That is a bug in macOS Sierra for me, as well. Have two networks of equal intensity, select one to be “preferential”, and macOS Sierra (and by extension the iPhones via iTunes sharing) will frequently select the less preferred network anyway (guest vs. private, using the same AirPort base stations).

The most common denominator for this behavior is coming home from another location with a network whose name/password are the same - macOS apparently doesn't bother to check if a preferred network is even available, and ditto for the iPhones.

The only solution I have found is to name every network differently and then tell the computer to forget the credentials for the less preferred networks on a site by site basis. A bit annoying!

Along a similar vein, I have found my network source preferences in macOS so unreliable that I manually disable WiFi whenever I have large files to transfer to the server. That way, I can force all TCP/IP traffic through the 10GbE interface.

All this, despite setting use preferences in the control panel, etc. The implementation is simply buggy in my experience. Whenever I upgrade to Mojave, I hope it’s better. But I’m not holding my breath!
 


I recorded some video files for the radio station I work at on my iPhone 8 (after recently upgrading to iOS 13). I needed to quickly turn the files around for use for a deadline....
The non-support from Apple referred to by JohnRiley is indeed indefensible.

Regarding the video transfer part, though, I have found AirDrop to be a very reliable way to transfer files from iPhone/iPad to computer and vice versa. It is quick and easy, and would have been faster, probably, than over a USB-Lightning connection, and would have solved his first problem.

The one time I gave up on it and instead used a flash drive was when I realized I was trying to send files that totaled over 3GB; that seemed like it was going to take more time than my patience that day would allow. In the end, the flash drive (even though it was USB 3) wasn't as speedy as I wished, either, due to my impatience.

As an aside, it certainly seems that when I need to print something in a hurry because I'm running out the door, the printer stops working; when I need to send an email in a hurry because a window of opportunity is closing, the internet goes flaky; when my wife needs to find information located in an old email for an imminent meeting, it can't be immediately found. For my family, anyway, computers do not reward impatience.
 


Ric Ford (MacInTouch)
Regarding the video transfer part, though, I have found AirDrop to be a very reliable way to transfer files from iPhone/iPad to computer and vice versa. It is quick and easy, and would have been faster, probably, than over a USB-Lightning connection...
Doesn't AirDrop rely on Bluetooth? [Apparently not entirely - see below...] That seems dreadfully slow for any large transfers. In fact, I already tested its speed:
#performance #transfer #benchmarks #airdrop
 



Ric Ford (MacInTouch)
AirDrop only uses Bluetooth for discovery and to negotiate a Wi-Fi Direct connection for the actual transfer.
Not quite, apparently, but close:
Wikipedia said:
Wi-Fi Direct
As of March 2016, no iPhone device implements Wi-Fi Direct; instead, iOS has its own proprietary feature, namely Apple's MultipeerConnectivity.[22] This protocol and others are used in the feature AirDrop, used to transfer large files between Apple devices using a similar (but proprietary) technology to Wi-Fi Direct.
 


Ric Ford (MacInTouch)
Interesting performance comparison across different file-sharing protocols:
Photography Life said:
AFP vs NFS vs SMB Performance on macOS Mojave
... Based on what I see from two different NAS vendors, it looks like SMB v3 is the best network protocol one can use in terms of overall performance on macOS, with AFP being the second best. Both SMB v1 and NFS should be avoided – they demonstrated rather disappointing write performance. If specific macOS features that AFP provides are important to you, then you should test both SMB v3 and AFP on your NAS device and see how each does on its own.
#benchmarks #afp #smb #nfs
 


The good news is that Apple appears to have addressed many of its bugs in SMB, allowing it to perform as well as the deprecated AFP. Better macOS SMB interoperability has also been achieved by storage vendors, in terms of supporting OS X machines, SMB bugs and all. A lot of work went into enabling Time Machine and other Apple-specific support to make SMB as responsive as possible.

However, actual network transfer performance is dependent on so many factors that one should focus on each particular use case – connected hardware, software, the network, etc. – in addition to the protocol. Sometimes, even NFS may make sense (it's multithreaded, unlike SMB or AFP – large sets of small files may process faster, depending on the environment). While SMB2+ or AFP have been about equal in performance in my experience, there are a lot of factors that can significantly impact performance, regardless of protocol.

For example, my FreeNAS server has a 1TB L2ARC that only caches metadata. Using Carbon Copy Cloner to sync stuff has been accelerated by up to 12x, thanks to the metadata being on hand quickly to help rsync (underlying Carbon Copy Cloner) to execute. Directory browsing is also much faster. However, at least on FreeNAS, this cache is flushed with every restart and has to get "hot", which means running through a directory with Carbon Copy Cloner at least three times for maximum benefit.

On the write side, the number of disks, their latency, RAID config, etc. also have a tremendous impact. Caches can help, but is your cache protected in case of power failure? Consider including the power supply inside your machine in that equation, if the data is important enough - some servers offer redundant power supplies for this sort of issue - or use a power-loss-protected cache.

Also, beware using cheap SSDs as cache drives without doing your research. Some OEMs goose short-term SSD performance by internally pairing a small fast cache up front with much slower and cheaper flash in the back. Once that front-end cache is full, performance plummets... see this thread with lots of user-submitted test results.

FWIW, I am in the process of switching protocols from AFP to SMB for all but the legacy shares as part of my transition to Mojave. Time marches on!
 


It's interesting to see how sometimes moving in new directions can create unintended distractions. I tried out hosting my ZFS shares using SMB instead of AFP and ran into an unexpected problem: Time Machine support still seems marginal on FreeNAS (worked at first, then stopped working, followed by not being able to re-init the same share it used to work on). Eventually, I'll figure that out.

Worse, I ran into the 255-character path-name limit on SMB. Directory names got mangled. Here I thought Mojave would have no AFP support, but it does... for Time Machine as well as everything else. So that ends my SMB migration dead in its tracks.
 


... FWIW, on my LAN, I reserve a hundred addresses (192.168.1.1 through 192.168.1.100) for my static pool, leaving the rest of the subnet (192.168.1.101 through 192.168.1.254) as the dynamic pool. (Addresses 192.168.1.0 and 192.168.1.255 are reserved as the network and broadcast addresses. Don't assign them to any devices, ever.)
I configure my major equipment (desktops, laptops, printers and routers) with static addresses in the static pool, leaving everything else (phones, tablets, home appliances, etc.) with dynamic addresses that the DHCP server assigns from the dynamic pool.
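The pool split quoted above can be expressed with Python's stdlib `ipaddress` module; a sketch:

```python
# Partition a /24 into a static pool (.1-.100) and a DHCP dynamic pool
# (.101-.254). hosts() already excludes the network/broadcast addresses.
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
hosts = list(net.hosts())           # .1 through .254

static_pool = hosts[:100]           # 192.168.1.1  - 192.168.1.100
dynamic_pool = hosts[100:]          # 192.168.1.101 - 192.168.1.254

print(static_pool[0], static_pool[-1])    # 192.168.1.1 192.168.1.100
print(dynamic_pool[0], dynamic_pool[-1])  # 192.168.1.101 192.168.1.254
```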
A different approach is to create DHCP reservations for IPv4 addresses using end-device MAC addresses (for example - c8:d0:83:97:7d:2c) and leaving the devices at default DHCP-configured settings. The DHCP service may allow reserving addresses from within the DHCP scope (dynamic pool), but operator confusion is reduced by choosing reserved addresses outside the dynamic pool.

Pro:
  • No special device configuration is required - the effect is plug-and-play.
  • DHCP IP address reservations persist over device reboots or reconnections.
  • Default route, mask, DNS, and other settings are automatically supplied with all DHCP-supplied IPv4 addresses, reserved or pool.
  • A regular visitor's device may be configured for the local network without requiring direct access to the device, after obtaining its MAC address via DHCP lease data or ARP data.
  • Firewalls depending on constant IPv4 addressing may be configured concurrently with DHCP IPv4 address reservations.
Con:
  • The MAC address of each device needing a fixed address must be configured in a DHCP address reservation. The MAC address is usually printed on the device itself or may be obtained from current DHCP lease data or ARP data.
  • Configuring a new or changed DHCP reserved address may require restarting a router, potentially interrupting network connections.
Notes:
  • Local devices capable of IPv6 networking use Stateless Address Autoconfiguration (SLAAC) by default. Local DHCPv6 is rare.
  • A device using its MAC address to generate a Link-Local IPv6 Unicast Address per RFC 4291 can be considered to have a constant IPv6 address.
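The RFC 4291 derivation in the last note can be sketched in a few lines (the MAC below is the example address from this post; note that many modern systems substitute privacy or stable-opaque addresses instead of the plain EUI-64 form):

```python
# Build a link-local IPv6 address from a MAC via modified EUI-64
# (RFC 4291): flip the universal/local bit in the first octet, insert
# ff:fe in the middle, and prefix the result with fe80::/64.
import ipaddress

def mac_to_link_local(mac: str) -> ipaddress.IPv6Address:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                    # flip universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

print(mac_to_link_local("c8:d0:83:97:7d:2c"))  # fe80::cad0:83ff:fe97:7d2c
```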
 


For example, some routers will not route Bonjour/mDNS packets between wired and wireless networks or between 2.4GHz and 5GHz wireless networks.
Yep. I found this out the hard way years ago using a Verizon/Actiontec FiOS router. It wouldn't route multicast packets from the wireless network to the wired network (but would route them the other way).

This bug manifested for me as a problem with an iTunes shared music library (streaming from iTunes on one computer to iTunes on another). My desktop system (wired connection) could see my laptop (wireless) but the laptop could not see the desktop.

I worked around this problem by disabling Wi-Fi on the Actiontec router and attaching my own Linksys Wi-Fi router (in bridge mode) to one of its LAN ports. The Linksys router had no problem passing multicast traffic between the wired and wireless segments and the problem went away.
 


I have to agree, and I forgot about this. I have static IP addresses assigned to my Canon all-in-one and our aging HP laser jet.
I use static IPs on everything that allows them on our LAN. It also helps in keeping track of all the IoT devices, not to mention the useful stuff, with the device count > 20.
 


I use static IPs on everything that allows them on our LAN. It also helps in keeping track of all the IoT devices, not to mention the useful stuff, with the device count > 20.
If you have a lot of stuff on your LAN, then it also pays to set up a DNS server.

I used to recommend macOS Server for this, but the latest version for Catalina has removed every useful feature, so that's no longer a recommended option. Apple's migration guide seems to indicate that upgrading from an older version of Server will retain DNS capabilities (and it also describes how to migrate to maintaining your own installation of BIND), but I can't recommend something that is clearly deprecated, even if present.

Fortunately, you can install BIND using your favorite UNIX package installer: MacPorts and Homebrew both have it, but it appears that Fink does not.

Or, if you prefer, run your DNS server on something other than your Mac. I currently do it with a Raspberry Pi for my LAN.
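For reference, a self-hosted BIND setup needs little more than a small zone file plus a matching `zone "home.lan" { type master; file "/etc/bind/db.home.lan"; };` stanza in named.conf. A minimal sketch, with hypothetical zone name, file path, and addresses:

```
; /etc/bind/db.home.lan - minimal forward zone for a LAN
$TTL 86400
@        IN  SOA  ns.home.lan. admin.home.lan. (
                  2019100701 ; serial (bump on every edit)
                  3600       ; refresh
                  900        ; retry
                  604800     ; expire
                  86400 )    ; negative-cache TTL
@        IN  NS   ns.home.lan.
ns       IN  A    192.168.1.2
printer  IN  A    192.168.1.20
nas      IN  A    192.168.1.3
```

Run `named-checkzone home.lan /etc/bind/db.home.lan` after editing to catch syntax errors before reloading the server.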
 


If you have a lot of stuff on your LAN, then it also pays to set up a DNS server.
I used to recommend macOS Server for this, but the latest version for Catalina has removed every useful feature, so that's no longer a recommended option. Apple's migration guide seems to indicate that upgrading from an older version of Server will retain DNS capabilities (and it also describes how to migrate to maintaining your own installation of BIND), but I can't recommend something that is clearly deprecated, even if present.
Fortunately, you can install BIND using your favorite UNIX package installer: MacPorts and Homebrew both have it, but it appears that Fink does not.
Or, if you prefer, run your DNS server on something other than your Mac. I currently do it with a Raspberry Pi for my LAN.
Thanks, David. I have had DNS set up on a Mac Mini with macOS Server (along with CalDAV, and file sharing), but as you say, that is becoming problematic. I am reluctant to try manually installing BIND, after two spectacularly frustrating failures to manually install CalDAV (days lost in both attempts). I've been thinking about Linux, but it's hard to see how that could be any better.

Also, those devices that won't let me assign fixed IPs won't let me assign a DNS IP either (Sonos being a major offender).

As for Catalina, I don't even have a test volume anymore.
 

