
Linux

Hi Folks,

I followed some posts here and elsewhere regarding creating an Ubuntu boot drive with Etcher and a new USB thumb drive. It worked, and I now can run Ubuntu 18.10 from that USB drive.

But somehow during the installation, despite my effort to place the entire Ubuntu setup only on the thumb drive, I ended up with a modified EFI partition on my 2014 Mac Mini. That isn't a problem in itself: if I hold the Option key on boot and then select the EFI partition with the Ubuntu drive inserted, it loads and works mostly fine.

But once I do that, the EFI partition becomes the default startup disk; if I shut down, remove the Ubuntu drive, and boot up, the Mini loads what appears to be the Linux Grub loader on a black screen and waits for the nonexistent Ubuntu drive. I can type "exit" at the prompt, and the Mac boots into Yosemite (the original and only OS on this Mini). I then reset the Startup Disk in System Preferences, enter the admin password twice, and reboot, and it's fine -- until I boot into Ubuntu again.
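
For what it's worth, the Startup Disk reset can also be done from Terminal with Apple's bless tool; this is just a sketch of the equivalent command, run while booted from the Yosemite volume:

    # Set the currently booted macOS volume as the startup disk
    sudo bless --mount / --setBoot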

I found an article on the web where some folks erased that EFI partition, but I'm not sure I want to do that. Any suggestions?

Thank you, everyone on MacInTouch.
 


Ric Ford

MacInTouch
In case this helps other folks...

I'm making progress in a days-long odyssey to get a modern Linux running on a Mac Pro 1,1, despite Apple's highly questionable refusal to enable 64-bit booting on that computer, showstopper issues with Apple USB boot code, and mediocre documentation, support, and software packaging in the Linux world. Anyway... here are some things I've learned the hard way (i.e., with many, many hours of trial and error, far exceeding in time the value of the computer). In this case, I'm working with Ubuntu for various reasons, and I don't have much to offer personally regarding other distributions, although Elementary OS is quite Mac-like and may also be worth a look.
  • To install Linux on a Mac Pro 1,1, you want to download the relevant .iso image file, then burn that to a DVD-R for booting/installation. Tony Gray's ImageBurner 2.0 works for me. I used a 2011 MacBook Pro 13" to burn the DVD-R.
  • Here's a really tricky part: The stupid 32-bit ROM on the Mac Pro 1,1 won't boot a lot of Unix installers. I'll spare you the giant maze involved and just point you to a true Mac hero, Matt Gadient, who has prepared bootable .iso images that can install 64-bit versions of Linux on this extremely balky computer.
  • (You could also install 32-bit versions of Linux, e.g., Ubuntu 16.04.5 i386, and, despite indications to the contrary, this can be updated to Ubuntu 18 via the internal update mechanism. However, this leaves the system still in 32-bit mode, and some software (BeerSmith in particular) needs a 64-bit system.)
  • Trust me on this... It's best to remove all drives from the Mac Pro, except a new, empty drive on which you'll install Linux. Make sure it's smaller than any backup drive you'll use later. And, trust me on this, too... avoid using an SSD at this point, and just use a good, solid 7200-RPM hard drive, preferably a few years old, in good condition and happily compatible with the Mac Pro 1,1.
  • I discovered that this Mac Pro 1,1 didn't have as much memory as I thought - in fact, it was about a gigabyte. That's awfully skimpy and slows things down. OWC has memory upgrades available, and they're not as expensive as I'd feared (e.g. 8 GB for well under $100).
  • You really, really want to hook up the computer to a working Ethernet connection before you install. You also want a wired Mac keyboard and mouse plugged into a USB port. (You can use a USB extender cable to reach the computer if needed.)
  • Burn that installer DVD-R - in my case: ubuntu-16.04-desktop-amd64-mac-mattgadient.com.iso. (A command-line alternative is sketched after this list.)
  • Get the DVD into the working Mac Pro optical drive (e.g. by hitting the eject key on the Mac keyboard after power on), and boot it (e.g. by holding down the "C" key to select "CD-ROM drive" or using Option to select the system, which should show up as "Windows").
  • Follow instructions, taking the defaults, erasing the hard drive, and choosing to download updates and extra components from the Internet (Ethernet) in the process to save time and trouble later. This is going to be a long process (optical drives are slowww...), and you'll have to create an admin account, password, etc.
  • Eventually, you should be able to boot into the Linux system from the hard drive. Depending on the version installed, you may need to do a lot of software updates (e.g. to patch Firefox security holes, etc., etc.)
  • Standard Mac keyboard shortcuts may not work... because the Control key is substituted for the Command key. You can work around that.
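
As an aside, if you're burning from a Mac, the disc-burning step can also be done from Terminal with hdiutil instead of a GUI app - a minimal sketch, using the image named above:

    # Burn the installer image to a blank DVD-R (macOS)
    hdiutil burn ubuntu-16.04-desktop-amd64-mac-mattgadient.com.iso
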
I'm still working through this process one more time after many fits and starts, but I have gotten it all to work on a test drive, so it's been proven possible.

A subsequent stage may be swapping a working Ubuntu system drive inside a Dell Optiplex with the Ubuntu hard drive in the Mac Pro - I'm curious to see if there are any problems in doing that.
 


In case this helps other folks...

A subsequent stage may be swapping a working Ubuntu system inside a Dell Optiplex with the Ubuntu hard drive in the Mac Pro - I'm curious to see if there are any problems in doing that.
Watch out for UUIDs. I just learned yesterday that there are also potential issues with the computer's MAC [Ethernet] address being replaced by a reference to a UUID in some distros. There are some suggested fixes for the UUID "block," but I don't know if you would encounter it, or which fix would work.
 


In case this helps other folks...

A subsequent stage may be swapping a working Ubuntu system inside a Dell Optiplex with the Ubuntu hard drive in the Mac Pro - I'm curious to see if there are any problems in doing that.
Very cool! Keep us posted. Your last thought should be interesting.
 


Watch out for UUIDs. I just learned yesterday that there are also potential issues with the computer's MAC [Ethernet] address being replaced by a reference to a UUID in some distros. There are some suggested fixes for the UUID "block," but I don't know if you would encounter it, or which fix would work.
What is the actual problem here?

As I understand it, this feature has Linux generate a UUID for your Ethernet hardware (once, at setup time) and that UUID is used to associate the port's configuration file with it. This is better than older mechanisms (like device name or MAC address) because other kinds of IDs can vary across reboots or can be reconfigured in device drivers.

Does this cause a problem on Mac hardware for some reason? Or is it just something inconvenient to work with because it's new (much like how volume labels and UUIDs were confusing for storage configuration before people got used to working with them)?
 


Ric Ford

MacInTouch
What is the actual problem here? ...
We were talking about swapping Ubuntu boot drives between a Mac and a PC, and this came up as a potential issue. (It might be an issue even when swapping drives between two Macs or two PCs). I don't know if it is actually a problem... yet. Haven't tried the swap.
 


Ric Ford

MacInTouch
I'm still working through this process one more time after many fits and starts, but I have gotten it all to work on a test drive, so it's been proven possible.
I got 64-bit Ubuntu 18.04.1 LTS and BeerSmith 3 installed. BeerSmith seems to run OK. Everything else - in particular, Firefox, plus just basic operations - is horrendously slow with constant disk accesses. I'm guessing that 1 GB of RAM just isn't enough for modern software like this, running in 64-bit mode.

Options:
  • Buy more RAM and hope that it installs OK and solves the problem.
  • Trash this Mac that's cost me many thousands of dollars (hundreds of hours) in wasted time, and get a cheap PC to run Linux.
  • Install a lighter version of Linux and hope it supports the needed apps. (But that's more wasted time on top of all the water under the bridge.)
  • Try to get Ubuntu running on a Chromebook.
  • I've wasted time with SSDs that just never worked right with Linux (but were fine with OS X), so I'm reluctant to spend more time on that approach.
 


What is the actual problem here?
As I understand it, this feature has Linux generate a UUID for your Ethernet hardware (once, at setup time)
We were talking about swapping Ubuntu boot drives between a Mac and a PC, and this came up as a potential issue. (It might be an issue even when swapping drives between two Macs or two PCs). I don't know if it is actually a problem... yet. Haven't tried the swap.
The problem is mostly my very basic understanding. Early in my time with Linux, I used Clonezilla to clone an internal boot SSD to an external USB drive. Cloning is so easy and so powerful on Macs that, from my "research", I thought Clonezilla would work like Carbon Copy Cloner or SuperDuper. To some extent, it does. I successfully cloned my internal boot SSD to an external USB hard drive. Then, because an untested backup is no backup, I tried to boot from the external drive. Oops. It turns out I should have been able to boot the clone, but not with its source mounted. Then I got lost researching how to change the clone's UUID so I could test it without physically removing my boot drive . . .
Clonezilla said:
Disk to disk clone
//NOTE// You can only keep one of the disks in the same machine before you boot it. If you boot the machine with the source disk and the cloned destination disk on the same machine, the booting OS will be confused, since there are two identical file systems on the same machine. They have the same UUID, so the booting OS might mount the wrong file system.
/home/liquidat said:
UUIDs and Linux: Everything you ever need to know [Update]
UUIDs are probably best known in Linux as identifier for block devices. The Windows world knows UUIDs in the form of Microsoft’s globally unique identifiers, GUID, which are used in Microsoft’s Component Object Model.
. . .
As mentioned, UUIDs are most often used in Linux to identify block devices. Imagine you have a couple of hard disks attached via USB; then there is no persistent, reliable naming of the devices: sometimes the first USB hard disk is named "sda", sometimes it is named "sdb".
What I was trying to suggest Ric should look up before moving his working Ubuntu drive from the old Mac to a PC is the possibility that the UUID scheme would keep it from working there. No big deal to just give it a try, but we're both walking here in terra incognita, and it can take a long time to research and apply fixes when we're outside our own experiences.
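
For anyone who wants to experiment first, here's a rough sketch of inspecting and regenerating a volume UUID on Linux, assuming an ext4 filesystem; the device name is hypothetical:

    # List partitions with their filesystem UUIDs
    sudo blkid

    # Give a cloned ext4 partition a fresh, random UUID (it must be unmounted)
    sudo tune2fs -U random /dev/sdb1

    # Then update any references to the old UUID, e.g. in the clone's /etc/fstab
    sudoedit /etc/fstab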

It's my understanding that the UUID we're discussing here is the identifier for a hard drive. David's reference to the MAC Address / Ethernet is different. UUIDs are, from what I've read, hashes that may or may not be derived from the computer's MAC address.

I recently heard a far more sophisticated Linux user than I am discussing, on a podcast (this stuff goes by fast, so I'm recalling as best I can), his effort to create and clone virtual Linux machines, something he was used to doing with VirtualBox. A schema had changed, and instead of generating a unique MAC address for each VM so it could access the Internet, it had applied a UUID that, as I think he said, was identical across the clones. Which stands to reason: they're clones. Just something new that he had to work through.
 


I got 64-bit Ubuntu 18.04.1 LTS and BeerSmith 3 installed. BeerSmith seems to run OK. Everything else - in particular, Firefox, plus just basic operations - is horrendously slow with constant disk accesses. I'm guessing that 1 GB of RAM just isn't enough for modern software like this, running in 64-bit mode.

Options:
  • Buy more RAM and hope that it installs OK and solves the problem.
  • Trash this Mac that's cost me many thousands of dollars (hundreds of hours) in wasted time, and get a cheap PC to run Linux.
  • Install a lighter version of Linux and hope it supports the needed apps. (But that's more wasted time on top of all the water under the bridge.)
  • Try to get Ubuntu running on a Chromebook.
  • I've wasted time with SSDs that just never worked right with Linux (but were fine with OS X), so I'm reluctant to spend more time on that approach.
How often do you need to run Linux (or BeerSmith)? Would Linux in a VM keep some desk space open? A relatively recent Mac Mini would probably outperform whatever you can do with a Mac Pro 1,1.

I understand your pain. I had to give up a beloved Dual-G5 tower for a Mini.
 


Trash this Mac that's cost me many thousands of dollars (hundreds of hours) in wasted time, and get a cheap PC to run Linux.
As a data point, I see that Dell Small Business has a sale today on pre-configured Inspiron 3650 Intel Core i5-7400 quad-core Win 10 Pro desktops with 12GB RAM and 1TB, 7200 rpm drives for $449.99 with free shipping and a one year warranty.

According to Geekbench 4 benchmarks, this particular quad-core Dell doubles the single-core performance and beats the multicore performance of the eight-core, 3GHz MacPro1,1 configuration.

Between the Dell's warranty, the expected reliability of new hardware, the performance, and the value of one's time, it's hard to justify investing a lot of time into repurposing a MacPro1,1 with Linux for any reason aside from learning/hobbyist/zero cash budget purposes.

With respect to learning/hobbyist purposes, I do enjoy experimenting with ways to extend the useful working life of older hardware. In fact, at the moment I have a 2GHz Core Duo MacBook 1,1 with 2 GB RAM and an SSD sitting on my bench. With the SSD, it runs Snow Leopard astonishingly well.

While it's obviously much slower than any recent Mac in terms of processing power, the responsiveness of Snow Leopard with Snow Leopard-era software on an SSD is a bit of a revelation. Basic operations in the Finder, like opening windows, sorting long lists of files, and so on, actually feel snappier on that old machine than on my 2012 i7 MacBook Pro running macOS Sierra.

It's a useful reminder of just how enjoyable using an older Mac with Snow Leopard can be. However, I don't really need another Snow Leopard machine, so I plan to see if I can get a current release of a lightweight Linux or BSD distribution working well on it.
 


... What I was trying to suggest Ric should look up before moving his working Ubuntu drive from the old Mac to a PC is the possibility that the UUID scheme would keep it from working there. No big deal to just give it a try, but we're both walking here in terra incognita, and it can take a long time to research and apply fixes when we're outside our own experiences....
Ah, yes. These are always issues when moving a fully-installed system to a new computer. Even without UUIDs, there are various configurations that are created by the OS installer that may tie it to hardware. Those configurations need to be changed when moving the system to different hardware.

Hard drives (as you point out) are a good example.

Back in the "old" days, storage volume device names would be a function of their attachment point (e.g. Solaris used /dev/dsk/c0t3d0s6 to mean "SCSI controller 0, target 3, drive 0, slice (partition) 6"). This is easy to understand, but it means moving the drive to a new attachment point (a different SCSI controller or changing its ID) changes the device name, forcing you to change the corresponding configuration files (e.g. /etc/fstab).

Linux still does this but in a more confusing way, naming devices sequentially based on the device driver that controls them. So all your IDE drives are /dev/hd* (where the * corresponds to each port in the order the driver discovers them), SCSI drives (including SATA and FireWire) are /dev/sd*, and so on. So moving a drive to a new port changes its name (breaking configuration files). Furthermore, if you have multiple device drivers sharing the name space (e.g. motherboard SATA ports and a SCSI card), the names are affected by the order the device drivers load - a change in order can cause device names to change.

To work around the problem of volumes changing names as a result of changing attachments or device driver loading, Linux introduced the concept of volume labels, where you can assign a label to a partition. You then use the label in your configuration files instead of the device name. The system scans all your drives at startup and will automatically find each partition by its name.
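
As a sketch of how that looks in practice (the device name and label here are hypothetical):

    # Assign the label "data" to an ext4 partition
    sudo e2label /dev/sda2 data

    # /etc/fstab can then reference the label instead of the device name:
    # LABEL=data  /data  ext4  defaults  0  2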

This works great, but it can create problems if you end up with two partitions that share the same name. For example, if you performed a clean install on a new drive (with the same labels) and then attached the old drive (maybe to copy content off of it).

UUID naming solves this problem. A UUID is assigned to a volume when it is formatted, and it is (effectively) guaranteed to be unique. So you can move the drive anywhere you want and attach it alongside anything else without breaking your configuration files.

UUID naming can fail in two ways. One is if you made an image-clone of a volume. The original and the clone end up with the same UUID and could conflict if both are attached at the same time. The other problem is if you clone a system by copying files - your configuration files will break because the new volumes will have different UUIDs. So you'll have to get the new UUIDs and update your configuration files to use them.
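
If the file-copy case bites you, the repair is mechanical - a sketch, with a made-up UUID:

    # Find the new volume's UUID
    lsblk -f    # or: sudo blkid

    # Then reference it in /etc/fstab in place of the old one, e.g.:
    # UUID=2f6b3a8e-1c44-4d2b-9f0a-7e8d5c3b1a20  /  ext4  errors=remount-ro  0  1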

It looks like the latter caught you by surprise, which is understandable, since Linux distributions tend to automatically set up systems using labels or UUIDs. This is generally a good thing, until it isn't.

Incidentally, there is the exact same issue with network interfaces. Originally, you found them numbered sequentially based on device category (e.g. eth0, eth1, etc.), but that numbering can break if device drivers change or if ports are discovered in different orders (e.g. install a new Ethernet card and suddenly the new card becomes eth0, changing the name of what used to be eth0).

For a while, systems worked around this by associating device names with MAC addresses. This works around the problem of device drivers discovering ports in different orders, but it means your configuration will not be used if you replace a card with a new one (e.g. because the old one failed).

Linux systems today support "predictable" network interface names, which generally reflect the physical connection: for example, "enp2s0" means an Ethernet device on PCI bus 2, slot 0. This means you can swap a faulty card for a new one, and the configuration will predictably apply to the new card, as long as it is plugged into the same slot and has the same number of ports. But it also means that moving a card to a different slot will break its configuration.
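
To see which names your system actually assigned (with MAC addresses alongside):

    # List all network interfaces with their predictable names and MAC addresses
    ip link show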

Neither predictable names nor MAC addresses solve all the problems. No naming scheme can both keep a configuration attached to a card when it moves to a different slot and keep that configuration on the slot when the card is replaced. But that's just the nature of the problem.

But there's one final problem that neither of the above solves: virtual interfaces. These are especially important if you're running VMs or configuring virtual networks. You may end up with many different interfaces that all have the same physical attachment and an unpredictable (because it's software-generated) MAC address. The use of a UUID for a network interface solves this. A particular network interface (whether physical or virtual) will retain the same UUID no matter how its attachment may change, and without regard to its MAC address. But this doesn't solve all problems: replacing a network interface card with a new card will break configurations, as will moving/copying the configuration files to a new computer (moving the hard drive or cloning the files), since the new network interface(s) will have different UUIDs.

Ultimately, what we're looking at is a case of modern solutions to old problems that end up creating new problems. Hopefully the new problems don't happen as often as the old ones, but that's not much consolation when they happen to you.
 


Ric Ford

MacInTouch
How often do you need to run Linux (or BeerSmith)? Would Linux in a VM keep some desk space open? A relatively recent Mac Mini would probably outperform whatever you can do with a Mac Pro 1,1.

I understand your pain. I had to give up a beloved Dual-G5 tower for a Mini.
This is actually for friends with zero budget who are using a free Dell Optiplex 360 I set up with Ubuntu (and they use BeerSmith). Their second (far older) PC died, and I wasn't using my Mac Pro, so I wanted to set it up with modern software and give it to them.

I just decided to buy the 8GB RAM upgrade from OWC, because I want to see for myself how that will affect the horrendous performance I'm seeing. (R&D expense. :-)

The Dell they're using runs 64-bit Ubuntu 16.04 LTS well with its 4 GB of RAM and Intel Core 2 Duo (E7500) at 2.93 GHz.

My Mac Pro 1,1 currently has 1 GB of RAM and four Xeon cores (two dual-core Xeon 5150s) at 2.66 GHz, and it's unusably molasses-slow with 64-bit Ubuntu 18.04 LTS.

The Dell and the Mac both have the same Samsung 400GB 7200-rpm drives installed. (I did get an improvement with a Crucial MX200 SSD in the Mac Pro, but it was flaky, and I abandoned that approach.)
 


Ric Ford

MacInTouch
I understand your pain. I had to give up a beloved Dual-G5 tower for a Mini.
That was my production system for many years, and I had some very fine audio software running on it, too. Unfortunately, that system was never as reliable as I'd have liked, and it eventually got so flaky, I was running it in reduced CPU power mode. Not fun. And the Samsung 840 EVO I installed was also very problematic in the end, though it had made a wonderful difference in performance.
 


I got Ubuntu 18 and BeerSmith installed. BeerSmith seems to run OK. Everything else - in particular, Firefox, plus just basic operations, is horrendously slow with constant disk accesses. I'm guessing that 1GB of RAM just isn't enough for modern software like this.
Even the "lean and mean" Linux distros (e.g., Peppermint) suggest a minimum of 2GB.

Though there's a version of Raspbian for x86, including Macs (it was covered in The MagPi).
Hoping the 8 GB RAM you ordered keeps that old warhorse from the rendering service.
 


Well, since we have a Linux thread now, I've been meaning to ask - how does Linux compare with macOS for running a server (file sharing and CalDAV/CardDAV mainly)? Does it run any leaner (RAM, CPU usage, storage usage)? It couldn't be any more user-unfriendly than macOS with the latest Server, at least for CalDAV.
 


Well, since we have a Linux thread now, I've been meaning to ask - how does Linux compare with macOS for running a server (file sharing and CalDAV/CardDAV mainly)? Does it run any leaner (RAM, CPU usage, storage usage)? It couldn't be any more user-unfriendly than macOS with the latest Server, at least for CalDAV.
There are some nice options, from Ubuntu Server installed with the GUI to OpenMediaVault (Debian-based, and pretty straightforward). FreeNAS is solid but is very hardware-particular. But I’ve found I can do almost everything with OpenMediaVault, and just use a Mini for OpenDirectory and the machine management functions. I’m happy with that, and it’s let me use much more flexible hardware options (e.g. 12 internal drives is nice).
 


Well, since we have a Linux thread now, I've been meaning to ask - how does Linux compare with macOS for running a server (file sharing and CalDAV/CardDAV mainly)? Does it run any leaner (RAM, CPU usage, storage usage)? It couldn't be any more user-unfriendly than macOS with the latest Server, at least for CalDAV.
iCloud runs on Amazon S3, Google Cloud, and Microsoft Azure - likely on Linux instances, and unlikely on a version of macOS.

Synology NAS (a Linux box) supports CalDAV.
Synology said:
Calendar
CalDAV clients
Calendar allows you to synchronize events with various CalDAV clients, such as Apple Calendar, Outlook, or Thunderbird. You have no need to update the schedules on all your different devices; you manage them on a single platform, Calendar.
Probably a lot more difficult to set up a "Linux server" than a Synology NAS.
Ubuntu said:
CalendarServer
This page explains how to install Apple's Darwin Calendar Server (also called DCS, and the basis for their iCal Server).

There are many other calendar servers, so the title of this page is misleading. There are also other CalDAV servers, including DAViCal, some of which work better than the Darwin Calendar Server. The reader is advised to research other calendar servers and keep in mind the limited scope of these instructions.
 


DGH

My base system is Debian Jessie 8.11. My goal is to get some version of Stretch (9.x) up and running on an EFI-based system. I upgraded the video card to a Radeon, one of the cards Stretch requires for this purpose.

Here is a posting detailing some Grub boot options which help specifically with the Mac Pro 1,1:

I have found that I can mount the Debian installer DVD and pick out Grub menu stanzas for specific options to install Debian. video=efifb and noefi seem to be very useful options to add to any non-booting Grub stanza for this Mac Pro 1,1.
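
For reference, here's roughly what such a stanza looks like with those options added to the linux line (the kernel paths and UUID below are illustrative only); you can also press "e" at the Grub menu to add the options temporarily to an existing entry:

    menuentry "Debian GNU/Linux" {
        search --fs-uuid --set=root 3d5a7c1b-2e4f-4a6d-8b9c-0e1f2a3b4c5d
        linux /boot/vmlinuz-4.9.0-8-amd64 root=UUID=3d5a7c1b-2e4f-4a6d-8b9c-0e1f2a3b4c5d ro quiet video=efifb noefi
        initrd /boot/initrd.img-4.9.0-8-amd64
    }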

While experimenting with the noefi Grub menu option, I managed to get the system in a state where I had to reset the NVRAM (command-option-P-R at power on).

I count on this machine as my super-reliable file server with maczfs (on Snow Leopard) and zfs-fuse (on Linux) accessing a 3-way ZFS mirror shared between the two operating systems.

After this experience with noefi, I'm reluctant to try this on my file server. Debian Jessie is looking good enough.
 


Well, since we have a Linux thread now, I've been meaning to ask - how does Linux compare with macOS for running a server (file sharing and CalDAV/CardDAV mainly)?
I had several servers running on macOS that I've migrated to Linux, specifically various iterations of Turnkey Linux. Everything's been working great.

I think it would be a daunting task to try and build a Linux server completely from scratch, but Turnkey greatly simplifies things. They provide something like 150 different versions, each one tailored for a specific usage. For example, I used the LAMP (Linux/Apache/MySQL/PHP) version of Turnkey to replace an old Mac-based web server. Aside from Apache/MySQL/PHP all being preconfigured and ready for use, the install also includes a preconfigured web-based GUI, web-based CLI interface, and a web-based PHP interface.

Basically, install Turnkey, reboot, and you have a fully functioning server that's ready for you to tailor to your specific needs. Best of all, Turnkey is free and runs on pretty much anything... previously had it running on some old Mac Minis; now it's running on 9-year-old PC hardware.
 


As a data point, I see that Dell Small Business has a sale today on pre-configured Inspiron 3650 Intel Core i5-7400 quad-core Win 10 Pro desktops with 12GB RAM and 1TB, 7200 rpm drives for $449.99 with free shipping and a one year warranty.
I have one of those, and it's been a good, reliable PC. I tried putting an SSD in it, but I must be too much of a Mac-head, 'cause I couldn't get it to work. Oh well, the 7200-rpm spinner is plenty fast for what I am doing.
 


Well, since we have a Linux thread now, I've been meaning to ask - how does Linux compare with macOS for running a server (file sharing and CalDAV/CardDAV mainly)? Does it run any leaner (RAM, CPU usage, storage usage)? It couldn't be any more user-unfriendly than macOS with the latest Server, at least for CalDAV.
So, I found this mentioned back in the archives here: #143

In short: NethServer.


Cheers,
Jon
 


... Well, for many years I've run one or another Linux server for several things. I buy a "new" one once in a while for various reasons. Currently, I'm running an HP ProLiant DL380 G7, which came with two 6-core Xeon E5645 CPUs at 2.40GHz, no drives, and 144 GB of RAM (yes, 144!) for $499. I mean, you can hardly buy 32 GB of RAM for that price.

Then I added eight 300GB, 10K SAS drives, as it has that many internal drive slots. I can use SATA drives in those same slots, and maybe I will put eight 1TB SSDs in there one day. The eight drives were another $500 at the same place. The drives are arranged in one 300GB RAID1 array and one 900GB RAID10 array using the built-in HP RAID controller (Smart Array P410i).

When I bought it, the machine was 3 years old, and I have it for about 2.5 years now.
It is extremely quiet; you can hear it hum when you stand right next to it, but outside its room, you can't. That changes when I install another HP RAID controller that, for some unknown reason, doesn't have a temperature sensor and makes the machine think, "OMG, that card might be overheating, so turn on the fans at full blast." Since I have no basement right now - we just sold our house and are currently renting - I decided to live without the external drive cage and the second controller.

The external drive cage is also HP and cost me $138 and can hold 25 2.5'' SAS drives with one drawback: I had to buy two for the price of one (on eBay). I had 25 x 146GB SAS drives in there, but they were too noisy, and I can live without them for now.

For backups, I used an OWC Mercury Elite something in a 4 x 3.5'' drive enclosure with eSATA, FW800 and USB 2.0 ports, which match the HP USB ports just fine. Adding arbitrary eSATA cards gets us back to the "OMG, no temperature sensor" issue, so USB it is (was).

The most disturbing issue with these OWC enclosures is that I had one of the four drives die about every 2 months, and I finally got tired of it. So I have a base backup of the machine, and since it's not doing all that much these days (running a VM for a friend that he can access from the outside is the biggest thing it does), I'm not doing any type of regular backup. RAID 10 gives you some assurance that it's not suddenly going to die, and the controller usually tells me what the status of the drives is. It's configurable from the Linux command line, including checking the status - meaning that it can do that once a day and email it to me, so I don't have to manually log in and check.
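
For the curious, that status check looks roughly like this with HP's hpacucli command-line utility (a sketch; the email address is a placeholder, and the cron line assumes a working local mailer):

    # Show the status of all Smart Array controllers and their arrays
    sudo hpacucli ctrl all show status

    # Dropped into /etc/cron.daily, something like this emails a daily report
    hpacucli ctrl all show status | mail -s "RAID status" admin@example.com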

The drives are a lot cheaper now; you can get 300GB 10K SAS drives for $23 (quick look at eBay) and I just saw somebody selling 10 900GB 10K SAS drives for a total of $9.99 (that's about $1 apiece - happy bidding. :)

As an OS, I'm using CentOS 7 for the simple reason that I have always used Red Hat one way or another (the old free Red Hat 1 - 9 versions, then Fedora, which was a bit unstable, and now the free clone, CentOS). You might not get everything out of the box, and some stuff requires some tinkering, but generally, it gets the job done.

Happy Linuxing.
 


As a data point, I see that Dell Small Business has a sale today on pre-configured Inspiron 3650 Intel Core i5-7400 quad-core Win 10 Pro desktops with 12GB RAM and 1TB, 7200 rpm drives for $449.99 with free shipping and a one year warranty.
Or, even more economically, purchase a refurbished system. I’ve used HP Elite 8x00 desktops with good success - multiple SATA ports and space for at least two drives in a small form factor (SFF) case.

I’ve seen an 8200 or 8300 around $100 from NewEgg. I purchased a refurbished 8000 that I then ran for almost 8 years until, presumably, the power supply died. Not too bad for a ~$100 machine.

Cheers,
Jon
 


Or, even more economically, purchase a refurbished system. I’ve used HP Elite 8x00 desktops with good success - multiple SATA ports and space for at least two drives in a small form factor (SFF) case.
I’ve seen an 8200 or 8300 around $100 from NewEgg. I purchased a refurbished 8000 that I then ran for almost 8 years until, presumably, the power supply died. Not too bad for a ~$100 machine.
Good point about the small form factor (SFF) systems! I'm a bit surprised there hasn't been more discussion here of non-NUC SFFs as alternatives to Mac Minis. While many SFFs are significantly larger than Minis, they are much smaller than standard-sized desktop PCs, so they can occupy a nice middle ground between price, compactness, and power.

Some current Dell examples are the 2.6 lb, 1.4" x 7.0" x 7.2" OptiPlex 5050 Micro and the 7.0 lb, 3.64" x 11.42" x 11.5" Inspiron 3472 Small Compact Desktop. As you mentioned, you can get some great deals on refurbished units. For folks on very tight budgets, off-lease refurbs (typically 3 years old) can be particularly good values if purchased from a reputable source.

Note: I only mention Dell models because I have a little more recent direct experience with them than with other brands. I have no connection with Dell other than being a recent customer. HP and others also have good options in this space.
 


I have one of those, and it's been a good, reliable PC. I tried putting an SSD in it, but I must be too much of a Mac-head, 'cause I couldn't get it to work. Oh well, the 7200-rpm spinner is plenty fast for what I am doing.
Hmm. I added a SanDisk SSD to mine and kept the original spinning HD installed without a problem. I needed to buy the appropriate internal data cable, put an OS on the SSD, and set the boot order in the BIOS. Are you sure the SSD was good?
 


There are some nice options, from Ubuntu Server...
Thanks. To clarify my question, I have a 2014 Mac Mini running macOS Server. Besides the obvious, dubious future of that, lately it has been having "Memory Pressure" issues, as well as being generally unresponsive (even though I'm not asking it to do anything more than when I set it up in 2014). Hardware upgrades are one possible answer.

My question is: would a Linux server (file-sharing and CalDAV, mainly) run significantly leaner? Or, either way, do I need to upgrade hardware?
 


Ric Ford

MacInTouch
My question is: would a Linux server (file-sharing and CalDAV, mainly) run significantly leaner...
It should. I talked with a friend tonight who runs lots of Linux instances in virtual machines, and he said that server versions require far fewer resources than the desktop versions with all their extra software (let alone big demanding apps like Chrome and Firefox).
 


I talked with a friend tonight who runs lots of Linux instances in virtual machines, and he said that server versions require far fewer resources than the desktop versions with all their extra software (let alone big demanding apps like Chrome and Firefox).
Keep in mind that the only difference between server and desktop versions of major Linux distributions is the set of packages that are preloaded/installed.

If you have a desktop installation that will primarily be used as a server, it's not a big deal to configure it to boot to a text console instead of a GUI login screen. This greatly reduces the memory and CPU requirements.

You can still manually start a desktop from the text console (type "startx") when you need it. A "logoff" from the GUI will quit the desktop, returning you to your text console.
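
On a systemd-based distribution (e.g., recent Ubuntu), the switch looks like this (a sketch):

    # Boot to a text console instead of the graphical login screen
    sudo systemctl set-default multi-user.target

    # Switch back to booting into the GUI later
    sudo systemctl set-default graphical.target

    # From the text console, start a desktop session on demand
    startx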

Furthermore, if you will primarily be accessing it remotely, you can install software like x2go, which will dynamically create GUI sessions for remote-access connections and destroy them when the sessions are terminated.
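
(On Ubuntu/Debian, the server side of x2go lives in the standard repositories; the package names below are the usual ones, but check your release:)

    # Install the x2go server components
    sudo apt install x2goserver x2goserver-xsession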

I do this a lot for Linux installations running in VMs.
 


Another interesting choice if you're setting up a PC to act as a server is to set it up as a bare-metal host for virtual machines.

You can download and install VMWare's free vSphere hypervisor. Once installed, it is a platform where you can create and manage VMs, each of which can run whatever OS you choose to install into it. We use this at work on a few servers to host several dozen Linux VMs.

You can also create virtual networks to connect these VMs in any way you want. (Connectivity to the rest of the world can be set up by configuring your physical Ethernet ports as members of virtual networks.)

It can be really convenient to create and destroy VMs on an as-needed basis. It lets you experiment with all kinds of platforms and configurations that you might be very reluctant to try out otherwise.

The only significant (to a non-enterprise user) difference between the free and commercial version of vSphere is that the free version won't let you configure more than 8 CPU cores per VM, while the commercial version has no limit (although there's probably no point in configuring a VM with more cores than the host CPU has).

There's a bit of a learning curve to set up and install vSphere, but if you're familiar with the basic concepts of working with VMs (e.g. from VirtualBox or a desktop version of VMWare), you're probably more than halfway there.
 


Ric Ford

MacInTouch
Keep in mind that the only difference between server and desktop versions of major Linux distributions is the set of packages that are preloaded/installed.
Interesting point, and maybe I should have mentioned that I checked all the boxes for extra packages when installing the 64-bit Ubuntu 18.04 Desktop system.

I'm waiting for delivery of 8 GB of Mac Pro 1,1 RAM - hopefully tomorrow - hoping that it will transform this unusable molasses of an Ubuntu Desktop system into a fast set-up. (Of course, this computer will run Mac OS X 10.6 brilliantly without adding extra RAM, but that long-unsupported system seems like a poor choice for a computer that will be used a lot for Internet access.)
 


Interesting point, and maybe I should have mentioned that I checked all the boxes for extra packages when installing the 64-bit Ubuntu 18.04 Desktop system.
Yeah, that will definitely up your hardware requirements.

Another thing to consider is to install a "flavour" distribution - this is the same OS, but with an alternative set of preloaded packages. You can find links to all of Ubuntu's supported flavours here:

  • Kubuntu - replace the desktop with KDE
  • Lubuntu - replace the desktop with LXQt
  • Ubuntu Budgie - replace the desktop with Budgie
  • Kylin - tuned for Chinese users
  • Ubuntu MATE - replace the desktop with MATE (the current evolution of GNOME 2)
  • Ubuntu Studio - tuned for media creation
  • Xubuntu - replace the desktop with Xfce
FWIW, my preferred Linux distribution these days is Xubuntu. I find Xfce to be powerful, full-featured, and much lighter weight than Ubuntu's default desktop (Unity or GNOME 3, depending on what version you install).
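
Incidentally, you don't have to reinstall to try one of these: each flavour's desktop can be added to an existing Ubuntu system via its metapackage. A sketch, using the Xfce flavour:

    # Add the Xubuntu (Xfce) desktop to an existing Ubuntu installation
    sudo apt install xubuntu-desktop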
 


Hmm. I added a SanDisk SSD to mine and kept the original spinning HD installed without a problem. I needed to buy the appropriate internal data cable, put an OS on the SSD, and set the boot order in the BIOS. Are you sure the SSD was good?
Yup, did all that stuff (except the OS). It wouldn't format the drive. I'm using it as an external now, so the SSD is fine.
 


You can download and install VMWare's free vSphere hypervisor. Once installed, it is a platform where you can create and manage VMs, each of which can run whatever OS you choose to install into it. We use this at work on a few servers to host several dozen Linux VMs.
And a variation on this - install the hypervisor on a cheesegrater Mac Pro, and you can run virtualized macOS instances. Take a look at this blog: virtuallyGhetto.
 


Ric Ford

MacInTouch
I got 64-bit Ubuntu 18.04.1 LTS and BeerSmith 3 installed. BeerSmith seems to run OK. Everything else - in particular, Firefox, plus just basic operations - is horrendously slow with constant disk accesses. I'm guessing that 1 GB of RAM just isn't enough for modern software like this, running in 64-bit mode. ...
So, I bought 8 GB (2 x 4GB) of RAM from OWC for $68 with shipping, and it arrived today. Only then did I discover that the Mac Pro already had 6 GB in it! However, only 1 GB was showing up. Apparently, a friend who used it (and upgraded the RAM) before had installed the memory the wrong way.

And I went through a whole bunch of trial and error trying to get all this memory installed the right way, confused by instructions for a 2008 Mac Pro that were the opposite of what is required for an original Mac Pro 1,1. (In the original Mac Pro, a memory card pair has to go on one board, while the 2008 Mac Pro splits a single pair across the two boards. There may be additional issues with the order of pairs vs. capacity, etc.)

In the end, I had 14 GB of RAM (visible in both Ubuntu 18.04 LTS and Mac OS X 10.6.8), and Ubuntu got out of molasses mode and started working usably fast.

However, after years of using SSDs instead of hard drives, the system still feels a little slow accessing the 400GB Samsung hard drive, so now I'm trying to figure out if I can get a Crucial MX200 SSD working successfully as the Ubuntu boot drive (after having some problems in that area previously).
 



Ric Ford

MacInTouch
However, after years of using SSDs instead of hard drives, the system still feels a little slow accessing the 400GB Samsung hard drive, so now I'm trying to figure out if I can get a Crucial MX200 SSD working successfully as the Ubuntu boot drive (after having some problems in that area previously).
I checked the MX200 SSD with DriveDX and found that it had old MU01 firmware. I had to download Crucial's .iso firmware updater and burn that to a bootable CD-ROM, which I ended up doing on my 2011 MacBook Pro when it wasn't easy to do in Linux. With the SSD in the Mac Pro, I booted that disc, and the firmware update proceeded without trouble.

I installed Ubuntu 16.04 LTS on the MX200, which "feels" much better than the hard drive despite being limited to just 3Gbps in the SATA bay. Then I did some updates and installed BeerSmith, and I'm now in the process of updating to Ubuntu 18.04 LTS.

One of the things that confused me previously was Ubuntu's updater app, which goes into long pauses that make it look stuck or failing, along with some other confusing issues.
 


Ric Ford

MacInTouch
I installed Ubuntu 16.04 LTS on the MX200, which "feels" much better than the hard drive despite being limited to just 3Gbps in the SATA bay. Then I did some updates and installed BeerSmith, and I'm now in the process of updating to Ubuntu 18.04 LTS.
The Mac Pro 1,1 is running nice and fast with 64-bit Ubuntu Desktop 18.04.1 booted off a 250GB Crucial MX200 (firmware MU05), installed in one of the Mac Pro's standard SATA bays, with 14 GB of RAM installed now. Firefox is plenty fast, and BeerSmith is running, too.

I ran Geekbench tests (which was much more complicated than running them on macOS):

This old, slow system actually feels fast with Linux, adequate RAM, and the SSD.

There's a long boot delay while the startup code apparently thrashes around looking for the startup drive (a problem I haven't managed to fix), but a simple workaround solves the problem: holding the Option key at boot allows instant selection of the Linux boot drive (labeled "Windows"), and it boots quickly enough from there.

One last issue is the Mac Pro's need for a DVI-D monitor or adapter. They're becoming less common, but I see this 24" Acer display on Amazon for $110 with DVI-D and an IPS panel: Acer R240HY.
 


  • Kubuntu - replace the desktop with KDE
  • Lubuntu - replace the desktop with LXQt
  • Ubuntu Budgie - replace the desktop with Budgie
  • Kylin - tuned for Chinese users
  • Ubuntu MATE - replace the desktop with MATE (the current evolution of GNOME 2)
  • Ubuntu Studio - tuned for media creation
  • Xubuntu - replace the desktop with Xfce
FWIW, my preferred Linux distribution these days is Xubuntu. I find Xfce to be powerful, full-featured, and much lighter weight than Ubuntu's default desktop (Unity or GNOME 3, depending on what version you install).
When I spent a bunch of time experimenting with Linux for personal use about two years ago, I tried every flavor I could and ultimately ended up liking Mint the best, which is, of course, based on Ubuntu. I found Elementary somewhat Mac-like but a bit too basic for me.

I realize that this thread is [partly] about using Linux as a server, but how do you compare the above with what I mentioned? What do you like best for plain old users, techie and/or non-techie? (I'm somewhere in the middle, but can handle the "techie" stuff when I need to.)
 


I realize that this thread is [partly] about using Linux as a server, but how do you compare the above with what I mentioned? What do you like best for plain old users, techie and/or non-techie? (I'm somewhere in the middle, but can handle the "techie" stuff when I need to.)
One Linux distribution I have used and like, and which gets little in the way of coverage, is Fedora, which is based on Red Hat Linux. I have not used it in a while now, since my retirement.

We used to run Red Hat Linux on our back-end servers for all sorts of needs, such as to run Darwin Streaming Server. We did all we could to avoid installing Windows Server versions and did as much as we could with OS X Server and Red Hat Linux.

In the end, I think each person just needs to try a few in virtualization to see which one strikes a positive note.

Have been tempted to dig out the old Power Mac G5 and try to get a flavor of Linux running on that, but it is perhaps a bridge too far at this stage in life.
 


When I spent a bunch of time experimenting with Linux for personal use about two years ago, I tried every flavor I could and ultimately ended up liking Mint the best, which is, of course, based on Ubuntu. I found Elementary somewhat Mac-like but a bit too basic for me. I realize that this thread is [partly] about using Linux as a server, but how do you compare the above with what I mentioned? What do you like best for plain old users, techie and/or non-techie? (I'm somewhere in the middle, but can handle the "techie" stuff when I need to.)
My experience with Mint has been better than any other distro. That's partly because I find the Cinnamon desktop just doesn't go wonky; I can configure it more to my preference.

The Intel Hades Canyon NUC I bought on Black Friday needs kernel 4.18 and updated Mesa drivers that aren't available in Mint. Trying to boot Mint 19.1 results in a black screen. Thus, I've tried most of the "flavours" in 18.10 - and settled on Budgie as my personal preference. I've found the KDE Plasma interface sometimes has issues with multiple monitors, and while it is very "pretty" and very customizable, it is so customizable I got lost in the options and had difficulty finding my way back. Unique among the Ubuntu file managers, Dolphin in Kubuntu just wouldn't find my Synology NAS. After Googling around how to connect Kubuntu to my network, I gave up and installed the Caja file manager from Mate, and, lo, that worked. Finding no advantage to Kubuntu, I moved along.

Mate worked, but Mate is pushing Snaps. One of the joys of Linux (at least as I found it in 2015-16 with Cinnamon) is the power I had to set my fonts, icons, and themes exactly as I want them. Thus far, the "new" Snaps, Flatpaks, and AppImages simply ignore UI settings. Some look like old DOS programs. Others operate glaringly independently of user desktop theming. In Mate, I had my 4K monitor set at 1920 x 1080 so I could actually see UI elements without magnification, and the first Snap I downloaded ignored that and presented itself in the native 3840 x 2160. Microscope not provided.

I tried Xubuntu, Lubuntu, and Ubuntu Studio (which, like Xubuntu, is based on the Xfce desktop). Spoiled by Cinnamon, I found them sparse. The mainstream Gnome was driving me nutz, because I could not turn off the window snapping, in which, if a window is moved to and touches the top or a side, it zaps itself into half screen (top) or quarter screen (side). There used to be a Gnome extension to shut that off, but I couldn't get it to work in 18.10.

Before settling into Ubuntu Budgie, I gave the Arch-based, rolling-release Manjaro and its version of the Cinnamon desktop a try. It was great to have Cinnamon and familiar control of the UI. The Mint team has been working over stock programs, such as the file viewer and file manager, and the latest and best of those came along in Manjaro. I really like what Mint has done with its Pix viewer/editor, and their PDF viewer allows some basic editing. For whatever reason, Manjaro just wouldn't run the GIMP graphics program, and every time I tried a different way to launch it, my system just locked up tight and had to be restarted at the power switch. That's likely a hardware-specific problem.

Two weeks in, I'm finding Ubuntu Budgie works well, doesn't get in my way, and has a full-featured "Software Center" that offers both "standard" repo applications (which will reflect a user's UI settings) and a variety of Snaps - often Snap versions of the same applications, but delineated so it is possible to choose which one installs. If the repo version is available, that's what I choose.

Right beside this Hades Canyon NUC is my old i5 standby from 2015, running Mint Cinnamon 18.3. It's reliable, tested, and I'm not tempted to toss it out!
 

