Introduction to a Self Managed Life: a 13 hour & 28 minute presentation by FUTO software

Preface

Dedication

Thank you to Tim Gilles, aka Slipperman, whose remarks on what makes someone a “real professional” stuck with me for a lifetime. I listened to Tim on the Mixerman Radio Show. Tim wanted to demystify his craft in a way that anyone could understand; that would inspire EVERYONE to pick up a microphone & a tape machine and give it a shot themselves. He did this with his own “unique” writing style. His work inspired me to do the same with everything I’ve done, from board repair to self-managed servers. Tim passed away two years ago. I hope his legacy lives on through everyone who tries to open doors for the next generation rather than gatekeep information via ego-inflating elitism.

Intro

I started using GNU/Linux in 2002, back when I saved up the $79.99 necessary to buy SuSE Linux 8.1 Professional as a boxed set from the Best Buy across the street from the Staten Island mall for my 14th birthday. I started hosting my own servers in 2005, and I’ve been putting together systems for my own business’s use since early 2011. I didn’t do everything outlined here immediately; it was slowly built piece by piece over a long time. I never documented it in a way that would allow my grandma to use it. In 22 years, I can’t remember reading GNU/Linux documentation that felt like it was designed for normal people. That’s what I’m looking to do here.

From 2002 to the present, two things remain true:

  • You can do cool things with GNU/Linux.
  • These cool things are hidden behind a labyrinth of
    • Half-baked software.
    • Horrible UI.
    • Forum elitists & gaslighting assholes who will make you think YOU’RE the crazy one for expecting things to work.
    • People that will tell you to “RTFM” with no regard for whether that documentation actually works.
    • Black boxes. I mean literally hidden behind actual black boxes. For six months. Unfixed. On the stable version of a server operating system (that bug is present in 24.10 long-term-stable even today).

So much of the open source user experience is not designed for normal people. Whether it was using NDISwrapper 20 years ago to get wifi to work or messing with SCSI emulation to burn a CD, GNU/Linux is pain. It’s all pain.

It’s painful enough that people will happily trade their data, sovereignty, privacy, and their rights to avoid ever having to deal with it; and I can’t blame them.

This has to change. As of 2024, most of you live your life:

  1. Dependent on closed source software.
  2. Running on someone else’s server where you can be kicked off at any time.
  3. Forced into binding arbitration, or your device won’t work anymore.
  4. With no privacy.
  5. Training AI with your creations.

Now is a time like no other for you to feel empowered to build systems that you control & understand.

My goal with this guide is not to tell you the way you HAVE to do something, or to imply that my way is the best. My goal is to inspire you by showing you what’s possible. You don’t have to be a computer engineer or someone with an IQ of 160 to figure this all out. And, admittedly, to inspire capable developers to look at the pain points scattered throughout this guide (of which there are many) and decide “enough is enough; let’s make this better”.

The fun here is in building your own system, your own way. This is my sovereign cloud; there are many like it, but this one is mine. I can’t wait to see how you build yours.

Why Build Your Own Sovereign Cloud?

Apple and Google push users into closed ecosystems while removing options for personal control over data. Think back to when smartphones had microSD card slots, so you could store your photos, videos, & music locally & cheaply. As these companies started pushing paid cloud services, microSD slots disappeared from every phone. Apple no longer gives you a working “delete” button, and Google has mistakenly flagged people as criminals for sending photos a doctor requested of their sick child during COVID lockdowns. These issues come up because you don’t own the software or services you’re using. If you can’t review the source code, it’s not your software. If you can’t host the service yourself, it’s not really yours.

FUTO is looking to change that. We want to provide solutions that let you take back control, whether it’s running your own cloud or hosting your own services. Many of these services have 1% adoption (if they’re lucky!) because of the barriers to use.

One example is Immich; it’s photo gallery software that uses local AI, so you never have to worry about your personal data being analyzed by some remote server. It’s incredibly fast & efficient! I think it’s the best in its field. Right now, if you want to use it, you need to set up your own GNU/Linux server and use Docker to get everything running. You either become a GNU/Linux sysadmin or you sell your data (and your soul) in exchange for a half-decent UI.

Until now!

FUTO’s belief in self-managing your own servers.

We believe that any piece of software we create or offer that has a client must be accompanied by server source code that allows you to run your own server. You have to have control over your devices. At the same time, if we throw the source code at you and tell you “have fun!”, have we really enabled you to run your own system? That’s akin to throwing a party and saying “hey, anyone who wants to join us is allowed in!” when you only tell your best friends where the door is. We want the door to the party to be open to everyone; and for all of you to know where it is. So, let’s see if we can put spicy brownie’s concerns to rest.

The Rabbit Hole to Hell

I’m going to show you exactly how to set this up because that’s been a common question in the comments. I’m going to show you how to set up Immich. To do that, I need to show you how I get my files from my phone to my server. If I’m doing that, I’m connecting to my server from outside, which means I have to show you how to set up a VPN tunnel. I’m not going to forward ports for all these random services. If I’m doing that, I might as well show you how to set up a router that will always get updates, which means building your own.

While I’m at it, I might as well show you how to block all ads, even when you’re connected from your phone. While we’re in there, let’s show you how to set up something similar to Google Docs, Google Sheets, calendar, contacts, home surveillance with notifications, self-hosted mail, a business phone system that curses out annoying customers for you, and everything else.

Warning: This becomes a rabbit hole very quickly because there are so many items to cover. I’m not going to breadcrumb you. I want to provide you with everything, which means we have to start from the BEGINNING!

A Long Journey Ahead

This isn’t going to be a 10-minute video, nor will it be a 10-page guide. It’ll probably be a ten-hour video, and a 1000-page guide. You’ll get to figure out how much I hate you based on whether or not I provide you with timestamps or a table of contents.

Understanding the Basics: Modem, Router, Switch, and Wireless Access Point

Before we dive into discussing building a router, I want you to understand the key components of your home network: the modem, router, switch, and wireless access point. These devices work together to connect you to the internet and allow multiple devices to communicate with each other. Most consumer products package the router, switch, and wireless access point all in one, hiding from you what each component is for. You might even have a modem that includes all three, meaning you have one device on your home network! Let’s break down the purpose of each device.

Modem

The modem is your gateway to the Internet, connecting your home to your Internet service provider (ISP).

What a Modem Does:

  • Converts the long-range signal from your ISP (e.g., cable, fiber, DSL) into a short-range signal that your devices can use (e.g., Ethernet, Wi-Fi). Short-range signals are helpful because they are safer and because they can be used with simpler, cheaper electronic components.
  • Reformats the encoded data from the signal to a format understood by your devices.
  • Acts as the bridge between your ISP’s network and your home network.

Types of Modems:

  • Cable Modem: Connects to your ISP via a coaxial cable. It sends (and receives) electromagnetic signals through that cable. (Like the signals used by news-radio stations, but different frequencies, and forced into a cable, rather than broadcast on the air).
  • DSL Modem: Connects via a "twisted pair" cable -- the kind used for landline telephones. Uses electromagnetic signals, like a cable modem, but different frequencies. Be careful of telephone cables: they typically also carry electrical power, to power a landline telephone. The voltage can be high enough to be painful.
  • Fiber Modem: Connects via a fiber-optic cable. More properly called an optical network terminal (ONT). It transmits light (photons). Inside, the cable is transparent, like plastic. Don't point the end of the cable at your eyes: depending on the type of light used, the light can be invisible yet have enough energy to damage your eyes and blind you.
  • Cellular Modem: Connects wirelessly, via the nearest cell tower -- there is no cable going to the ISP. This is the exact same technology used by a cellphone but the ISP provides a box that looks like a DSL modem and that you place, typically, near a window, to optimize reception.
  • Dialup modem: Connects via a "twisted pair" cable. It uses only the tiny range of electromagnetic signals that telephone technology uses to represent sound. Dialup modems often have a tiny speaker, which lets you hear the signal, for troubleshooting. Dialup was the first technology offered by ISPs. It is now extremely rare, because modern technologies are much, much faster.

Important: A modem typically has only one Ethernet port, which is why you need additional devices like routers and switches to connect multiple devices in your home. A modem may have a phone jack to attach a standard telephone.

Router

The router manages traffic between your local network (your home devices) and the internet (outside world).

What a Router Does:

  • Allows you to have more than one device on your network.
  • If you attach your computer to your modem directly, you are simply connecting to the “outside” world’s network. This is referred to as the “WAN” (Wide Area Network): a network that connects multiple LANs over large distances, while a LAN (Local Area Network) is a network confined to a local area, like your home. Connecting directly can work, but you do not have an internal network this way. The computer you attached to your modem is the only computer in your home that can go online with this configuration.
  • Routers create a 2nd internal network for your devices so you can attach more than one thing to the internet (WAN). Wouldn’t it suck if you could only have one wired device attached to your home internet? This is why most people need a router!
  • Routes Traffic: Directs internet traffic from the OUTSIDE (this is called the “WAN”) to the correct device on the INSIDE, your home network (this is called the “LAN”), and vice versa. Now, multiple devices (e.g., computers, phones, smart TVs) can communicate with the internet through your modem, and with each other within your home.
  • Provides NAT (Network Address Translation): Translates your devices’ private IP addresses into a single public IP address provided by your ISP.

Note: The router you get from your ISP or buy from a store, 99% of the time, is a combo device: it includes a router, switch, and wireless access point all in one box. Understanding their roles separately is key when setting up a more advanced system like pfSense.
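To make NAT concrete, here's a short Python sketch using the standard library's `ipaddress` module. The LAN addresses are typical examples, and the WAN address reuses the 64.91.255.98 sample that appears later in this guide:

```python
import ipaddress

# Private (RFC 1918) addresses: what your router assigns to devices on the LAN.
lan_devices = ["192.168.1.10", "192.168.1.11", "10.0.0.5"]

# Public address: what your ISP assigns to the router's WAN side (example value).
wan_address = "64.91.255.98"

for addr in lan_devices:
    # is_private is True for LAN-only ranges like 192.168.0.0/16 and 10.0.0.0/8
    print(addr, "-> private:", ipaddress.ip_address(addr).is_private)

print(wan_address, "-> private:", ipaddress.ip_address(wan_address).is_private)
```

NAT's whole job is rewriting traffic so all of those private addresses appear to the outside world as that single public address.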

Traditional wired router:

Below is a traditional wired router. This combines a router & a switch but has no wireless access point.

Cheap Walmart Wi-Fi router:

This is a TP-Link wireless router: a router, switch, and wireless access point all in one. This is most likely what you have in your closet right now, covered in wires, under the set of workout pants you bought six months ago after your failed New Year’s resolution to go running every morning. It has slow speeds unless you’re 2 feet from it. These often come with SIP ALG (a component that transforms Voice-over-IP packets, which generally isn’t needed today) turned on by default, and will mess with your phone systems endlessly even if you try turning it off. Avoid the Walmart routers.

Switch

A switch expands the number of devices you can connect to your local network using Ethernet cables.

What a Switch Does:

  • Expands Connectivity: If your router only has a few Ethernet ports, a switch allows you to connect more wired devices (e.g., computers, gaming consoles, network-attached storage).
  • Forwards Data: A switch is smarter than a basic splitter. It knows which devices are connected to each port and forwards data to the correct device, improving network efficiency.
  • The type of basic switch I am using for this example is the smaller type below, that has no advanced routing features, settings, or web interface to mess with. It’s just a dumb switch.

Switches come in different sizes, from small 4-port models to large 24-port (or even larger) models used in business environments. The small Netgear switches that cost $15 are more than adequate for most people’s home networks & will not cause random disconnects or issues with our router setup.

Cheap switch

This is a basic Netgear switch that you get for $15. It allows you to connect four devices to your pfSense router. You would attach the LAN port on the pfSense router to a port on this switch (any port is fine) & then connect your wired devices (wireless access point for wifi, computers, etc.) to other ports on the switch. Some points to note:

  • This switch is gigabit - meaning, 1 gbps.

    • 1 gbps = stuck transferring around 100 megabytes per second of real-world performance (aka the speed of ten-year-old hard drives).
    • This means even if you have a fast solid state drive in the server & your personal computer, transfer speed will be around 100-120 megabytes per second.
    • If you have a gigabit internet connection & are downloading a file at 1 gbps, you can also grab a file from your server without slowing your download.
  • This has no Power over Ethernet (PoE)

    • If you want to power wireless access points, office voice over IP (VoIP) phones, or cameras, you have to plug them into something or get a PoE injector later.
    • A Power over Ethernet switch can power devices you plug the ethernet cord into which is very cool for setting up security cameras, because you only have to run 1 wire to each camera.

    These cheapies will usually not have Power over Ethernet to power cameras & wireless access points & office desk phones, nor will they usually support configuring ports for VLANs (we will get into that in the wifi section at the end). This is still a good starter switch, since it is reported to pass VLAN tags: if you bought wifi access points or switches that supported creating isolated networks, this switch would pass those tags along (we’ll get into that at the end of the guide). No need to worry about that right now.

    These cheap switches work great, and also come in 8-port versions for a few bucks more.
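If you want to sanity-check the megabytes-per-second figures quoted above (and in the expensive-switch section that follows), here's a rough Python sketch of the arithmetic. The 7% protocol-overhead figure is my own assumption for Ethernet/IP/TCP framing; real numbers vary with frame size and tuning:

```python
def max_throughput_mb_per_s(link_gbps, overhead=0.07):
    """Rough real-world MB/s for a link speed given in gigabits per second.

    Divides by 8 to convert bits to bytes, then subtracts an assumed
    ~7% for Ethernet/IP/TCP framing overhead.
    """
    bytes_per_second = link_gbps * 1_000_000_000 / 8
    return bytes_per_second * (1 - overhead) / 1_000_000

for speed in (1, 2.5, 10):
    print(f"{speed} Gb/s link: ~{max_throughput_mb_per_s(speed):.0f} MB/s")
```

A 1 Gb/s link comes out to roughly 116 MB/s under these assumptions, which is why transfers stall around the 100-120 MB/s mark no matter how fast your drives are.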

Expensive switch

The Netgear XS724EM switch is an expensive, fancier switch.

  • Speed
    • Supports 2.5 gigabit or 10 gigabit per second Ethernet on its ports.
    • If you have a network interface card (NIC) that supports 2.5 GbE on each end (these are becoming more common), you can get over 270 megabytes per second transfer rates (more than 2x a normal gigabit switch).
    • If you have a network interface card (NIC) that supports 10 GbE on each end (your computer does not have this unless you bought it separately & installed it), you can get over 800 megabytes per second even with a poorly tuned setup. This is likely faster than any of the drives inside your computer unless you bought fancy NVMe drives.
  • Power
    • Can supply power to a bunch of cameras, phones, and wireless access points over Ethernet cables (PoE). This has the advantage that cameras in hard-to-reach spots only need an Ethernet cable and not an additional power cable, since the necessary power is transmitted to the camera over the Ethernet cable.
  • Ports
    • Has 24 ports instead of 5, can connect a lot more stuff.
    • Compatibility of the ports: does 10 GbE over standard Ethernet plugs/jacks. If you wire your house with good Cat6a and put good 10 GbE network interface cards in the machines you’re working with, you can get 800 megabyte per second networking all around your house without digiorno connectors.
  • Virtual LAN support
    • Allows you to create separate networks on the same switch.
    • Can keep your untrusted internet of things (IoT) devices like cheap light bulbs & cameras & thermostats on isolated separate networks from your trusted devices.

The VLAN support is a big one. Later on, when we connect advanced wifi access points, we are going to do far more than just make a “guest network”; we are going to make a network that your IoT devices (bulbs, thermostats, cameras, etc.) can connect to, isolated in a way where the computer running your security cameras & home automation can reach them, but they can’t connect to anything else. This isn’t necessary, though, and a bunch of you will probably skip the VLAN part at the end, since that gets a little too complicated for a home setup.

This is an expensive switch. There are many in between, but I thought it’d be useful to show an example of the cheap side & the expensive side, so you can see what is available & what you can get for the money. If you are OK with gigabit Ethernet you can easily get by with way cheaper; right now you can either buy gigabit switches cheaply, or 2.5 GbE & 10 GbE switches at crazy high prices. There isn’t much in between.

Wireless Access Point (WAP)

A wireless access point (WAP) provides Wi-Fi access to your network, allowing devices like phones, tablets, and laptops to connect wirelessly. You could add a wireless access point like the ones below, to the old blue Linksys router above, to turn it into a “wireless router”.

What a Wireless Access Point Does:

  • Provides Wireless Connectivity: The WAP connects to your router (or switch) and broadcasts a Wi-Fi signal, letting wireless devices connect to your network.
  • Doesn’t Route Traffic: It’s important to note that a WAP doesn’t perform the same function as a router. It simply extends your network by adding wireless connectivity.

These are mesh network access points. They allow you to connect each to your switch and place them in separate areas of your home to make sure you have great connectivity everywhere.

The way these work: you place the access points in different parts of your house, with an ethernet wire going to each one. The access points intelligently work together to figure out which one your laptop/phone should be connected to, based on which provides the strongest signal where you are right now. Place one on the side of your house, one in the basement, one on each side of each floor in your home, wire them all to your switch, & you’ll get amazing wifi connectivity from anywhere. Good wireless access points will switch over so seamlessly that an in-progress file transfer does not stop or fail.

These setups are more expensive since proper mesh equipment that works right costs more & you are buying multiple access points.

This is an ancient wired router with no wifi.

This is a cheap-ass wireless access point. I don’t recommend any of these, especially when something like a TP-Link EAP6120 is about $50 used & offers much better seamless roaming if you want to add access points later, VLAN functionality, etc. I know it’s tempting to buy the lame ones because they are in stock at Best Buy & Walmart for instant gratification, but you’ll regret it later.

This is an ancient wireless router that is a legend. The unbreakable, unbeatable, Linksys WRT54G. It is a router, a switch, and a wireless access point all in one.

Internet Protocol addresses

You have an address on the front of your building. You have a phone number - this is how people find you. On the internet, that role is played by your IP address: your router gets an IP address from your internet service provider. It usually looks like 64.91.255.98 or 8.8.8.8 - you may have seen addresses like these before.

Most of you with a home internet connection have something called a Dynamic IP. This means that your IP can change.

Your IP address may change for a number of different reasons:

  • When you unplug your modem [for a long period of time].
  • When you plug your modem into a new router.
  • Every day, just for the hell of it!

This can make things more difficult than when you have a static IP - static IPs do not change. You get an internet protocol address, and that’s what you’re stuck with, for better or for worse.

For home users, most people don’t need a static IP. Static IPs are for when I want something to “stay put”. I want my phone number to stay put so people know where to find me. I want my home address to stay put so the mailman knows where to find me (and so I know where to go home!) and, in this case, I want my IP to stay put so I can always find my home server, no matter where I am in the world.

If you are reading this - you likely have a dynamic IP provided by your home internet service provider. We will have a workaround for this that allows you to be able to find your server at the same place every time you go to use it no matter how often its IP changes.
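That workaround is usually called dynamic DNS: a small script on your network periodically checks your current public IP and, whenever it changes, updates a hostname to point at the new address. Here's a minimal Python sketch of just the decision logic; `update_dns_record` is a hypothetical stand-in for whatever update API your dynamic-DNS provider actually offers, and the addresses are made-up examples:

```python
def update_dns_record(hostname, new_ip):
    # Hypothetical placeholder: a real script would call your
    # dynamic-DNS provider's update API here.
    print(f"pointing {hostname} at {new_ip}")

def check_and_update(hostname, current_public_ip, last_known_ip):
    """Update the DNS record only when the public IP has changed.

    Returns the IP to remember for the next check.
    """
    if current_public_ip != last_known_ip:
        update_dns_record(hostname, current_public_ip)
    return current_public_ip

# Simulated runs: no change the first time, then the ISP hands out a new IP.
last = check_and_update("home.example.com", "64.91.255.98", "64.91.255.98")
last = check_and_update("home.example.com", "72.14.0.1", last)
```

Run something like this on a schedule (say, every few minutes), and home.example.com keeps finding your server no matter how often the ISP shuffles your address.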

How These Devices Will Work Together in Your Setup

For this setup, you’ll use a dedicated pfSense router instead of the combo device provided by your ISP. Here’s how the connections work:

1. Modem to Router

  • The modem takes the signal from your ISP and passes it to your pfSense router via an Ethernet cable.
  • The modem will be connected to the WAN (Wide Area Network) port on the router.

2. Router to Switch

  • Your pfSense router manages traffic between your devices and the internet.
  • Since the Intel NUC running pfSense has only two Ethernet ports, you’ll connect the second port (the LAN (Local Area Network) port) to a switch to connect multiple devices.

3. Switch to Devices

  • The switch is connected to the LAN port of your pfSense router.
  • Any wired devices (like computers, gaming consoles, or network storage) can be connected to the switch using Ethernet cables.
  • This allows multiple devices to communicate with each other and access the internet through the pfSense router.

4. Adding Wireless Access

This will allow your phones, laptops, and other wireless devices to connect to the network without wires.

  • If you only plan to have wireless devices on your network, you can attach your wireless access point directly to the LAN port on your pfSense router.
  • If you wish to have a combination of wired & wireless devices on your network, you would attach a wired switch to the LAN port on your pfSense router, and then plug the Wi-Fi access point into a port on your switch.
  • If you have no plans to have wireless devices on your network, you do not need a wireless access point.

A Common Home Network Setup vs. Your New Setup

Common Setup (with ISP Combo Device):

  • Modem → ISP-provided combo device (modem + router + switch + WAP)
  • All devices (wired and wireless) connect to the combo device.

Your New Setup (with pfSense):

  • Modem → pfSense Router (dedicated firewall/router)
  • pfSense Router → Switch (for wired devices)
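Put side by side, the two layouts look roughly like this (a simplified sketch, not an exact wiring diagram):

```text
Common setup:
  Internet ── ISP combo box (modem + router + switch + WAP) ── all devices

New setup:
  Internet ── Modem ──[WAN port]── pfSense router ──[LAN port]── Switch
                                       Switch ──┬── wired devices (PCs, server, consoles)
                                                └── wireless access point ── phones, laptops
```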

This new setup gives you better control over your network, improved security, and the ability to block ads with pfSense and tools like pfBlockerNG. By understanding what each component does & what it’s for, you’ll be better equipped to set up and manage your new pfSense-based network!

Why Build Your Own Router?

Regular Security Updates & OpenVPN

Let’s start at the very beginning with OpenVPN. We are not opening ports to the internet for ANYTHING, except for receiving self-hosted mail. We’re running a bunch of different open source services that less than 0.1% of the population (if I’m being generous) actually use. I LIKE Immich, Home Assistant, Syncthing, FreePBX, OnlyOffice, Nextcloud, Mailcow, Frigate. But I don’t want them just open to the internet.

They’re nice software, but they’re used by 0.0001% of the population. Further, even if they WERE secure, by opening ports to the internet, I am letting every Tom, Dick & Harry who wants to peek in see what I am running on my IP address.

OpenVPN is used by companies in the S&P 500, banks, and governments; it’s everywhere! The beauty of OpenVPN is that if there’s ever a security breach, it’s going to get found and fixed because there are tens of millions of eyes on it at any given moment. There is too much investment in OpenVPN for it to wither on the vine and become fundamentally insecure. OpenVPN is as secure as it gets, and while it’s not perfect, we are massively reducing our RISK of being hacked & exploited by utilizing OpenVPN to get into our home network vs. opening ports willy-nilly to 10 different pieces of software.

I don’t want people to be able to see that these services are all running on my server. Opening a port for each service means there are four, six, eight, or 15 different points of failure. I’d rather have one point of failure that’s managed properly. And that’s what a VPN is for: a way to create a secure, encrypted tunnel between your phone and your server.

Why can’t I buy a $30 router at Walmart?

Short lifespan for firmware updates

Consumer routers you find in stores may offer features like OpenVPN, but the problem is that many stop receiving updates shortly after you buy them.

Buggy

Many of the lower-end store routers are buggy and can cause problems with what I am showing you how to set up. Certain TP-Link routers have randomly messed with SIP traffic in the middle of a call, and the routers that Spectrum and Verizon provide have SIP ALG turned on by default, which will mess with our phone system. They don’t let you turn it off in the configuration settings either!

Back to my point; using a router where you are at the mercy of the manufacturer to provide you with updated firmware leaves you vulnerable to security risks as new exploits are discovered. For example, three years down the line, there might be a very important update for OpenVPN, but your router’s manufacturer might have stopped supporting your model after just six months. Now you’re screwed.

Increased likelihood of getting hacked over time

You’re making it harder for yourself by using a router that will become vulnerable to exploits in OpenVPN. OpenVPN is exceptional software: these holes get plugged, and they get plugged fast.

…if the manufacturer actually updates the firmware. They often don’t. Think about it:

  1. You already paid for the router.
  2. Providing you with updated firmware costs them money & time.
  3. But they already have your money.
  4. So they don't care.

You might think I’m being bombastic; what’s so bad about using an older version of OpenVPN?

OpenVPN exploits:

A CVE (Common Vulnerabilities and Exposures entry) is a publicly catalogued security flaw - aka, a way to hack into something. Below is a small sample of the OpenVPN CVEs that have occurred over the years. Finding CVEs isn’t a bad thing; every piece of software ever created is going to have security vulnerabilities. It is only bad if you are running hardware that you cannot update once a fix has been released.

1. CVE-2024-27459, CVE-2024-24974, CVE-2024-27903, CVE-2024-1305

  • Discovered: March 2024
  • Description: Multiple vulnerabilities were found, mainly affecting OpenVPN’s client-side on Windows, Android, iOS, macOS, and BSD. These included stack overflow, unauthorized access, & plugin flaws leading to potential remote code execution (RCE) and local privilege escalation (LPE). Users were advised to update to OpenVPN versions 2.6.10 or 2.5.10 to mitigate the risks. You can only update OpenVPN versions if your router lets you.

Terminology note: “client-side” means the part of the software that runs on your device (like a computer or smartphone), as opposed to “server-side”, which would be the part running on a remote server (Apple/Google’s server).

“Remote Code Execution (RCE)” is a vulnerability that lets a hacker run code they want to run on your device. “Local Privilege Escalation (LPE)” means a vulnerability that lets a hacker get higher permissions (i.e. becoming an admin rather than being a regular user) allowing them to do things they shouldn’t or gain full control over your system.

2. Code Signing Key Intrusion (OpenVPN 2.5.8)

  • Discovered: December 2022
  • Description: An intrusion was detected involving OpenVPN version 2.5.8. There’s no evidence suggesting the key was misused & OpenVPN proactively re-released the software signed with a new key for security. This is why updates matter.
  • Sources: OpenVPN Security Advisory

3. CVE-2022-0547

  • Discovered: February 2022
  • Description: An authentication bypass in external authentication plug-ins, triggered when more than one of them makes use of deferred authentication replies, which allows an external user to be granted access with only partially correct credentials. aka, I can have a sawed-off copy of your house key & still get in.
  • Sources: OpenVPN Community

4. CVE-2020-15077, CVE-2020-36382

  • Discovered: 2020
  • Description: These vulnerabilities affected OpenVPN Access Server, with risks of information leakage and potential denial-of-service (DoS). Patches were released quickly to address these security issues, which only helps if you have a router that lets you keep updating it after the manufacturer has given you the middle finger & told you to buy a new one.
  • Sources: OpenVPN Security Advisory

5. CVE-2018-9334

  • Discovered: 2018
  • Description: A denial-of-service vulnerability in OpenVPN’s handling of authentication processes, which potentially allowed attackers to disrupt services, was patched.
  • Sources: OpenVPN CVE List

6. CVE-2017-7521

  • Discovered: 2017
  • Description: A memory exhaustion flaw was found where an attacker could exploit OpenVPN’s message handling to cause service disruption.
  • Sources: OpenVPN CVE List

Guaranteed long-term compatibility & updates

Even a cheap 10-year-old desktop PC can be a good router for the next ten years, as long as it has a good network interface card. If it runs out of RAM or new network technologies come out, you won’t throw it away; you’ll buy a new network card for $40 or more RAM at a yard sale. Ten years from now, going from 2 GB of RAM to 8 will probably cost less than $10.

Using a standard x86 PC as a router, with known good Network Interface Cards, means you are less likely to encounter compatibility or longevity issues when using any of these open source router systems. It gives you more control, and if you’re reading this, you probably have an old desktop PC in the garage or closet you’re not using anyway. Get it two good network interface cards and get it back in commission!

What about OpenWRT?[edit | edit source]

There are open source packages like OpenWRT doing the lord’s work to keep these routers going. This is a good project, run by good people. I do not want to denigrate them in any way; what I am about to say is in no way their fault. They do their best to keep routers running with their firmware for as long as possible, but eventually, it becomes too difficult or untenable to provide updates for older chipsets & hardware, and they fall off the list. Those old routers will only work with older versions of OpenWRT (especially for those 4/32 devices).

But it’s a lot of work to support hundreds of different makes & models, each using its own specific hardware. When we build a router using a standard computer, we can install router software like pfSense or OPNsense. These open source projects do not have to support a gazillion different hardware configurations: they support x86, and if you have x86 (most normal desktop computers are x86), you’re good. That makes the software easier to maintain at scale & to provide regular updates for. The likelihood of your hardware not being supported by an open source router distribution, when it is a desktop PC with a good network card, shrinks to near zero.

By building your own router using pfSense, an open-source firewall, and cheap, dedicated hardware, you guarantee long-term support and control over your setup. With pfSense, you can get regular updates, customize your network settings, and even block ads across all devices using pfBlockerNG.

Building Our Own Router[edit | edit source]

Let’s dive into the first step: setting up pfSense on an Intel NUC (a small form factor barebones PC; NUC stands for Next Unit of Computing) to serve as your router. We’ll be setting this up with OpenVPN, which is very important for connecting securely to your home network.

As for the hardware, I’m using an Intel NUC because it’s compact, reliable, and it has two Ethernet ports, which are necessary for setting up a router. One port is used for your WAN (internet), and the other for your LAN (internal network). For a pfSense router, we must choose a machine with TWO ethernet ports, not one!

Why pfSense?[edit | edit source]

I chose pfSense ten years ago because:

  1. It’s open-source.
  2. It’s fast.
  3. It gets regular updates for security issues.
  4. The parent company has paid corporate & business clients relying on their software, which is based on an open source core. Their work making certain network cards behave well with FreeBSD gets contributed upstream, so the free versions benefit too.
  5. This means that I, as a scrub who didn’t pay for it, get something very similar to what corporate clients paying $10,000 or more are getting.
  6. If I mess something up with my very unusual custom setup, I can pay the developers of the software to fix it for me. This level of support is not common in many open source projects. If I want to cry uncle & pay them an annual fee, they will respond to my questions & provide me with REAL answers rather than tell me to go “rtfm”.
  7. It comes with features like pfBlockerNG to block ads, scams, and malware at IP & DNS level with regular updates.

I use pfSense now because:

  1. I’m used to it.
  2. The idea of redoing my complicated setup from scratch gives me hives.
  3. See #2, in regard to becoming acquainted with the unique quirks of other open source software.

I had very good reasons for choosing pfSense ten years ago – and I have good reasons to use it today. That doesn’t mean it’s the best. Feel free to use whatever you want to use. For the purposes of this guide, I will be using pfSense.

There’s a bit of a debate between pfSense and OPNsense. TL;DR, the developers of pfSense are not the nicest people sometimes. If this bothers you, consider checking out OPNsense. Since I’ve been using pfSense for a decade, I’ve built much of my infrastructure around it. I am well aware of its quirks and don’t feel like setting up my network from scratch, so I am using pfSense for this tutorial. Regardless of the developers, you are infinitely better off using pfSense on your own hardware than standard routers.

Choosing the Right Hardware[edit | edit source]

Why an Intel NUC?[edit | edit source]

When searching for hardware to build a pfSense router, you’ll often come across a variety of mini PCs on platforms like Amazon. However, there are several issues with these options:

  1. Inconsistent Quality: You’ll find reputable brands like Mikrotik listed alongside unknown generic random stuff. I trust Mikrotik - I don’t trust random junk. Amazon allows random junk from unverified, untrusted vendors to show up routinely at the top of the search results.
  2. Unreliable Reviews: Amazon’s review system has known issues.
  3. Safety Concerns: Amazon has a history of selling mislabeled or dangerous products, including:

…and the list goes on. This guide is going to be 600+ pages when done; do you want to do all of this work only to have the primary component be a piece of junk from a website that sells cat guillotines? No.

The Better Alternative: Repurpose an Old Desktop PC[edit | edit source]

Instead of risking your project with unknown mini PCs, consider using an old desktop computer:

  1. Reliability: A 10-12 year old desktop is likely more reliable than no-name mini PCs.
  2. Choice of Network Card: Desktop PCs offer PCI Express slots for additional network cards, so YOU can choose the network interface card for your setup. You often do not know what chipsets are used in the no-name-mini-PCs. pfSense & other FreeBSD-based routers are sensitive to poor-quality chipsets.
  3. Cost-Effective: You can re-purpose an old desktop you already have & save money on purchasing new hardware.

Choosing the Right Network Interface Cards (NICs)[edit | edit source]

To transform your old desktop into a capable router:

  1. Add Quality NICs: Install high-quality network cards, preferably Intel-based.
  2. pfSense Compatibility: Check the pfSense forums for compatible chipsets and cards.
  3. Examples of Good NICs:
    • Intel X540.
    • Intel i350.
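If you’re wondering what chipset is already in a machine, you can check from any GNU/Linux live USB before committing to pfSense. A sketch; lspci ships in the pciutils package, and the exact output format varies by distribution:

```shell
# List every network controller with its PCI [vendor:device] ID.
# Vendor 8086 is Intel; 10ec is Realtek (the one to avoid).
lspci -nn | grep -Ei 'ethernet|network' \
  || echo "no PCI NICs found (or lspci/pciutils not installed)"
```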

Caution When Purchasing NICs[edit | edit source]

  1. Avoid Realtek at all costs: Read the pfSense and FreeBSD forums to learn about the issues from people who use Realtek network interface cards. Sometimes you’ll get something working, but often you will get headaches & nightmares that are not worth the cost savings. Realtek cards have known performance & compatibility problems under FreeBSD; Intel network interface cards are preferable for reliability & better support in open-source projects like pfSense.

Note of Appreciation: pfSense developers have created drivers for network interface chipsets like the Intel i225 (citation 1, citation 2) that didn’t exist before. Intel network interface cards are known to have better performance & reliability on FreeBSD systems than Realtek chipsets. The ecosystem of open source firewalls is invested in providing support for these chipsets, providing solutions when the manufacturer doesn’t.

This is an excellent argument in favor of paying money for open source software. The igc driver for the i225 Intel network chip was made available to everyone! Commercial users, non-paying users of pfSense, and other FreeBSD based routers/firewalls all benefit from people paying for open source software. Top notch programmers wrote these drivers because they were able to pay their rent & bills doing so.

When you pay for open source software, you are sending a message that it makes sense for top notch programmers to spend money developing open source code that doesn’t abuse you rather than going to work for facebook.

  2. Buy from Reputable Vendors: Avoid counterfeit products by purchasing from trusted sellers. There are many counterfeit cards out there.
    • Vendors often don’t know the difference: Many vendors selling knockoff cards do not even know they are doing it. Wholesale liquidators operate on low profit margins while selling a wide variety of equipment, and they lack the time & expertise to vet all of what they sell. As a result, many vendors sell counterfeit & fake Intel network cards.
  • Recommended: The Art of Server on eBay (link)
  • Example product: Intel X540 (link)

Verify Compatibility: Make sure the card fits your PC’s available slots.

  • Be wary of non-standard form factors or connectors.

HINT: Buying server-branded cards from reputable server re-sellers is a good way to avoid fakes.

Don’t buy Digiorno[edit | edit source]

Buying used network cards, and used hardware, is ok. Actually, it’s encouraged; it’s a great way to buy better hardware than you’d otherwise be able to afford, and it avoids senseless waste. However, be careful to not buy Digiorno. There are amazing deals to be found in the used server world, but it is also a jungle ready to eat you alive if you’re naive enough to believe those crazy folks have any respect for the civilized world of standardized connectors.

Good vendors will be able to tell you the difference between normal hardware and Digiorno. If they do not know the difference, YOU DO NOT WANT TO BUY FROM THEM!

Building a DIY pfSense router with an old desktop PC and quality Intel NICs is likely to provide a more reliable and expandable solution than generic mini PCs. With a random mini PC, if you get a bad network interface card, you’re out of luck. With your old desktop PC, you can choose the network interface card. Want 2.5GbE? Get another card. Want 10 Gbps? Get another card. Want fiber? Get another card. Have a card with the wrong chipset? Swap in another card.

We are going down a 10+ hour rabbit hole of hell setting up all sorts of confusing, crazy GNU/Linux software. Even a 1% increase in the likelihood of this being more difficult as a result of random garbage Amazon hardware isn’t worth it to me for $100-$200 in savings.

I chose an Intel NUC because it has two quality NICs, and I was able to find one affordably. You do not have to buy the computer I bought to use as a router: this is your journey!

Note: There is no one “right” way to do this. As long as you use a stable, quality computer with GOOD network interface cards that the pfSense & FreeBSD community approve of, you are set!

Step 1: Downloading pfSense and Preparing a Bootable USB Drive[edit | edit source]

1.1 Download pfSense[edit | edit source]

pfSense’s website has unfortunately become cancer in recent years. While I am all for paying for software, the concept of having to add to cart, checkout, and insert billing information to download a free image… no. Avoid using this version of the website. Instead, go here. Feel free to buy it and pay for their support, but don’t jump through stupid hoops.

  1. Open your web browser and visit the pfSense mirror site.
  2. Choose the correct architecture for your system (usually amd64 for most modern computers, including Intel NUCs). If you don’t know what the difference is between these, pick amd64.
  3. Select the USB installer image (.img.gz) from the available options.

1.2 Unzip the Downloaded pfSense File[edit | edit source]

  1. After the download completes, you’ll need to uncompress (unzip) the file.
  2. The file typically ends with .gz. Use the right tool for your operating system:
  • Linux or macOS: Open a terminal and run the following command:

    gzip -d pfSense-CE-memstick-*.img.gz
  • Windows: Use a tool like 7-Zip. Right-click the file, choose “Extract Here,” and let the tool unzip it.
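Optional but worthwhile: before unzipping or writing anything, verify the download isn’t corrupted. pfSense publishes SHA-256 checksums for its images; a sketch, assuming the wildcard matches the file you actually downloaded:

```shell
# Run this in the folder where the image was saved; compare the
# printed hash against the checksum published next to the download.
sha256sum pfSense-CE-memstick-*.img.gz \
  || echo "image not found in this folder"
```

If the hashes don’t match, download the image again before going any further.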

1.3 Create a Bootable USB Drive with the pfSense Image[edit | edit source]

Warning: This process will erase everything on the USB drive.

  1. Insert a USB flash drive (at least 4GB in size) into your computer.
  2. Use one of the following methods to write the pfSense image to the USB drive:

Windows:[edit | edit source]
  1. Download and install Rufus.
  2. Open Rufus and select your USB drive.
  3. Click the “SELECT” button and choose the unzipped .img file you downloaded.
  4. Click “Start” and let Rufus create the bootable USB.

GNU/Linux or macOS:[edit | edit source]
  1. Open the terminal and type one of the following commands depending on the system used:

    sudo fdisk -l # GNU/Linux
    diskutil list # macOS
  2. Make note of drives in the system. Do not erase these.

  3. Plug in the flash drive.

  4. Open the terminal and type one of the following commands again:

    sudo fdisk -l # GNU/Linux
    diskutil list # macOS
  5. Make note of the drive that was not present before. Write it down.

  6. Double-check size/brand/model to make sure this new device is the device you plugged in.

  7. Now, unplug the drive you just plugged in.

  8. Run:

    sudo fdisk -l # GNU/Linux
    diskutil list # macOS
  9. Does the drive you wrote down in step 5 still appear? If so, you made a mistake, and you’re on your way to deleting all of your data. Don’t do that. Do not pass go, do not collect $200 – back to the beginning. If not, you can now plug your drive back in.

  10. Run:

    sudo fdisk -l # GNU/Linux
    diskutil list # macOS
  11. If the drive that did not appear last time appears this time, and it is the same device as in step 5, you are likely on your way to not erasing your entire system. Good job; that makes you less of an idiot than me. A low bar, but it’s something.

  12. Run the following, replacing /dev/sdX with your drive, and replace the pfSense img file with the filename of your image file:

    sudo dd if=pfSense-CE-memstick-serial-*.img of=/dev/sdX bs=1M status=progress

Your bootable USB drive with pfSense is now ready for use! If you managed to erase your entire computer by writing pfSense’s image to your operating system drive EVEN AFTER all of this, congratulations, you’re almost as stupid as me.
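A gentler alternative to the fdisk ritual above, for GNU/Linux users: lsblk prints a compact table of disks, which makes the before/after comparison much easier to eyeball. A sketch:

```shell
# Print one row per disk (no partitions): run before plugging in the
# USB stick, then again after; the new row is your flash drive.
lsblk -d -o NAME,SIZE,MODEL 2>/dev/null \
  || echo "lsblk not available (it is GNU/Linux only)"
```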

Step 2: Disable Secure Boot and Install pfSense on the Intel NUC[edit | edit source]

Before you can install pfSense, you’ll need to disable Secure Boot if you are using a modern computer. Many modern computers, especially those pre-installed with Windows 10 or 11, come with Secure Boot enabled, which prevents you from booting an operating system that isn’t signed by Microsoft. Since pfSense is open-source and unsigned, we need to disable Secure Boot to start our installation.

1. Disabling Secure Boot in BIOS[edit | edit source]

  1. Insert the USB Drive
    • Plug in the USB drive containing the pfSense installation image into one of the USB ports on your Intel NUC.
    • Make sure this is done before you power on the device.
  2. Enter the BIOS
    • Power on the Intel NUC and immediately start pressing F2 (or the designated key for your system) to access the BIOS settings.
    • Keep pressing this key until you enter the BIOS. On some systems, the BIOS key may be different (e.g., Delete or Esc), but F2 is common for most systems.
  3. Disable Secure Boot
    • Inside the BIOS, navigate to the Boot section.
    • Locate Secure Boot and toggle it to Off. Depending on your BIOS, Secure Boot may be located under the Security or Boot sections.
    • Once Secure Boot is disabled, you’re ready to install pfSense.
  4. Set Boot Priority
    • In the BIOS, go to Boot Priority settings.
    • Set your USB drive as the first boot device. This will allow the system to automatically boot from the USB drive containing the pfSense installer.
    • Alternatively, you can press F12 (or the appropriate key) during boot to manually enter the boot menu & select the USB drive each time.
  5. Save and Exit BIOS
    • Press F10 to save your changes and exit the BIOS, or whatever key does it on your machine.
    • The system will now reboot, and if the USB drive is set as the first boot option, it should boot directly from the USB flash drive and load the pfSense installer.

Step 3: Installing pfSense on the Intel NUC[edit | edit source]

Boot from the USB Flash Drive[edit | edit source]

1.1 Power on the Intel NUC[edit | edit source]

  • Make sure the USB drive containing the pfSense installer is still plugged into the Intel NUC.
  • Power on the NUC and press F10 (or the relevant boot menu key) to select the USB drive as the boot device.

1.2 Select the USB Drive in Boot Menu[edit | edit source]

  • In the boot menu, you’ll see a list of available boot devices. Select the USB flash drive that contains the pfSense installer.
  • Press Enter to boot from the USB drive.

Begin the pfSense Installation[edit | edit source]

2.1 pfSense Installer Menu[edit | edit source]

  • After a few moments, the pfSense installer menu will appear.
  • Use the arrow keys on your keyboard to select Install and press Enter to begin the installation.

2.2 Choose Installation Method[edit | edit source]

  • The installer will guide you through the process. When prompted to choose an install method, select Auto (ZFS) for the file system.
  • ZFS is a great file system that offers data integrity, snapshots, and other advanced features. You probably won’t use most of them, but it’s still an excellent choice.

Select the Correct Installation Drive[edit | edit source]

Raidz1 is a good option in that it allows one of the drives in your machine to die, and the router to keep going. This requires you have not one, but two drives inside your router machine. This is not a bad idea. You should be making a backup file of your router anyway so that you can restore regardless of what happens to any and all of the hardware on this one: but, this will allow the router to keep working even if a single drive dies. I am using stripe, no redundancy, which is the option you will be picking if you have only one drive in the router.

3.1 Select Internal SSD or Hard Drive[edit | edit source]

  • The next step is to select the disk where pfSense will be installed. This is a very important step, so pay close attention.
  • You will see a list of drives. The USB drive will usually appear as a small capacity device (e.g., 4GB or 8GB).
  • Choose the larger drive that represents your Intel NUC’s internal SSD or hard drive (e.g., 256GB, 512GB).
  • Important: “generic-mass-storage-class” is usually your external USB flash drive. If you’re using a PC with an internal drive, there’s a 99% chance that “generic-mass-storage-class” is NOT what you want to select unless you’re intentionally installing to a USB mass storage device (which is not recommended for a permanent installation).
  • In my case, the Micron SSD was my internal SSD. Your drive name may be different, but look for a larger capacity drive that matches what you know is inside your NUC or PC.
  • Use the arrow keys to highlight the correct drive, press space to select the drive, then press Enter to confirm your selection.

3.2 Confirm Erase and Installation[edit | edit source]

  • Once the correct internal drive is selected, the installer will ask if you want to erase the drive and proceed with the installation.
  • This will erase all data on the selected drive. Make sure you’ve backed up any important data before proceeding.
  • Confirm by selecting Yes. The installer will now copy files and set up pfSense on the internal drive. This may take a few minutes.

Complete the Installation and Reboot[edit | edit source]

4.1 Remove the USB Flash Drive[edit | edit source]

  • After the installation is complete, you’ll be prompted to reboot the system.
  • Before rebooting, remove the USB flash drive from the Intel NUC. This makes sure it boots from the newly installed pfSense system on your internal drive.

4.2 Reboot and Load pfSense[edit | edit source]

  • After removing the USB drive, press Enter to reboot the system.
  • The Intel NUC will now boot into pfSense from the internal drive, and you’ll be greeted with the pfSense console screen.

Now that pfSense is installed, you’re ready to proceed with the initial configuration. This includes setting up your WAN (external network) and LAN (internal network) interfaces to make the NUC function as your network router.

Step 4: First-Time Configuration of pfSense[edit | edit source]

Now that you have pfSense installed on your device, it’s time to set it up and configure the basic settings. This step will cover configuring the WAN (internet) and LAN (local network) interfaces, setting IP addresses, and making sure everything is ready for further setup.

1. Connecting and Booting Up pfSense[edit | edit source]

1.1 Connect Your Devices:[edit | edit source]

  • Plug your cable modem into one of the Ethernet ports on your pfSense device.
  • Plug your desktop computer (the one you’re using to set everything up) into the other Ethernet port.
  • At this point, you don’t need more than these two connections.

1.2 Power On and Watch the Boot Process:[edit | edit source]

  • Turn on your pfSense device.
  • You’ll see a lot of text scrolling on the screen as the system boots up. Don’t worry if it seems overwhelming—this is normal.
  • Pay close attention to the information displayed, especially towards the end of the boot process. Look for any text related to an IP address or interface name, like what is pictured below:

NOTE: Interface names can be ascertained by looking at what is going on as the machine boots. This is helpful for later! Refer to images below.

2. Initial Configuration Steps[edit | edit source]

2.1: VLAN Setup Prompt[edit | edit source]

  • One of the first prompts you’ll see is: “Should VLANs be set up now?”
  • What is a VLAN? VLAN stands for Virtual Local Area Network. It’s a way to create separate networks within your network. For example, if you have a switch with 52 ports and want to have five different networks all connected to your router with just one cable, you’d use VLANs. However, this is way too advanced for what we’re doing here.
  • You may see a bunch of random text appear before you have a chance to respond. Don’t worry, you haven’t missed your opportunity to input. You can still type ‘n’ and hit enter when you’re ready.
  • This is just normal open-source nerd UI/UX that is not designed for normal people. You will see a lot of this. That is why we’re here!
  • For now, press ‘n’ to skip VLAN setup. We’re setting up just one local network, so VLANs aren’t necessary at this stage. You may set them up later, in the wifi section, to create segmented wifi networks for trusted & untrusted devices & to limit their access, but that can wait.

2.2: WAN and LAN Interface Assignment[edit | edit source]

  • Next, pfSense will show you which interfaces are available on your device. This is where you assign the Ethernet ports for WAN (internet) and LAN (internal network).
  • Pay close attention to the bottom third of the screen. You’ll see information about which interface (e.g., em0 or igb0) has received an IP address. The interface that received an IP address is most likely your WAN interface. In my case, em0 is the interface attached to Spectrum cable internet; makes sense that it’s sad…
  • Your desktop PC is not going to “provide” an IP address to the router; it is going to try to retrieve an IP address from the router. This is how we determine that the interface that has received an IP address is the WAN interface connected to our modem.
  • The names of these interfaces may vary depending on your hardware and pfSense version. Don’t worry if they don’t match exactly what you see in this guide.

When prompted:

  1. Enter WAN Interface Name:
    • Input the name of the interface that received an IP address (e.g., em0).
  2. Enter LAN Interface Name:
    • Input the name of the other interface (e.g., igb0).

Confirm the interface assignments when prompted. This tells pfSense which port to use for WAN (internet) and which for LAN (local network).

NOTE: This is the IP address that you would be accessing the pfSense web interface on. This is also your “gateway” address, i.e., what your computer connects to in order to get an IP address, and before it connects to any IP outside of this subnet (subnet = other devices on your LAN, e.g., cellphone, TV, file server, etc.).

3. Configuring LAN IP Address[edit | edit source]

3.1: Default LAN IP[edit | edit source]

After assigning interfaces, pfSense will show you the default LAN IP address, usually 192.168.1.1.

This is the IP address of your router (pfSense) within your local network.

Any device that connects to the router will be assigned an IP address in the 192.168.1.x range by default. For instance, your PC may grab an IP of 192.168.1.46, 192.168.1.16, etc., if set to connect automatically via DHCP (Dynamic Host Configuration Protocol).

DHCP means that when you connect to a router, it hands an IP address/DNS server/etc. to your device by default, “Plug N Play” style. This is the default configuration of most devices you will ever connect to the internet unless you went out of your way to re-configure them. This includes your computer, cellphone, game console, IoT devices, security cameras, etc. They’re all connecting via DHCP.
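You can watch DHCP’s handiwork from your desktop’s terminal. A GNU/Linux sketch using the iproute2 tools (macOS and Windows users would use ifconfig or ipconfig instead):

```shell
# What address did DHCP hand this machine, and through which gateway?
ip -4 addr show        # the "inet" line on each interface is your address
ip route show default  # "default via x.x.x.x" is your router's LAN IP
```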

3.2: Changing the LAN IP (Optional)[edit | edit source]

Requirements:

You don’t need to change this unless you have a specific reason to do so, such as conflicts with other networks you’re using. I have chosen to change it, and will be working with the following configuration throughout this guide. You do not have to follow what I am doing, but if you want to be able to copy & paste addresses along with me, feel free to do it this way; it won’t hurt.

  1. Set Interface IP address
    • The number for the LAN interface was 2 in my case
  2. Configure the new LAN IPv4 address via DHCP
    • Choose n
    • This isn’t asking whether to run a DHCP server so that clients who connect can get an IP address. It is asking whether this interface should get a dynamic IP itself, meaning the router/gateway would have a different IP each time we connect to it. There is no need for this.
  3. Enter the new LAN IPv4 address
    • 192.168.5.1 is my LAN IPv4 address that I will choose for my router.
    • This is where your pfSense router will be accessible via web browser. This will be your gateway address, and this will be your DNS server.
  4. Enter LAN IPv4 subnet bit count
    • 24 is the subnet bit count
    • (This is shorthand for a subnet mask of 255.255.255.0).
  5. IPv4 upstream gateway address
    • Press enter for none.
  6. Configure IPv6 address for LAN interface via DHCP6
    • Press y; we’re not using IPv6 in this guide anyway.
    • You can hit n and specify an address manually instead, but since I will not be using IPv6, there is no need to pick an address I’d have to remember for something I will never use.
    • You’re welcome to set up an IPv6 home network if you want; I am not covering that here.

3.3: DHCP Setup[edit | edit source]

  1. DHCP (Dynamic Host Configuration Protocol) automatically assigns IP addresses to devices on your network. This makes it easier to connect new devices without manually configuring IP settings on each one. This is what allows clients to be able to get an IP address automatically as soon as they connect via Wi-Fi or with an ethernet cord into your switch. You want this so that by default people can go online without having to specify their IP manually.
  2. When asked if you want to configure DHCP, choose Yes.
  3. Set the DHCP range. This is the range of IP addresses that will be assigned to devices on your network. For example:
    • Start Address: 192.168.5.2
    • End Address: 192.168.5.254
  4. Since we have our router on 192.168.5.1, the next address that’s available is 192.168.5.2 which is the start, and 192.168.5.254 as the end.
  5. For Do you want to revert to HTTP as the webconfigurator protocol, choose n. No need to use HTTP instead of HTTPS. We’re never going to connect to this without a VPN anyway, so HTTP vs HTTPS isn’t the biggest security deal in the world, but it’s a good practice to use HTTPS whenever possible.

This allows up to 254 devices on your local network, which is more than enough for most home setups. If you have more than 254 devices at home, you’re likely not reading a beginner’s guide from a board repair person cosplaying as a sysadmin.

If you want to go crazy, you can do a different setup entirely: change the LAN IP to something even less common if you want to avoid conflicts, such as 172.16.10.1 as a LAN IP, subnet 24. This would allow 254 devices that would be given IPs such as 172.16.10.2, 172.16.10.30, etc.—and your pfSense router web interface would be accessible on 172.16.10.1. When you connect to other people’s networks, if you don’t disable LAN access in the OpenVPN android client, and their network has a 192.168.1.1, and yours has a 192.168.1.1… You see where this is going. Chances are they don’t have a 192.168.5.1 though.

NOTE: If both your home network and a remote network you’re connecting from via VPN use the same IP range, you can end up with routing & connectivity issues. Let’s say you’re at a coffee shop. You connect via wifi. On their network, you are 192.168.1.3. You connect to your home network via your VPN, and you want to connect to your local mailserver… but you both have the same pos linksys wrt54g router, which defaults everyone to 192.168.1.*. so you try to connect to 192.168.1.3. Do you see where this is going?

Changing your home network to a less common IP range can mitigate this risk. Always check the IP range of networks you frequently connect to and adjust your home network accordingly. Or, just make yours some weird-ass number that nobody else will be using. The latter works for me.
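For reference when picking your “weird-ass number”: it still has to come from one of the three private ranges reserved by RFC 1918, or you’ll collide with real internet addresses:

```text
10.0.0.0/8      10.0.0.0    - 10.255.255.255   (16.7 million addresses)
172.16.0.0/12   172.16.0.0  - 172.31.255.255   (~1 million addresses)
192.168.0.0/16  192.168.0.0 - 192.168.255.255  (65,536 addresses)
```

A /24 carved out of the 10.x.x.x or 172.16-31.x.x space (like the 172.16.10.1 example above) is far less likely to collide with a coffee shop’s network than anything in 192.168.0.x or 192.168.1.x.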

4. Finishing Up[edit | edit source]

At this point, the basic configuration is complete. You can now:

  1. Unplug the monitor, keyboard, and mouse from your pfSense device.
  2. Put away your keyboard and mouse.
  3. Turn your cable modem off for a minute or two, and then plug it back in. Some modems get mad when you plug in a new router.

NOTE: Configuring the LAN IPv4 address and subnet mask sounds confusing if you’re used to plugging in your 50 year old Linksys WRT54G & getting going. It’ll get easier with time, but for now, let’s go over what some of these pieces do. You can always come back to this later.

What is the LAN IPv4 Address? The LAN IPv4 address is the IP address assigned to your router on your local network. All your devices, whether your computer, phone, or smart TV (if you are reading this and still using a smart tv…), use that address as the “gateway” to get to the internet & also to communicate with each other. By default, pfSense assigns 192.168.1.1 as the LAN IP address; this is the norm for most routers.

  • This address is special because it tells devices where to send data when they want to leave your network. For example, if your PC needs to visit apple.com, it sends the request to the router’s LAN IP (192.168.1.1, otherwise known as the gateway), which then forwards it to the internet.
  • If you’re not changing anything, you can stick with the default (192.168.1.1). I change it because everyone uses 192.168.1.1. If you use a VPN or other networks frequently, changing it to something like 192.168.5.1 can avoid headaches down the line. If I am trying to connect to 192.168.1.1 on my home network, but 192.168.1.1 is the gateway IP of the wifi router my phone is connected to at my friend’s house… you see where this gets confusing.

What is a Subnet Mask? A subnet mask is what defines the “size” of your local network. Your LAN is like a neighborhood; the subnet mask is like a property line that goes over how many houses can fit in the neighborhood.

  • The default subnet mask for most home networks is 255.255.255.0. This tells your router that there can be up to 254 devices (playstations, phones, computers, etc) connected to your network. That’s a lot. If you have more than 254 devices in your house, you’re probably not reading this guide.
  • This subnet mask is abbreviated as /24 because the first 24 bits (the 255.255.255 part) of the address are fixed, while only the last 8 bits are available for device addresses.
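
If you want to check the math yourself, it's simple enough to do in any shell: a /24 leaves 32 − 24 = 8 bits for hosts, which is 256 addresses, minus one for the network address (.0) and one for broadcast (.255). A quick sketch:

```shell
# Host math for a /24 network: 8 host bits = 256 addresses,
# minus the network address (.0) and the broadcast address (.255).
prefix=24
addresses=$(( 1 << (32 - prefix) ))   # 256
usable=$(( addresses - 2 ))           # 254
echo "/${prefix} gives ${usable} usable device addresses"
```

The same arithmetic works for any prefix length; a /16, for example, gives 65,534 usable addresses.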

Why Configure a Static LAN IP? When you assign a static LAN IP to your router, you’re making sure that its address never changes. It would make no sense to have a router IP that changes constantly. Your servers & devices all need to connect to the router, so keep the router where it is; moving it around would be akin to Walmart changing its address every day.

  • Imagine your router’s address was constantly changing. One moment it’s at 192.168.1.1, and the next, it’s at 192.168.1.87. Your devices would be as confused as I am when I call a New York state tax office.
  • By giving a static IP like 192.168.5.1 to the router, I’m making sure that everything in your network knows where to go.

Step-by-Step explanation if you’re still confused:

Set Interface IP Address: - When it asks you to “Set interface IP address,” this is where you’re assigning the LAN IPv4 address. Think of it as giving your router its permanent address in your local network. Enter 2 to configure the LAN interface.

Configure the New LAN IPv4 Address: - Here, you’re telling pfSense what address you want to use for the router. For example, 192.168.5.1 makes your router accessible at that address. - Remember: This is the gateway address that all your devices will use to connect to the internet. Write it down somewhere because you’ll need it later to log in to the pfSense web interface.

Enter LAN IPv4 Subnet Bit Count: - This is where you specify the subnet mask in its abbreviated form. For most home setups, the bit count is 24, aka 255.255.255.0. This allows up to 254 devices to connect to your network. If you’re just starting out, stick with /24. - To keep it simple: when you see 192.168.5.0/24, what it means is everything from 192.168.5.1 to 192.168.5.254. - Why not use a bigger subnet? Because you’re reading a beginner’s guide. How about you get one device to work in your broom closet before going for over 254?

IPv4 Upstream Gateway Address: - This is asking if your LAN interface needs a separate gateway to reach the internet. Since your router is the gateway for your LAN, just press Enter to leave this blank. - Your LAN doesn’t need to forward traffic anywhere else because the router handles it.

Configure IPv6 Address for LAN Interface via DHCP6: - You’re not using IPv6. Forget about IPv6 for now. We’ll get to how this makes using your VPN a nightmare later on. If you are not a datacenter or a sysadmin for Amazon Web Services, you have no need for IPv6 in your life at this stage.

5. Accessing the pfSense Web Interface[edit | edit source]

Now that the basic network setup is complete, you can access the pfSense web interface to configure more advanced settings.

  1. On your desktop computer (connected to the LAN port), open a web browser.
  2. Go to https://192.168.5.1 or https://pfsense.home.arpa.
  3. You may see a security warning in your browser. This is because pfSense is using a self-signed SSL certificate, which is fine for local networks. Click “Advanced” and proceed to the site.
  4. Log in with the default credentials:
    • Username: admin
    • Password: pfsense
  5. Once logged in, you’ll be prompted to change the default password. Set a strong password to secure your router.

5.1: Initial Web Setup Wizard[edit | edit source]

  1. Set the Hostname:
  • Choose a hostname for your pfSense router. This can be something simple like “pfsense” or “home-router.” You will be able to access the router at pfsense.home.arpa once we set everything up with DNS later, instead of having to visit the router’s web interface based on its IP address. If you typed roflcopter into this box, you would be able to access your router at https://roflcopter.home.arpa rather than typing in https://192.168.5.1 – you get the idea.
  2. Set DNS Servers:
  • For now, you can use a public DNS provider like Google DNS (8.8.8.8), but we’ll replace this with AdGuard DNS or similar later for ad-blocking.
  • Uncheck the option to “Allow DNS server list to be overridden by DHCP/PPP on WAN,” so your ISP cannot override the DNS settings you choose.
  3. Time Zone:
  • Set the correct time zone for your location (e.g., US Central if you’re in Texas).
  4. Final Steps:
  • Once these settings are configured, hit “Next.” It’ll ask you to configure the WAN interface. Unless you have a funky setup, you need not change anything here. This is not for you to mess with.
  • It’ll ask you to configure the LAN interface again, but you need not touch anything; remember, we already did this, and the settings you entered earlier should be what shows up.
  • It’ll ask you to make a secure password; it is a good idea to set a secure password and save it in a password manager. No post-it note on the monitor nonsense!
  • You’ll be taken to the final page where you can apply the settings and restart the web interface.

6. Final Check and Preparing for the Next Steps[edit | edit source]

At this point, pfSense is fully installed, and the basic configuration is complete. Here are some final steps and checks:

  1. It’s a good idea to restart your cable modem when you make these changes, especially if it was previously connected to another router.
  2. You might want to reset the internet connection on the device you’re using to access the pfSense web interface, especially if it was connected to a different network before.
  3. Before we move forward to setting up additional features (like ad-blocking), make sure your internet connection is stable and working as expected.
  4. Test your internet connection by browsing the web from a device connected to the LAN.
  5. Remember, you can now manage everything through the web interface. You shouldn’t need to directly connect to the pfSense device with a monitor and keyboard again unless something breaks. Put the keyboard, mouse, and monitor plugged into that pfSense device away; we’re (hopefully) never touching that again. If you are, that means something bad has occurred.
  6. If you encounter any issues, re-check everything you did.

Congratulations! Your pfSense router is now set up and ready for use. Now the real fun begins. :)

Setting Up FreeDNS for Dynamic DNS[edit | edit source]

Why Do You Need Dynamic DNS?[edit | edit source]

Your IP address changes.

Your IP address is like your home address or phone number. You want this to be static - as in, doesn’t change. Imagine if all of the road names and highway exits changed each day, or if your friend’s phone number changed every day. This would be a mess. How would you know who to call? It would be very confusing. This is how it is when you have a dynamic IP.

Most of you setting up a home server likely have a residential internet plan from providers like Spectrum, AT&T, or Verizon. Unlike professional hosting services with static IPs, residential plans assign dynamic IP addresses that change as often as the relationship partners of people with borderline personality disorder. This can be a problem when you want to access your home network remotely.

What if you had a speed dial button that automatically kept track of that friend’s changing number, and just allowed you to reach your friend every time you pressed on their name? That’s how a dynamic DNS works.

Even if you DO manage to memorize 33.84.182.1, imagine having to memorize a new number every week. Or every day!

And what if it changes in the middle of the day? Imagine having to check your IP address every day, or calling home & going “hey honey, can you go to whatismyip.com and give me the number so I can add something to my calendar? Thanks!”

That would be horrible.

What you want to do is go to chrisserver.mooo.com or mysite.ddns.net and have it take you right to your server, every time. This is possible because someone else can do the work of keeping track of your router’s IP address and pointing that domain name at it. That someone is a dynamic DNS provider.

This is where Dynamic DNS comes in handy. It automatically updates a friendly hostname to point to your current IP address, so you can always access your home network using a consistent address.

Setting Up FreeDNS[edit | edit source]

Step 1: Register on FreeDNS[edit | edit source]

We’re going to use a service called FreeDNS. It’s free, easy to use, and even has some fun domain options.

1.1 Create a FreeDNS account[edit | edit source]

  1. Visit FreeDNS: Go to freedns.afraid.org.
  2. Register: Click on “Sign up Free” in the lower center of the page.
  3. Fill out form: Fill in the required fields (username, password, and email) and click “Create Account”.
  4. Verify your account by clicking the link in the confirmation email.

1.2 Log into FreeDNS & create subdomain[edit | edit source]

Continue with the steps to set up your subdomain as needed.

This is going to be the “website name” we associate with our home server internet connection. When you visit rossmanngroup.com, this actually means 208.113.140.53. When you type http://rossmanngroup.com in your browser, you’re asking your browser to go to 208.113.140.53 and knock on port 80 to serve us a website. When you type https://rossmanngroup.com in your browser, you’re saying we’re going to 208.113.140.53 and knocking on port 443 to be served a website with https/ssl.

The subdomain enclosed in red in the screenshot above is the first part of the website name, and the domain enclosed in green is the second part of the website name. The destination enclosed in blue is where our combined website name leads us. So, louishomeserver.chickenkiller.com in the configuration above, would lead us to 8.8.8.8

  1. After clicking the activation link from the FreeDNS email, you should be immediately logged in. You should save the username & password they gave you in a password manager.
  2. Add a New Subdomain: Once logged in, click on “Add a subdomain” in the middle of the screen from the main menu. Or, click Subdomains on the left side menu.
  3. Fill out the fields:
    1. Subdomain: Choose a custom name (e.g., “louishomeserver”). That’s the part I circled in red in my screenshot above.
    2. Domain: Select one of the available free domains (e.g., chickenkiller.com). This is the green field in my screenshot above. You can get your own pretty, custom named .com address, but you’ll have to pay for it.
    3. Destination: Here’s the trick - put in a WRONG IP address on purpose (e.g., 8.8.8.8). This will help us confirm if our setup is working later.
  4. The entire point of this is for our router to constantly update FreeDNS by telling it what our IP address is. If we put what our IP address is RIGHT NOW in this field, we won’t know for sure if pfSense is working properly with FreeDNS. We’d have to debug it through log files. Ew.
  5. Click “Save” to create your hostname.

NOTE: Setting an incorrect initial IP address lets us test that pfSense is correctly updating the dynamic DNS entry. This diagnostic step is an important one; screw things up & make sure that the system you put into place to auto-fix-it fixes it. This is far less dangerous than the alternative, which is “assuming that it works.”

1.3 Get the update URL from FreeDNS[edit | edit source]

The update URL is the URL pfSense will access to tell FreeDNS that your domain name’s IP address should change to the IP address your router is accessing that FreeDNS URL from.

  1. After saving, click “Dynamic DNS” from the upper left menu of choices.
  2. You’ll see your new subdomain at the bottom.
  3. Right-click on the “Direct URL” link next to your hostname and copy the link address.
  4. This URL is how we will update our IP address automatically. DO NOT SHARE THIS WITH ANYONE OR THEY WILL BE ABLE TO MESS WITH YOU ENDLESSLY BY CHANGING THE IP THAT YOUR NEW WEBSITE NAME ATTACHES TO AWAY FROM YOUR SERVER!!

NOTE: The Direct URL contains what is like a “password” necessary for updating your dynamic DNS record. Keep this URL secure and don’t share it publicly unless you want your dynamic DNS domain name redirecting to goatse.
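
To demystify what that Direct URL actually does: the updater simply fetches it, and FreeDNS uses the source IP of the request as the new record value. pfSense handles this for you in a later step; on a plain Linux box you would do the equivalent with a cron entry like this sketch (EXAMPLETOKEN is a made-up placeholder for the secret part of your Direct URL after the “?”):

```shell
# Illustrative crontab entry -- pfSense's Dynamic DNS service does this for
# you, so you would only ever do it by hand on a generic Linux machine.
# "EXAMPLETOKEN" is a placeholder; your real token comes from the Direct URL
# and must be kept secret.
*/15 * * * * curl -fsS "https://freedns.afraid.org/dynamic/update.php?EXAMPLETOKEN" >/dev/null
```

This is purely to show there's no magic involved: one HTTPS request, made from your home connection, is the whole update mechanism.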

1.4 Leave FreeDNS page open & make sure it has the WRONG IP for you.[edit | edit source]

  1. Keep open either the dynamic DNS page on FreeDNS OR the subdomains page on FreeDNS. Make sure the IP address is as we entered before, which is 8.8.8.8.
  2. IT IS IMPORTANT THAT THIS IP ADDRESS NOT BE YOUR IP ADDRESS! WE WANT IT TO BE WRONG!
  3. Make sure it is still set to the 8.8.8.8 I told you to set it to before.
  4. If it is not, set it to 8.8.8.8.
  5. Reload both pages. Still 8.8.8.8? Good.
  6. We want this to be wrong – it changing from “wrong” to “not wrong” when we finish our work will mean that our setup works.
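
You don't have to trust the FreeDNS web page alone; you can ask DNS directly from a terminal with the same nslookup tool we'll use again later. Here's a sketch (the hostname is my example; substitute yours) of pulling the answer out of nslookup's output, shown against a captured run so you can see which line matters:

```shell
#!/bin/sh
# Extract the answer IP from nslookup output: ignore the "Address: ...#53"
# line describing the resolver itself, and take the Address line that
# follows the Name line of the actual answer.
parse_answer() {
  awk '/^Name:/{found=1} found && /^Address:/{print $2; exit}'
}

# Captured output from a hypothetical run of:
#   nslookup louishomeserver.chickenkiller.com
parse_answer <<'EOF'
Server:         192.168.5.1
Address:        192.168.5.1#53

Non-authoritative answer:
Name:   louishomeserver.chickenkiller.com
Address: 8.8.8.8
EOF
# prints 8.8.8.8 -- still the deliberately wrong IP, which is what we want
```

In practice you'd pipe the real command through it: `nslookup yourname.chickenkiller.com | parse_answer`. At this stage the answer should still be 8.8.8.8.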

Why are we doing things this way? It takes an insignificant amount of extra time to do things like this, as a check against everything; from software glitches to my own carelessness and absent-mindedness.

Starting with the assumption that nothing works properly allows us to figure out at the very beginning if everything has been configured properly. Starting with the assumption that nothing works will allow us to figure out if our system ACTUALLY works BEFORE WE NEED IT TO WORK!!

You’ll see when we are testing certain features and functionalities and software like syncthing later that this comes in very handy. In the video, a connection will work & have a green checkbox the first time, but local discovery will not work the second time, even though the server & client IPs remain unchanged.

NEVER ASSUME A FIX WORKS WITHOUT BREAKING THE THING IT IS SUPPOSED TO FIX AND SEEING WHAT IT DOES.

Assume that nothing works, especially your own brain, and you will create systems that guard against much more than your own human error!

Step 2: Configuring pfSense for Dynamic DNS[edit | edit source]

pfSense has to talk to FreeDNS regularly to tell it our IP.

2.1 Log into pfSense[edit | edit source]

Open the pfSense web interface and log in, at https://192.168.5.1 or https://pfsense.home.arpa

2.2 Enter Dynamic DNS settings[edit | edit source]

In the pfSense dashboard, there is a menu on the top. Go to Services > Dynamic DNS.

2.3 Enter Dynamic DNS entry[edit | edit source]

  1. Click the “+ Add” button to add a new entry.
  2. Configure the Settings:
    • Service Type: Select “Custom” from the dropdown. This might seem counterintuitive since “freedns” exists as an option in this dropdown menu, but trust me here. You trust me… right? This is the green box in the screenshot I provided above.
    • Interface to Monitor: Select WAN (your external internet connection), this is the part circled in purple above.
  3. Interface to send update from: Select WAN, the part circled in purple above.
  4. HTTP API DNS Options: I check “Force IPv4 DNS resolution” because I have been scarred by my residential internet service provider’s issues with IPv6 before Spectrum bought Time Warner Cable. You don’t have to check this, but I check it because I hate IPv6 & have it turned off entirely in my own setup. I shouldn’t be passing my prejudices onto my children. But here I am passing this one onto you :’(
  5. Update URL: Paste the Direct URL you copied from FreeDNS. Everything after the question mark in this URL is like your password and username combined.
   IF YOU ARE USING CLOUDFLARE, you need to use your Zone ID as the username and the token you just created (with the Zone.DNS - Edit permission) as the password. Otherwise, if you use the token alone, the status will always appear green, but you won’t be able to connect. You might end up spending four hours, like I did, debugging all sorts of issues until you finally find the answer in a four-year-old Reddit post. Also, make sure you disable proxy on cloudflare.
  6. Max Cache Age: When will this run? By default, this runs when an update is forced by you or when the router notices the WAN address (the IP address your ISP assigned to you) has changed. I see no harm in having it also update once per day. If there’s some stupid bug where it tried to update & failed because the wind was blowing the wrong way, packet loss, etc., it costs literally nothing to retry. We’re in 2024; even people in the middle of nowhere have 768/128k DSL. There is zero downside to setting this to check at the minimum allowed interval of once per day. “Inspect what you expect,” as my stepmom, who was director of the Brookfield library, would say. :) She could tell you firsthand that nothing you expect people (OR COMPUTERS) to do will they actually do.
  7. Description: Add something like “FreeDNS IP Update” to remember what this is for.
  8. Save the Configuration: Click “Save and force update” to store your Dynamic DNS settings.

If it went well, the two areas I circled in red above should look similar to mine. A green checkmark under “status”, and the “cached IP” should be your actual IP address that you see when you go to a site like whatismyip.com

Step 3: See if Dynamic DNS actually works[edit | edit source]

We purposely put an incorrect IP of 8.8.8.8 in there rather than our real IP address to make sure this actually works. Now we’re going to see what happens when we try to get it to work.

  1. Go to the pages I had you keep open before, the dynamic DNS page on FreeDNS OR the subdomains page on FreeDNS.
  2. The IP was 8.8.8.8 before. Has it changed to the IP address that you see when you visit whatismyip.com, that is the WAN address in pfSense? If it is, you did good.
  3. Another way: force an IP change by disconnecting and reconnecting your home internet connection, so your ISP assigns you a new IP address.
    1. You can do this by rebooting your modem or temporarily disconnecting your internet connection.
    2. Sometimes, you can’t get a new IP from your ISP immediately, and that’s ok!
  4. As long as you input an incorrect IP address into the FreeDNS field for your subdomain when adding your subdomain to FreeDNS, and you saw it change to your WAN IP when you set up FreeDNS dynamic DNS in pfSense, you are fine.

Verify DNS Resolution[edit | edit source]

To make sure your new hostname resolves to your home IP address, perform a DNS lookup from any device:

  1. Open a Terminal or Command Prompt:

    • On Linux or macOS, open Terminal.
    • On Windows, open Command Prompt.
  2. Run an nslookup Command:

    nslookup louishomeserver.chickenkiller.com

Replace louishomeserver.chickenkiller.com with your actual hostname.

  1. Verify the Result:

    • The output should show your current public IP address associated with your hostname.
    • This confirms that your dynamic DNS is working correctly.
    • You could also just use ping.
    ping louishomeserver.chickenkiller.com

Does it ping your IP address? You’re good.

Why This Setup Is Important[edit | edit source]

With this dynamic DNS setup, you no longer have to remember or manually track your public IP address, even when it changes. By using a hostname like louishomeserver.chickenkiller.com you can always access your home network remotely, no matter where you are or how often Spectrum goes down & changes your IP on you.

This is useful for accessing home servers or services from outside your network via OpenVPN (next section!). pfSense Dynamic DNS service with FreeDNS makes sure that my chosen hostname always points to my current IP address. No matter how often my IP changes, I don’t have to change configuration settings in my programs.

OpenVPN: Setting up Secure Access from Anywhere[edit | edit source]

Why OpenVPN? Why do I need this?[edit | edit source]

Because opening ports for personal use is a bad idea!

“but louis, every website & hosting provider opens ports!”

Webhosts and datacenters open ports so that millions of people can access their services. You’re opening ports to access a porn server in your closet. You’re not the same.

Listing the ports we’d have to open.[edit | edit source]

Each one of these things needs its own open port on your router. That’s like having a house with 15 different doors, each one made of cardboard with a cutout in the middle that lets anyone see in. No, we’re not doing that.

  • Immich to do machine learning on your photos, because your self-image isn’t bad enough as it is.
  • Home Assistant to pretend you’re Tony Stark
  • Syncthing because screw Google.
  • MailCow because you think you can run email better than Google (if you’re reading this guide, you probably can’t)
  • Frigate to catch your neighbor stealing your packages
  • OnlyOffice because you’re too cheap for Microsoft 365
  • FreePBX because… actually, I don’t know why you’d torture yourself with that. Lenny makes it worth it. Maybe

Why Opening Every Port is Dumber Than an 820-2330 Macbook’s hinge design[edit | edit source]

Here’s why exposing all of this directly is a terrible idea:

You’re Advertising What You’re Running: Any script kiddie with a port scanner can see exactly what you’re running.

Your Software is Probably Full of Holes: These projects are great, but many of them have 10,000 users and are maintained by one person in their spare time; five of those users believe a $3 donation entitles them to 25 years of updates & bugfixes, and think that asking to feed yourself off of your own work is too much.

If I were smart (and evil), I could make a list of:

  • Every IP address
  • What software they ran
  • What version they ran

Then, I’d keep up with exploits/vulnerabilities that are announced in the news. I’d go back to my list, double check who’s running that software, and see if the exploits work. At best, you become part of a botnet and waste some electricity mining my crypto. At worst, I’ve stolen all of your data & use it to blackmail you.

I like these programs; they’re fun software! But, similar to my taste in relationships; it isn’t about who I like. It’s about who I trust. The software I have the most fun with isn’t who I’d trust with banking credentials (or my future children). Maybe I got that the wrong way around….

OpenVPN: Only 1 Port to open, with better security:[edit | edit source]

One Port to Worry About: Instead of 15 points of failure, we have one potential point of failure.

NOTE: OpenVPN uses a single port for all traffic, which is usually port 1194 UDP. Most OpenVPN servers will default to port 1194. Make sure your ISP didn’t block this. Bad ISPs will block ports commonly used for running servers so that you have to pay 5x as much for the same internet with a “business” (extortion) plan. I paid $409.99/mo for 10 mbps upstream when I had a store in New York; hint, you’re not paying extra for better internet.

Stealth Mode: To the outside world, you’re just running OpenVPN. They can’t see your unpatched version of hellanzb from 2007. (shout out to pjenvey if he’s reading this today!)

OpenVPN security in four pictures:[edit | edit source]

Here is what it’s like opening ports to a bunch of random open source projects people make in their spare time:

Here is what it’s like only opening a port for OpenVPN.

When you use OpenVPN, you are opening one port to get access to your network, with a door that many commercial interests have a stake in keeping strong. When you open ports for random crap, you have windows people can look through, and doors that look like… Well… Yeah. And 2 guys watching them.

Decreasing Attack Surface with OpenVPN is a best practice[edit | edit source]

OpenVPN isn’t a hobby project coded by your cousin’s methhead roommate. This is used by everyone, from companies with more money than sense to just about anyone who doesn’t want their data plastered all over the internet:

  • Having ONE service open to the public rather than 10 means a smaller attack surface.
  • OpenVPN is designed with one purpose in mind, a secure connection.
  • It is over 20 years old.
  • Commercial interests (aka people actually paying money for software they rely on for their infrastructure, not this guy) use & rely on it.
  • There are more eyes on the code of OpenVPN than hellanzb.

Marketing wankery? …Kind of, but they’re not lying here.

Is this 100% accurate? No. Are more people, for whom millions of dollars ride on the security of their software, using OpenVPN than hellanzb? Yes!

Having a home server is cool. But the programs we’re talking about are used by 0.0001% of 0.000001% of the world. OpenVPN can still have vulnerabilities; it isn’t perfect! But remember, in the world of network security, nothing is perfect! This isn’t about being perfect. It’s about controlling what we can control, and minimizing risk & attack surface every chance we can. A UFC fighter makes a better bodyguard than a mall cop, regardless of the fact that they’re equally useless against a bomb or a comet.

This guide walks you through the process of setting up OpenVPN on pfSense. OpenVPN allows you to access your home network as if you were there.

All of the services we want to use require having access to this network we are placing our server on, from anywhere. This setup will make sure that all traffic from the phone is routed through the VPN with no DNS leaks, which will be important for our adblocking-via-router section that comes after.

Setting up OpenVPN within pfSense for secure access[edit | edit source]

Step 1: Install OpenVPN Client Export package in pfSense[edit | edit source]

This will make it way easier for us to create the files necessary for clients to connect. You click a button and it’ll generate a file that you put on your phone or laptop. You open the OpenVPN client, import the file, put in your username & password, & boom – you’re set.

1.1 Log into pfSense:[edit | edit source]

1.2 Install the package[edit | edit source]

  • Go to System > Package Manager > Available Packages.
  • Search for “openvpn-client-export”.
  • Install the OpenVPN Client Export Utility.

Step 2: Set up Certificates[edit | edit source]

2.1 – Make a Certificate Authority[edit | edit source]

The Certificate Authority (CA) is what signs and verifies the server and client certificates used to establish secure connections. You don’t have to have any idea what that means to use a VPN. Here’s how we create the CA in pfSense:

  1. Log into pfSense: open your browser, go to your pfSense IP address (e.g., https://192.168.5.1 or https://pfsense.home.arpa), and log in with your credentials (default: admin / pfsense unless changed).
  2. Navigate to the Certificate Manager: go to System > Cert Manager in the top navigation menu.
  3. Create a new CA: under the CAs tab, click the + Add button to create a new Certificate Authority.
  4. Fill in the CA details:
    • Descriptive Name: OpenVPN-CA (or any name you choose)
    • Method: Create an Internal Certificate Authority
    • Key Length: 4096 bits (recommended for strong security)
    • Digest Algorithm: SHA-512 (for secure hashing)
    • Lifetime (days): 3650 (about 10 years)
    • Distinguished Name:
      • Country Code: Your country’s two-letter code (e.g., US for the United States)
      • State or Province: Your state or province
      • City: Your city or locality
      • Organization: Your organization name
      • Common Name: OpenVPN-CA (or another descriptive name)
  5. Save the CA: click Save.
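
If you're curious what clicking those buttons actually does under the hood, it's roughly the same as generating a self-signed certificate with plain openssl: a 4096-bit RSA key plus a certificate valid about 10 years. This is just a sketch to demystify the GUI; the file names and subject fields here are made up, and pfSense manages its own keys internally:

```shell
#!/bin/sh
# Roughly what "Create an Internal Certificate Authority" does: generate a
# 4096-bit RSA key and a SHA-512 self-signed certificate good for ~10 years.
# File names and subject fields are illustrative, not what pfSense uses.
openssl req -x509 -newkey rsa:4096 -sha512 -days 3650 -nodes \
  -keyout OpenVPN-CA.key -out OpenVPN-CA.crt \
  -subj "/C=US/ST=Texas/L=Austin/O=Home/CN=OpenVPN-CA"

# Inspect the result -- the subject should show CN = OpenVPN-CA:
openssl x509 -in OpenVPN-CA.crt -noout -subject
```

You never need to run this yourself for this guide; it's only to show that a "Certificate Authority" is an ordinary key pair plus a self-signed certificate, not magic.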

2.2 - Creating the OpenVPN Server Certificate[edit | edit source]

Next, create the server certificate that the OpenVPN server will use for secure client connections.

  1. Navigate to the Certificates tab in Cert Manager.
  2. Click + Add/Sign to create a new certificate.
  3. Fill in the server certificate details:
    • Method: Create an Internal Certificate
    • Descriptive Name: OpenVPN-ServerCert – name it something that makes it easy to identify as a SERVER certificate later for OpenVPN
    • Certificate Authority: Select OpenVPN-CA (the CA you just created)
    • Key Length: 4096 bits
    • Digest Algorithm: SHA-512
    • Certificate Type: Server Certificate.
      WARNING: Make sure you do not leave this set to user certificate, which is the default option.
    • Lifetime (days): 3650
    • Distinguished Name: Match the details you used for the CA
    • Common Name: louis.chickenkiller.com (you can use whatever you put for your dynamic DNS domain name here)
  4. Click Save. You should now see OpenVPN-ServerCert listed under the Certificates tab.

2.3 Create a VPN Group for your VPN users[edit | edit source]

To connect a device like your Android phone to the VPN, you’ll create a group for VPN users, then a user account with an associated client certificate.

  1. Log into pfSense.
  2. Open User Manager: go to System > User Manager.
  3. Add a new group:
    • In the Groups tab of User Manager, click the + Add button to create a new Group.
    • Group name: Choose a group name that makes sense for VPN users (e.g., vpnusers).
    • Click Save.

2.4 Create a VPN user[edit | edit source]

  1. In the Users tab of User Manager, click the + Add button to create a new user.
  2. Fill Out the User Information:
    1. Username: Choose a username (e.g., vpnuser1).
    2. Password: Enter a strong password.
  3. Add the user to the vpnusers group you just made.
  4. For Certificate, check “Click to create a user certificate”. DO NOT FORGET TO CREATE A USER CERTIFICATE FOR THE USER.
  5. Create a name for the user certificate, such as vpnuser_client_cert so you can recognize it as the USER cert later.

BEFORE YOU HIT SAVE on adding a new user account:

  1. Scroll to the Certificates section of the user creation form:
  2. Click + Add to generate a new certificate for this user.
  3. Configure the User Certificate:
    1. Certificate Authority: OpenVPN-CA
    2. Key Length: 4096 bits
    3. Digest Algorithm: SHA-512
  4. Save the user with the certificate: click Save.
  5. Verify user creation: you should now see the user listed under System > User Manager > Users.

Step 3: Configure OpenVPN Server[edit | edit source]

3.1 Open the OpenVPN Wizard, and set settings according to what you see below in section 3.2 and in images above[edit | edit source]

  1. Log into pfSense:
  2. Go to VPN > OpenVPN.
  3. Click on the Wizards tab.
  4. Fill out details according to what you see above. Keep in mind that when you are DONE, you will have to go back in & edit settings for that VPN server that were NOT EDITABLE while you were creating the VPN.

3.2 OpenVPN Server Configuration[edit | edit source]

After you have finished, go back and edit that server you just made to make sure all of this matches:

  1. Description: openvpn server itself
    • This is for your reference only. You can name it something descriptive like “HomeVPN” or “MyVPNServer.”
  2. Protocol: UDP on IPv4 only
    • UDP is faster and more efficient for VPN traffic, and IPv4 only is typically sufficient unless you have a specific need for IPv6.
  3. Interface: WAN
    • This setting makes sure that your OpenVPN server will listen for incoming VPN connections on the WAN interface.
  4. Local Port: 1195
    • Default is 1194 and TOTALLY FINE. I chose 1195 because I already use 1194 for another system.
  5. TLS Authentication: Enabled

Cryptographic Settings

  1. DH Parameters Length: 4096 bits
    • Stronger than the default 2048-bit encryption.
  2. Data Encryption Algorithms:
    • The following algorithms are listed in the priority you selected:
      • AES-256-GCM
      • AES-128-GCM
      • CHACHA20-POLY1305
  3. Fallback Data Encryption Algorithm: AES-256-CBC
    • Used for compatibility if a client doesn’t support GCM encryption algorithms.
  4. Auth Digest Algorithm: SHA-512
    • SHA-512 provides a high level of integrity protection for your VPN packets, making sure that the data hasn’t been altered.
  5. Hardware Crypto: Intel RDRAND engine - RAND

Tunnel Settings

  1. IPv4 Tunnel Network: 192.168.6.0/24
    • This is the virtual network that your VPN clients will use.
  2. Redirect IPv4 Gateway: Checked
    • This forces all client traffic through the VPN tunnel.
    • IF, for some reason, you have changed the Outgoing NAT to Manual, you'll have to add the outgoing NAT rule yourself.
  3. IPv4 Local Network: 192.168.5.0/24
    • This allows VPN clients to access your local network.
  4. Allow Compression: Refuse any non-stub compression (Most Secure)
  5. Type-of-Service: Unchecked
  6. Inter-Client Communication: Unchecked
  7. Duplicate Connections: Unchecked
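A quick sanity check on the two subnets above: the tunnel network and your local network must not overlap, or routing between VPN clients and your LAN will break. A minimal sketch using Python's standard ipaddress module, with the values from this guide:

```python
import ipaddress

# Values from the Tunnel Settings above
tunnel = ipaddress.ip_network("192.168.6.0/24")  # IPv4 Tunnel Network (VPN clients)
lan = ipaddress.ip_network("192.168.5.0/24")     # IPv4 Local Network (your LAN)

# If these overlapped, pfSense couldn't route cleanly between them
networks_overlap = tunnel.overlaps(lan)
print("overlap:", networks_overlap)  # → overlap: False
```

If you picked a different LAN subnet, swap in your own values; the check should still print False.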

Client Settings

  1. Topology: Subnet
  2. DNS Default Domain: newvpn
  3. DNS Server 1: 192.168.5.1
  4. DNS Server 2: 94.140.14.14 (AdGuard DNS)
  5. DNS Server 3: 94.140.15.15 (another AdGuard DNS server)

Advanced Client Settings

  1. Dynamic IP: Checked

  2. Advanced Configuration:

    • Custom Options:

      tun-mtu 1200; mssfix 1160; push "dhcp-option DNS 192.168.5.1";
  3. Gateway Creation: IPv4 only

    For the Gateway creation OpenVPN server setting: CHOOSE IPv4 only. This will save you lots of hassle and misery later! Explanation at the end of the OpenVPN section.
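The numbers in the Custom Options above fit together: mssfix is conventionally set to the tunnel MTU minus 40 bytes (20 for the IPv4 header plus 20 for the TCP header). A small check of that arithmetic, assuming those typical header sizes:

```python
# Values from the Custom Options above
tun_mtu = 1200   # MTU of the tunnel interface
mssfix = 1160    # maximum TCP segment size pushed to clients

# 20 bytes IPv4 header + 20 bytes TCP header = 40 bytes of overhead,
# so TCP payloads are clamped to fit inside one tunnel packet
overhead = 20 + 20
assert mssfix == tun_mtu - overhead
print("mssfix checks out:", mssfix)
```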

NOTE: Let’s talk about RDRAND. This is the hardware random number generator (RNG) built into Intel processors. It’s fast, easy to use & pfSense might already be using it. WARNING: For 99% of the people reading, this will be a total waste of time.

1. What is RDRAND? Why care?

RDRAND makes random numbers using your CPU, but it’s a closed-source black box. You can’t see how it works, and there have been concerns that some hardware random number generators might not be as random as you’d like. There are all sorts of pissing matches going on over this stuff on the internet by people way smarter than you or I.

Point being, if you care about privacy or you’re handling sensitive data, you might not want to rely solely on a system you can’t inspect. At the same time, if you’re reading this guide, you’re enough of a newbie that RDRAND is not going to be how someone “gets” you.

2. Why not use just RDRAND?

While it is fast, if the hardware random number generator fails or is compromised, your security degrades without you noticing. A VPN depends on top-notch randomness for encryption, so you want more than one source of entropy to stay safe.

3. How do I make it safer?

pfSense already mixes entropy from several sources, including RDRAND. In most cases, you’re good to go.

4. Should I disable it?

Probably not. RDRAND is fine. Think of it as an ingredient rather than the entire thing.

5. Then why did you mention it?

The “uhm, akshually” people. They’re in the bushes, always waiting.

TL;DR: RDRAND isn’t bad, but don’t trust it alone. Let pfSense do its thing and mix it with other entropy sources. If you’re running anything highly sensitive and don’t like trusting Intel, you can disable it—but for most people, you’ll be fine with the default settings.
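To illustrate why mixing entropy sources is safe: XOR-combining a suspect stream with an independent good stream is at least as unpredictable as the better of the two. This is a toy sketch of the principle, not the algorithm pfSense’s random number generator actually uses:

```python
import os

def mix(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))

good = os.urandom(32)    # an honest, unpredictable entropy source
compromised = bytes(32)  # worst case: an attacker-known, all-zero stream

mixed = mix(good, compromised)
# XOR with a known stream doesn't destroy the good source's randomness:
assert mixed == good
```

This is why RDRAND as one ingredient among several is fine, even if you distrust it in isolation.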

Step 4: Get .ovpn file to connect your phone to the VPN[edit | edit source]

4.1 Export the OpenVPN Client Configuration for Your Android Device[edit | edit source]

  1. Go to VPN > OpenVPN > Client Export.
  2. For “remote access server,” choose the OpenVPN server you made.
  3. For “Host Name,” enter the URL you made on FreeDNS for dynamic DNS. In our case, this was louishomeserver.chickenkiller.com.
  4. Under Export Type, choose Android - OpenVPN Connect.
  5. Download the configuration file (e.g., vpnuser1-android.ovpn).

4.2 Import the Configuration into OpenVPN Connect on Android – SECURELY!!!!![edit | edit source]

  1. Transfer the .ovpn file to your Android device. DO THIS SECURELY.
  2. Install the OpenVPN Connect app from the Play Store.
  3. Import the configuration file and connect to the VPN.

VPN connectivity can be done with a certificate alone, without a username or password. This means that if you misconfigured something, and this file gets into the wrong hands, any Tom, Dick or Harry has access to your home network!

Don’t upload the file to public file transfer sites

Don’t do this. Do not store the key to your front door on megaupload.

Instead, do this:

  • Connect your phone directly to your computer with a USB cable to transfer the file; simple and secure.
  • Or, use an encrypted messenger you trust. Just make sure it’s actually secure, not just convenient.

Why the extra caution?

  • This .ovpn file is sensitive. It’s part of what allows access to your server.
  • If someone gets this file & figures out your password, they’re in. Not good.
  • And if there’s a config mistake (it happens), they might not even need the password.
  • Without this file, even if someone knows your username & password, they’re not getting in.
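One way to see how much this file carries: pfSense’s client export typically embeds the client certificate and private key inline in the .ovpn file itself. A hedged sketch that scans an export for the inline sections holding key material (the tag names follow OpenVPN’s inline file format; the profile below is synthetic sample text, not a real export):

```python
def find_inline_secrets(ovpn_text: str) -> list:
    """Return the inline OpenVPN sections present that contain secret key material."""
    sensitive = ["key", "tls-auth", "tls-crypt", "pkcs12"]
    return [tag for tag in sensitive if f"<{tag}>" in ovpn_text]

# Synthetic stand-in for an exported vpnuser1-android.ovpn
profile = """client
remote louishomeserver.chickenkiller.com 1195 udp
<cert>
...certificate here...
</cert>
<key>
...PRIVATE KEY HERE...
</key>
"""

print(find_inline_secrets(profile))  # → ['key']
```

If that list is non-empty (and for a typical export it will be), the file itself is a credential and should be moved around like one.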

Treat this file like your bank details. Don’t put it on a post-it note to the 4:3 monitor in front of your Windows XP Service Pack 1 computer.

Don’t leave it lying around in your downloads folder. Don’t share it casually.

The chances of someone intercepting this file and using it maliciously are low, but we don’t take unnecessary risks with security. It’s not paranoia, it’s good practice.

Do it right, and you’ll save yourself potential headaches down the road. Plus, you’ll have the satisfaction of knowing you’ve set things up properly.

4.3 Edit Settings on OpenVPN Android Application[edit | edit source]

  1. Open the OpenVPN Connect application.
  2. Go to the three lines in the upper left corner and tap Settings.
  3. Scroll down to Advanced Settings.
  4. Switch security level from “legacy” to “preferred”.
  5. Uncheck “DNS fallback”.

NOTE: Disabling “DNS fallback” keeps the VPN connection from falling back to non-ad-blocking (and usually Google) DNS when something fails. When your setup breaks, I want you to KNOW - by way of it not working. I don’t want it to training-wheels you back to a working setup using Google’s DNS.

You now have an OpenVPN server on pfSense you can connect to from anywhere in the world; your Android device will have all its traffic routed through the VPN. You’ll fully benefit from pfBlockerNG’s ad-blocking via IP blocking and DNS domain name blocking when you’re logged in through the VPN, and you’ll have access to all of the services we will be setting up for calendar, contacts, email, backup, office, home automation & surveillance, business phone, password management & more.

IPv4 vs IPv4+IPv6 & VPN nightmares:[edit | edit source]

Choosing IPv4 + IPv6 can cause issues. I’ve seen it cause random disconnects about 10 minutes into a connection, which is miserable to figure out.

In my case, I am combining two of the worst things in the world: American residential cable broadband & T-Mobile on a Pixel phone. I lose 5G when I walk under a tree, and my internet goes down more often than yours.

Why using IPv4 & IPv6 with OpenVPN for this setup is discouraged.[edit | edit source]

Enabling both IPv4 and IPv6 may be the way to go for enterprise-class connections. If you’re reading this, you might be stuck on horrible residential broadband & unable to pick a better ISP. In these environments, the 1% benefit IPv6 enables is outweighed by problems like these:

  1. NAT64/DNS64 Compatibility Issues: Mobile networks often use NAT64/DNS64 for IPv6-only networks. This can clash with your VPN’s IPv6 routing, causing random failures.
  2. Path MTU Discovery (PMTUD) Quirks: IPv6 relies heavily on PMTUD. If there are issues along the path, you can have connectivity problems that are hard to diagnose.
  3. ISP IPv6 Implementation: Some ISPs (e.g., Spectrum) have less-than-great IPv6 implementations. This can lead to unstable connections when you’re trying to use both IPv4 and IPv6.
  4. Dual-Stack Timeout Issues: When both protocols are available, your devices might try connections on both. If IPv6 is unstable, you’ll experience timeouts and apparent connection failures. THIS MAKES UP FOR ANY & ALL POTENTIAL BENEFITS OF IPv6, WHICH YOU WILL NEVER NOTICE IN EVERYDAY USAGE.
  5. Carrier-Grade NAT (CGN) Interactions: The interplay between CGN for IPv4 and IPv6 routing through your VPN can lead to connection state inconsistencies.

The Practical Solution[edit | edit source]

You have two main options:

  1. Live In a Nightmare: Dive deep into network engineering, potentially spend $150,000 backhauling fiber to your house to get around your horrible cable company.
  2. A Practical Approach: Click “IPv4 only” in OpenVPN server settings.

Option #1 can gargle my balls.

Setting Up pfBlockerNG for Ad-Blocking in pfSense[edit | edit source]

Why adblock at the router?[edit | edit source]

Why not?? Isn’t this beautiful?

louis@happycloud:~/Downloads/frigate$ ping googleadservices.com
ping: googleadservices.com: Name or service not known

Seeing Name or service not known trying to contact a google ad server warms my heart. :D

Ad-blocking at the router level offers several advantages:

  1. Simplicity: Instead of installing ad-blockers on every device, you can block ads network-wide.
  2. Complete coverage: Blocks ads on devices where traditional ad-blockers can’t be installed (smart TVs, Android/iOS apps). Somewhere, there is probably some piece of garbage application that has an ad in it that you can’t install ublock origin onto. What if it were blocked from connecting at the router level?
  3. Control: You can manage internet connectivity and ad-blocking for all connected devices from a single point.

We’ll use two methods for blocking:

  • IP address blocking - blocking 103.31.6.184
  • Domain name blocking - blocking googleadservices.com

This dual approach makes ad-blocking more effective, as it covers both static IP addresses and changing domain names associated with ad servers.
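Conceptually, the two methods act like this toy model: DNS blocking answers queries for blocked names with a harmless sinkhole address, while IP blocking drops traffic to known ad-server addresses even when the name resolved honestly. In pfBlockerNG the real work is done by Unbound and firewall rules; the addresses below are just the examples from this guide (10.10.10.1 is the DNSBL virtual IP we configure later):

```python
BLOCKED_DOMAINS = {"googleadservices.com"}   # domain name blocking (DNSBL)
BLOCKED_IPS = {"103.31.6.184"}               # IP address blocking
SINKHOLE_IP = "10.10.10.1"                   # DNSBL virtual IP

def resolve(domain: str, real_answer: str) -> str:
    """DNS layer: blocked names get the sinkhole instead of their real address."""
    return SINKHOLE_IP if domain in BLOCKED_DOMAINS else real_answer

def firewall_allows(ip: str) -> bool:
    """IP layer: catches ad servers even if DNS answered honestly."""
    return ip not in BLOCKED_IPS

print(resolve("googleadservices.com", "142.250.0.1"))  # → 10.10.10.1
print(firewall_allows("103.31.6.184"))                 # → False
```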

Step 1: Measure our Baseline[edit | edit source]

1.1 Install stock Google Chrome[edit | edit source]

No ad-blocking extensions, no privacy protections. We want to test our ROUTER’S ability to block ads – not our browser’s. The browser is going to be the “constant” here. In an ideal setup, we want to block ads at the router level (which we CAN control) in order to not see ads in random Android apps & unreliable smart TVs (which we can’t always control).

You won’t always be able to block ads with certain hardware or software. And even if you can, can your boyfriend, your mother-in-law, your kids? Imagine having kids that grow up in a household with no ads. :)

Don’t use your normal web browser with all the ad-blocking stuff built-in because then we can’t tell if what we did actually worked. We’re starting by installing stock, vanilla Google Chrome, no extensions installed, and running a couple of quick tests. Something tells me Google’s business model isn’t going to provide us an ad-free web browsing experience by default…

1.2 Run adblock & DNS tests[edit | edit source]

My Initial results:

  • Ad-block tester: 38 points out of 100
  • D3Ward Ad Block testing: 6 blocked out of 135
  • DNS: Using home device (pfSense DNS resolver)

Your mileage will vary.

Step 2: Install pfBlockerNG[edit | edit source]

  1. Log in to your pfSense web interface.
  2. Navigate to System > Package Manager > Available Packages.
  3. In the search bar, type “pfBlockerNG”.
  4. Find pfBlockerNG-devel and click the Install button (you want the devel version because it receives more updates &, as AvE would say, is more betterer).
  5. Wait for the installation to complete.

Step 3: Configure pfBlockerNG General Settings[edit | edit source]

  1. After installation, go to Firewall > pfBlockerNG.
  2. Under General Settings:
    1. Enable pfBlockerNG: Make sure this is checked.
  3. Click IP next to general.
  4. For Outbound Firewall Rules, make sure both LAN and OpenVPN interfaces are selected for REJECTING.
  5. I had you set up OpenVPN before pfBlockerNG explicitly because doing it in that order makes this option get checked automatically for you, but double check just in case!
  6. Click Save at the bottom.

Step 4: Set Up DNSBL (DNS Blacklists)[edit | edit source]

  1. Navigate to Firewall > pfBlockerNG > DNSBL.
  2. Enable DNSBL: Check this box to enable DNS-based blocking.
  3. DNSBL Mode: Set to Unbound Mode to use pfSense’s DNS Resolver for DNSBL.
  4. Go down to DNSBL Configuration and make sure some random bs IP (LIKE 10.10.10.1) is in the Virtual IP Address field; this is where we direct requests for ad-ridden domain names.

Step 5: Add DNSBL Feeds & IP blocklist feeds (Lists of Ad Domains)[edit | edit source]

Let me explain how these feeds work in pfBlockerNG because the interface can be intimidating for a newbie.

The feeds tab has two main sections: IP address feeds at the top (for blocking specific IPs) and DNS feeds at the bottom (for blocking domain names like googleadservices.com).

When you’re looking at the feeds, you’ll see these checkboxes and plus signs that can be a bit confusing. Here’s what they mean:

  • If you see a checkbox on the left, that means it’s a GROUP of feeds. If you see a blue checkbox next to “PRI1” that means all the feeds under that group are already enabled.
  • Individual feeds will have their own checkboxes to show if they’re active.
  • The plus signs let you add new feeds to your configuration.

When you want to add feeds, click the plus sign to add the feed.

For IP blocklists, make sure the action is set to “Deny Both”.

For DNS blocklists, set the action to “Unbound”.

Even if you see something’s already checked, sometimes clicking “Enable All” can catch feeds that weren’t properly activated. I’ve had weird situations where I thought I added everything in a group but missed some - the interface isn’t always super clear about what’s actually enabled.

For what to block: I avoid blocking things like Tor or torrent trackers. Why would you block that? That’s like DDoSing Pornhub - they’re giving you free stuff! One of them blocks AWS, avoid that unless you want non-functional internet (sadly the world runs on AWS whether we like it or not).

It is very easy to block too much and then not be able to log into youtube, receive email, visit your bank, etc. More isn’t better here.

  1. Go to Firewall > pfBlockerNG > Feeds.
  2. Scroll to the DNSBL Feeds section.
  3. Add multiple feeds by clicking on different categories and enabling relevant lists.
  4. For each selected feed:
    • For DNS block lists, set “Action” to Unbound.
    • For IP lists, set “Action” to Deny Both.
  5. There is a blue “ENABLE ALL” button at the bottom that will often save you a lot of time.
  6. Recommended categories to add:
    • Easylist
    • Malicious
    • Phishing
    • Malware
    • Suspicious
    • Trackers
    • Spam (for email)
  7. Avoid adding feeds that might block legitimate services (e.g., AWS, public DNS servers, Tor).
  8. After selecting feeds, click Save to apply these DNSBL lists.
  9. Don’t enable/turn them on one by one. When you click on a list of feeds, note the blue “enable all” button. Don’t be like Louis of 2018 & toggle each line to “on” manually like an idiot (I actually did this :’( )

Step 6: Update and Apply Lists[edit | edit source]

  1. Navigate to Firewall > pfBlockerNG > Update.
  2. Select “Force” option.
  3. Set “Reload” option to “All.”
  4. Click “Run” to download and update all lists (both DNSBL and IP lists).

This process can take a while.

Step 7: Testing and Verifying Ad-Blocking Effectiveness[edit | edit source]

  1. Clear cache and cookies in your test browser.
  2. Revisit the ad-blocking test sites:
    1. adblock-tester.com
    2. d3ward.github.io/toolz/adblock.html (note: this project is no longer maintained and has been archived)

Expected results:

  • Ad-block tester: Improved score (e.g., 78 out of 100)
  • D3Ward Ad Block testing: Many more blocked (e.g., 119 out of 135)

Step 9: Implement AdGuard DNS[edit | edit source]

  1. Visit adguard-dns.io and go to the “Routers” section.
  2. Copy the DNS server addresses that block ads and trackers.
  3. In pfSense, go to System > General Setup.
  4. Uncheck “Allow DNS server list to be overridden by DHCP/PPP on WAN.”
  5. Remove existing DNS servers and add the AdGuard DNS servers. Use what is on AdGuard’s site: at the time of this writing, they were as follows. Only use the below servers if you see them on adguard-dns.io:
    1. Primary DNS: 94.140.14.14
    2. Secondary DNS: 94.140.15.15
  6. You checked AdGuard’s site rather than copy & paste from here, right? RIGHT?
  7. Save changes.

Step 10: Configure the DNS Resolver[edit | edit source]

  1. Go to Services > DNS Resolver.
  2. Enable DNS Resolver: make sure this is checked.
  3. Click Enable Forwarding Mode.
  4. Save and apply changes.
  5. Reload the DNS Resolver service.

Step 11: Verify adblocking from Desktop[edit | edit source]

  1. Clear DNS cache and browser data.
  2. Rerun the ad-blocking tests.
  3. Visit dnsleaktest.com and run an extended test to confirm you’re using AdGuard DNS. You should see something like the figure above. Your DNS should be DIFFERENT than it was before! If not, something went wrong.
  4. Redo your adblock test:
  5. You should see adblocking become even more better, or more betterer as AvE would say, than what you had prior to installing pfBlockerNG, depending on the feeds you’ve chosen.

Step 13: Verify adblock on mobile via VPN[edit | edit source]

To make sure ad-blocking works on mobile devices connected through VPN:

  1. Clear browser data on your phone.
  2. Disconnect from the VPN we attached to earlier.
  3. Visit the ad-blocking test sites from earlier and note the results.
  4. Go over to the OpenVPN app & connect to the VPN.

Double-check that you’re using the pfSense DNS on dnsleaktest.com & NOTHING ELSE!! You do not want your ISP’s server, or anyone else’s server, to show up. If in doubt, research the IP address & hostname of the DNS that is coming up.

  1. Compare results to those without a VPN connection.

Expected results:

  • Much more ad-blocking on mobile when connected to VPN
  • Confirmation that you’re using AdGuard DNS through the VPN

Step 14: Verify VPN allows connectivity to home network.[edit | edit source]

Try to visit your router’s IP address https://192.168.5.1/ once you have connected to the VPN – and make sure you are connected to the CELLULAR network only, not your home Wi-Fi!!

Congratulations; you’ve set up an ad-blocking system that blocks a ton of ads before your internet connection even wastes bandwidth loading them, for all devices on your network. Blocking ads in a browser using uBlock Origin is fun, but nothing compares to the feeling of blocking ads they think you can’t block. It’s beautiful. :D This means that even inside of Android apps that have ads, you can block them all—it just takes the right feed. :D

REMEMBER: THIS IS YOUR JOURNEY!!! FIND THE FEEDS THAT MAKE YOU HAPPY, YOU DO NOT HAVE TO USE THE SAME ONES THAT I DID!

Installing Ubuntu Server with RAID 1, LVM, and LUKS Encryption[edit | edit source]

Now it’s time to install the operating system on our host server. I’ll walk you through the process of installing Ubuntu Server with a nice configuration including RAID 1 for boot drive redundancy, encrypted LVM for flexibility in expanding storage if we move this setup to a larger set of drives, and LUKS encryption for security. This setup makes sure your server can boot even if one drive fails, while keeping your data secure. Even if someone breaks into your house & steals all of your stuff, all they have is encrypted crap. Unless they’re the NSA, in which case you’re screwed, but if you’re reading this guide, you’re probably not that important.

Installing Ubuntu Linux[edit | edit source]

For our server’s operating system, we’re going with Ubuntu Linux. Why Ubuntu? If you’re watching this, you’re probably more of a newbie than an expert. Ubuntu is user-friendly, has good documentation, and has a huge community ready to help. It’s widely renowned as the first “newbie friendly” GNU/Linux distribution, dating back to 2006 when it was one of the few distros that didn’t require torturing yourself with ndiswrapper to get wifi working. Robert Storey put it best:

“The huge collection of Linux/BSD systems listed on DistroWatch is a testimonial to how difficult it is to make a decision. However, after spending weeks trying to get XYZ distro to recognize your wireless card, it’s really nice to have an OS that just works.”

Imagine having a laptop as your only computer, before smartphones with tethering were widely available. You don’t have access to a wired connection. Where were you getting your drivers from? Maybe you do have access to a wireless connection, but your only CAT5 cable is 5 feet long. And your router is in an un-air-conditioned garage. In the middle of summer. So you go to your 98°F garage, sit on the floor, googling only to find a plethora of threads where elitist douchebags tell you to RTFM to get wifi to work.

And they wonder why people used closed source operating systems…

In 2005, the concept of anything in GNU/Linux “just working” was a joke. If you wanted to burn a CD you had to set up something called SCSI emulation to use the optical drive on your computer. From the ground up, GNU/Linux was fundamentally not designed for normal people. Ubuntu changed that in a radical way and continues to have a reputation for being a newbie-friendly “gateway drug” to GNU/Linux. It’s not the best and it has its flaws, but it is designed and developed with ease of use for normal people in mind. For a beginner’s guide, that matters.

Why Not Arch or Gentoo?[edit | edit source]

I use Arch Linux now; I ran SuSE from 2002-2004 and Gentoo from 2004-2015. In my 30s, I’ve come to realize that I derive sick pleasure from making my life difficult for no good reason; I wouldn’t recommend that for beginners (or anyone). With Ubuntu, you get a system that’s easy to set up and maintain without the extra hassle, designed to be as idiot-proof as possible and made for normal humans to use. If you wish to use another distro, GO FOR IT! There is NO one “right way” to do any of what I am doing here!

Installing with RAID 1: Choosing Your OS Drive[edit | edit source]

We are going to be using RAID 1. RAID 1 is a mirroring setup, where we use two drives for the operating system instead of one. This means one of the drives can completely fail and the server continues running. I would suggest that you find not one, but TWO SSDs for this purpose. We will be using MDADM for RAID. Ubuntu allows you to do this upon install without having to edit configuration files.

Why software RAID using MDADM instead of hardware RAID with a RAID controller card?[edit | edit source]

RAID controller cards are for people with datacenters that have hundreds of drives and need maximum performance/resilience for specific applications, that want the task of managing these drives separate from the software running the computer. This was also very useful back when machines were powered by Pentium 1 processors.

NOTE: Some hardware RAID controllers will give you improvements in performance, but it’s not worth the downside. There are controllers where when they fail, you have to replace it with the exact same controller for your setup to work again - aka, digiorno all over again. Using software RAID like MDADM means you can take drives out of a pentium 4 and put them into a macbook and it’ll just detect it & work.

It is 2024, and even a ten-year-old computer will do software RAID just fine with no perceivable penalty in performance.

Why not use RAID built into my motherboard?[edit | edit source]

That is called “fake RAID.” Fake RAID is cancer. It is not “hardware” RAID, it is just software RAID by another name.

When you create a RAID array using the garbage built into your motherboard, the RAID configuration is sometimes stored in a proprietary format that is only readable by that specific manufacturer’s RAID implementation. I used the word “sometimes” because it depends on your system. I have no idea what system you have. I want ALL of the people reading this to have a system that works if they transfer these drives to another system, not “some” of you. It costs you nothing to use mdadm, which offers certainty of compatibility when you transfer these drives to other hardware.

When certainty & uncertainty have the same price, all other things being equal, I’ll take the certainty!

MDADM software RAID is a standardized system that transfers across computers. I am not using hardware RAID, and I am not using whatever RAID is in the BIOS of your computer, because I have no idea what it uses or whether it is something standard or something that will be aggravating later. If you have to take these drives and put them in another computer, there will be less hassle using software RAID than hardware RAID; it’s literally plug and play (well, you may have to use a liveCD to run grub-install to register the bootloader with the new machine’s UEFI, but… the RAID part will work at least!).

Drive recommendation for OS:[edit | edit source]

We’re going to have two drives in RAID 1. You can use more if you like – RAID 1 need not be two drives! I like Micron SSDs; for NVMe devices, they have had consistently lower failure rates for me than Samsung’s budget “EVO” line. I’ve RMA’d the same 2 TB Samsung 970 EVO five times now… Five… Times. You can get two budget 4 TB SSDs for under $500 now – I recommend these.

We are going to be using these SSDs for virtual machines that perform many tasks. Here are some of the storage-intensive ones:

  • Self-hosted mail. Your inbox may be 50+ GB like mine.
  • Complete phone backup of everything – can easily eclipse 2 terabytes. Mine is 1.4.
  • FreePBX phone system – call recordings over time can go over 50 GB easily.

I suggest buying drives for your operating system disk that are fast and have enough space to store all of this. The security camera recordings, and the backup of your 40 terabytes of recipes stored as .mkv files – those we’ll put on an array of hard drives. You don’t need SSDs for that.

RAID IS NOT A BACKUP![edit | edit source]

IMPORTANT NOTICE: RAID 1 IS NOT A BACKUP!

Many people incorrectly believe that RAID 1 is a “backup.” It is not! RAID 1 sets up your machine so that the operating system is installed on TWO drives rather than one, with each drive being an exact mirror of the other. This way, if one drive fails while you’re using your server, it will still run. Think of RAID 1 like the green goo you can put in your tire to plug up a hole, or a spare wheel, allowing you to limp to a service center for repairs.

Here are a few reasons why RAID 1 is not a backup:

  1. Backups allow you to restore your system if you accidentally mess something up. RAID 1 is a perfect mirror, so it faithfully replicates everything you break, too.
  2. RAID 1 means you’re attaching two hard drives to your computer to install the operating system on instead of one. These drives are both connected to the same computer. If your computer’s power supply fails and sends incorrect voltages to the drives, both get fried.
  3. When one drive in a RAID 1 array fails, the other often fails soon after, especially if they’re the same brand and were purchased at the same time.
  4. RAID 1 works so well that you might not notice when one drive fails until the second one also fails, leaving you with no data.

NOTE: MDADM works well enough that you won’t notice when a drive fails. Later in this guide we’re going to set it up so that your machine constantly checks the array & emails you the moment there is any issue with a drive, using mdadm’s monitor command.
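For reference, mdadm’s mail alerts boil down to a one-line setting plus its monitor daemon. A minimal sketch of what we’ll configure later (the email address is a placeholder, and your distro may already run the monitor service for you):

```
# /etc/mdadm/mdadm.conf - where mdadm sends alerts when an array degrades
MAILADDR you@example.com

# The monitor that actually watches the arrays and sends the mail:
# mdadm --monitor --scan --daemonise
```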

Step-by-Step Installation Guide[edit | edit source]

What you should have

  • Two identical SSDs (e.g., Samsung 870 EVO 250GB), but bigger will be better here since we’ll be using this to backup everything on your phone + many other things.
  • A USB drive to put the Ubuntu installation image on
  • An old computer to use as a server (even a 10-year-old desktop or laptop can work)

1. Prepare the Installation Disk[edit | edit source]

Warning: This process will erase everything on the USB drive.

  1. Insert a USB flash drive (at least 4GB in size) into your computer.
  2. Go to ubuntu.com and download the LTS (Long Term Support) version of Ubuntu Server.
  3. Use one of the following methods to write the Ubuntu image to the USB drive:

Windows:

  1. Download and install Rufus.
  2. Open Rufus and select your USB drive.
  3. Click the “SELECT” button and choose the Ubuntu Server .iso file you downloaded.
  4. Click “Start” and let Rufus create the bootable USB.

GNU/Linux or macOS:

  1. Open the terminal and type the following command:

    sudo fdisk -l
  2. Make note of drives in the system.

  3. Plug in the flash drive.

  4. Open the terminal and type the following command again:

    sudo fdisk -l
  5. Make note of the drive that was not present before.

  6. Double-check size/brand/model to make sure this new device is the device you plugged in.

  7. Run the following, replacing /dev/sdX with your drive and ubuntu-server.iso with the filename of your image file. Make sure /path/to/ is the directory your image file is actually in.

    sudo dd if=/path/to/ubuntu-server.iso of=/dev/sdX bs=4M status=progress

Your bootable USB drive with Ubuntu Server Linux is now ready for use!
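The “note the drives before and after” steps above are just a set difference; here it is spelled out (device names are made up for illustration):

```python
# Device names from the first `sudo fdisk -l`, before plugging in the stick
before = {"/dev/sda", "/dev/sdb"}
# Device names from the second `sudo fdisk -l`, after plugging it in
after = {"/dev/sda", "/dev/sdb", "/dev/sdc"}

# The drive that appeared in between is your USB flash drive
new_drives = after - before
print(new_drives)  # → {'/dev/sdc'}
```

Double-check the size/brand before pointing dd at it; writing to the wrong device destroys that device’s contents.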

2. Boot from the USB Drive[edit | edit source]

  1. Insert the USB drive into your server.
  2. Power on the server and enter the boot menu (usually by pressing F12 or another function key).
  3. Select the UEFI option for your USB drive.

3. Begin the Ubuntu Server Installation[edit | edit source]

  1. Choose “Try or Install Ubuntu Server” from the boot menu.
  2. Select your language and keyboard layout.
  3. Choose “Install Ubuntu Server” (not the minimized version).
  4. Select “Search for third-party drivers” for better hardware support. Don’t check this box if you want to live by Richard Stallman’s ethics. Check this box if you want to reduce the chances of random things in your computer not working. I check the box. I’m going to hell, I know….

4. Configure Network[edit | edit source]

4.1 Why a Static IP?[edit | edit source]

We are going to set up a server that we are going to consistently access. This means we always want it to be at the same place.

Imagine trying to deliver mail to someone who lives on 20 Main Street today, and 90 Chandler Avenue tomorrow. Imagine trying to frequent a restaurant whose address changes every week. It would be annoying, inconvenient, and perhaps downright impossible.

We want our server to always be at the same address. The “D” in “DHCP” means “dynamic” – as in, changing. We don’t want that. We want a “static” IP, meaning it does NOT change.

When setting up your server, we need to give it a static IP so we always know where to find it and it never changes. How do we know what IP to give it? Go to pfSense’s DHCP server configuration page (Services > DHCP Server). The “subnet range” tells you the list of available IPs. Keep in mind that you cannot use the IP address of your pfSense router here.

  • Router Gateway: My router’s IP is 192.168.5.1. This is the gateway address.
  • Address Pool Range: My address pool range is from .15 to .245, leaving .246 to .254 and .2 to .14 available. This setup provides a buffer of IPs for servers and other devices.

Why the Buffer? I don’t want any conflicts where someone plugs in their computer while mine is rebooting and steals my IP. We will be setting up STATIC MAPPINGS so that nobody else can grab the IP address of my server – the IP we choose for our server will be reserved for our server’s specific network interface card, and not some hated brother-in-law who thinks he’ll play games when your spouse has him over. However, this is still good practice.

4.2 Choosing a Static IP[edit | edit source]

For my servers, I pick an IP between 192.168.5.2 and 192.168.5.14. This ensures no one else can sneakily take my server’s IP while it’s rebooting.

  1. In your pfSense router, go to Services > DHCP Server.
  2. Understand your subnet. For example, 192.168.5.0/24 covers IPs from 192.168.5.1 to 192.168.5.254
  3. Your router’s IP is typically 192.168.5.1. We can’t use that. Since we made the DHCP address pool range 192.168.5.15 to 192.168.5.245, this means that we have 192.168.5.2 through 192.168.5.14 free – no computer connecting with DHCP (which is the default for 99.9999% of all network devices in your home) will be using these, so they’re free for the taking.
  4. Choose the network interface that’s connected (usually the one that has already received an IP via DHCP).
  5. Change the configuration from DHCP to Manual:
  • IP Address: Choose an address outside your DHCP pool (e.g., 192.168.5.2)
  • Subnet: Usually 255.255.255.0 (or /24 in CIDR notation)
  • Gateway: Your router’s IP (e.g., 192.168.5.1)
  • Name servers: Use your router’s IP as the DNS server


Please note: if you skip step 4 by choosing Continue without network, you will not be able to set up your internet connection later.
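If you ever need to change these settings after installation, Ubuntu Server stores them in a netplan file. A sketch of what the installer’s choices end up looking like – the file name and interface name vary per system; 00-installer-config.yaml and enp1s0 here are examples, not guarantees:

```yaml
# /etc/netplan/00-installer-config.yaml (example name; check /etc/netplan/)
network:
  version: 2
  ethernets:
    enp1s0:                      # your interface name will differ; `ip link` lists them
      dhcp4: false
      addresses: [192.168.5.2/24]
      routes:
        - to: default
          via: 192.168.5.1       # the pfSense router (gateway)
      nameservers:
        addresses: [192.168.5.1] # use the router as the DNS server
```

After editing, sudo netplan apply activates the changes.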

5. Prepare the Drives[edit | edit source]

5.1 Format the drives[edit | edit source]

  1. In the installer, locate your two SSDs (ignore the USB installer drive).
  2. For each SSD:
    • Select the drive and choose “Reformat”.
    • Select “Use as boot device” – this will create an EFI partition on each.

5.2 Configure EFI Partitions[edit | edit source]

For each SSD:

  • Locate the automatically created EFI partition (usually 1GB).
  • Edit the size to 512M.
  • Make sure it’s set to mount at /boot/efi.

5.3 Create Boot Partitions for RAID[edit | edit source]

  1. On each SSD:
    • Create a new 1GB partition.
    • DO NOT FORMAT IT. CHOOSE "Leave unformatted".
    • DO NOT CHOOSE A MOUNT POINT. This is important for setting up RAID 1 later.
  2. Set Up RAID 1 for /boot
  1. Select “Create software RAID (md)”.
  2. Choose both 1GB partitions you just created (one from each SSD).
  3. Set RAID Level to “RAID 1 (mirrored)”.
  4. Name it “bootraid” or something meaningful to you.
  5. Select “Create”, hit enter.

5.4 Create Root Partitions for RAID[edit | edit source]

  1. On each SSD:
    • Create a partition using all remaining space. Don’t fill in the “size” text box; it will automatically use the rest of the space on the drive.
    • DO NOT FORMAT IT. CHOOSE "Leave unformatted".
    • DO NOT CHOOSE A MOUNT POINT. This is important for setting up RAID 1 later.

5.5 Set Up RAID 1 for Root[edit | edit source]

  1. Select “Create software RAID (md)” again.
  2. Choose both large partitions you just created.
  3. Make sure RAID Level is set to “RAID 1 (mirrored)”.
  4. Name it “osdriveraid” or something meaningful to you.
  5. Go to “Create” & hit enter.

5.6 Configure the /boot Partition[edit | edit source]

  1. Select the “bootraid” you created.
  2. Format it as ext4.
  3. Set mount point to /boot.

5.7 Set Up LVM on Root RAID[edit | edit source]

  1. Select the “osdriveraid” you created.
  2. Choose “Create volume group”.
  3. Name it “ubuntuvolumegroup” or something meaningful to you.
  4. When selecting the device for the LVM, you’ll encounter a bug in the installer: it will show multiple devices without clear identifiers. This is a known issue that persists in the non-beta release of a stable, mission-critical server operating system. Welcome to the world of open source software; this is part of the fun of using open source! Remember: it wouldn’t be open source if it worked!
  5. To select the correct device:
    • Look for the option that’s around the size of your install (e.g., 231 GB for 250 GB SSDs).
    • Choose the largest option, which should correspond to your RAID 1 array for the root partition.
    • Ignore the smaller sizes, as they likely represent other partitions or devices.
    • Pray.
  6. After selecting the correct device, proceed with creating the volume group.

5.8 Create Encrypted Volume[edit | edit source]

  1. With the LVM volume group selected, choose “Create encrypted volume”.
  2. Set a strong password. Consider using a password manager.
  3. It’s recommended not to create a recovery key, as this could be a potential security risk.
  4. If you create a recovery key anyway, realize that it can be used to decrypt your volume. Don’t do this unless you have a place to hide it that not even your cat can get to!

5.9. Create Logical Volume for Root[edit | edit source]

  1. Select the encrypted volume you just created.
  2. Choose “Create logical volume”.
  3. Name it “ubunturootvolume” or something meaningful to you.
  4. Use the maximum available size.
  5. Format it as ext4.
  6. Set the mount point to / (root).

5.10 Review and Confirm[edit | edit source]

  1. Double-check your configuration. For two 250 GB SSDs, it should look like this:
    • Root (/): ~231GB on encrypted LVM which is on RAID 1
    • /boot: ~1GB on RAID 1
    • /boot/efi: 512MB on each SSD
  2. If everything looks correct, click “Done”.

5.11 Complete the Installation[edit | edit source]

  1. Carefully review the summary one last time. Remember we are erasing everything on these drives, to a point where even Rossmann Repair can’t recover it. If you create an encrypted volume, write over it, and then want the data back… good luck with that one.
  2. If you’re sure you want to proceed, click “Continue”.
  3. Follow the remaining Ubuntu Server installation prompts.
  4. Set up your username.
  5. Install OpenSSH server.

NOTE: Installing OpenSSH allows you to remotely access your machine to install things, use it, mess with it, etc., rather than sit in front of your server in your unairconditioned garage when it’s 117°F outside. When you see me on video installing things via terminal, I am almost never in front of the actual machine (or VM) I am using; I am remoting in using SSH.

NOTE: Do not install Docker via Snap in the next menu when it asks you to. We will install Docker later, and it won’t be the miserable Snap version of Docker. If you install Docker using Snap accidentally, this is understandable. If you install Docker via Snap by CHOICE, you’ll be in hell, & you’ll have earned it.

5.12 Reboot & log in[edit | edit source]

  1. Click reboot now at the end.
  2. Once it is done shutting down Ubuntu Linux, unplug the installation USB.
  3. When it boots up, it will ask for the encryption password to unlock the root partition, type this in.

5.13 Set Up Static IP Mapping in pfSense (Post-Installation)[edit | edit source]

  1. Log into your pfSense router.
  2. Go to Diagnostics > ARP Table.
  3. Find the MAC address associated with your server’s IP (e.g., 192.168.5.2). Mine was e0:d5:5e:a8:7f:b5.
  4. Go to Services > DHCP Server.
  5. Scroll to the bottom and click “Add static mapping”.
  6. Enter the MAC address and IP address of your server.

  7. Give it a descriptive name (e.g., “Happy cloud server static IP”).
  8. Save and apply changes.

Figure 17: This is what my setup looks like when I’m done configuring my partition structure. Yours should resemble mine. Ubuntu makes it as difficult as possible to use encrypted LVM with RAID 1 on boot devices, but we can beat their interface with some good ol’ ingenuity.

Identifying Devices on Your Network[edit | edit source]

Let’s take a quick break to discuss the importance of static mappings, hostnames, and the DNS resolver.

What you type into the hostname field when setting the DHCP static mapping in DHCP server settings is what you can use to connect to the device instead of the IP address. For instance, if you set the hostname to happycloud, instead of having to type 192.168.5.2 to connect to this device, you can type happycloud.home.arpa.

By default, on pfSense installations, the default domain is home.arpa. When you combine the hostname of happycloud with the domain of home.arpa, you get happycloud.home.arpa.

This is more convenient for connecting to devices because it is easier to remember happycloud than it is to remember 192.168.5.2 for sane people, who reserve their brains for useful data rather than useless macbook trivia. Further, similar to dynamic DNS, if you change the IP address of this server later, all of your services & bookmarks that point to this server do not have to be changed!
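The name you end up typing is just the hostname and the domain glued together:

```shell
# hostname (from the static mapping) + domain (from pfSense's General Setup)
host=happycloud
domain=home.arpa
fqdn="$host.$domain"
echo "$fqdn"    # happycloud.home.arpa
# On a client machine you could then verify that pfSense resolves it:
#   getent hosts happycloud.home.arpa   # should print 192.168.5.2
```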

You can name your servers however you want! You can choose IP addresses for your servers however you want! I will be using the same IP addresses & hostnames/domains throughout this guide so it is easy to follow, but you don’t HAVE to follow mine!

Why ISC DHCP Matters in pfSense (and How to Set It Up)[edit | edit source]

The world wants you to switch to Kea DHCP, but there’s a very good reason we’re using ISC instead. It does something important that the new DHCP server doesn’t. Let’s get into it.

Why ISC DHCP Is Actually Useful[edit | edit source]

  1. Hostname Resolution: Use hostnames instead of memorizing IP addresses.
  2. Works with DNS Resolver: Registers DHCP stuff automatically! You know, like it should.
  3. Simplifies Things: Makes managing your network a lot easier.

Setting Up ISC DHCP in pfSense[edit | edit source]

1. Make Sure You’re Using ISC DHCP[edit | edit source]

  1. Log into pfSense.
  2. Go to System > Advanced > Networking.
  3. In DHCP Server, select ISC DHCP.
  4. If it complains about deprecation, just ignore it. Click the checkbox to ignore the annoying warning.
  5. Hit “Save”.

2. Configure DNS Resolver[edit | edit source]

  1. Go to Services > DNS Resolver.
  2. Check these boxes:
    • ☒ “Register DHCP leases in the DNS Resolver”
    • ☒ “Register DHCP static mappings in the DNS Resolver”
    • ☒ “Register connected OpenVPN clients in the DNS Resolver”
  3. Save and apply changes.

3. Set Your Domain[edit | edit source]

  1. Navigate to System > General Setup.
  2. Set your “Domain” (like “home.arpa” or “local”).
  3. Save if you made changes.

This setup lets you use hostnames for all your devices, static IPs, and even VPN clients. It’s simple, it works, and it’ll save you a headache later. Sometimes the old way just works.

Static mappings make sure that this IP address of 192.168.5.2 is reserved for this computer, so that no other device can take it (unless they are spoofing MAC addresses, but if someone inside your house is doing that just to mess with you, you have bigger problems that likely end in them getting punched in the mouth).

NOTE: Static IP mappings aren’t a big deal when you have a few phones & game consoles attached to your network at home. If you are running a server, you are running something where clients (aka other phones/computers) are going to want to consistently know where to access it.

Think of your server like your favorite store. When you visit, do you want to have to look through maps to figure out what address they changed to that day? You could… but… wouldn’t it be better if they were at the same place each time you needed to go?

You’ve now set up an Ubuntu Server with a redundant, encrypted secure storage configuration. This setup gives you:

  • Boot drive redundancy with RAID 1
  • Flexibility for future storage management with LVM – we can resize this later if we want to get a bigger drive setup.
  • Enhanced security with full-disk encryption (except /boot and /boot/efi).
  • There are the “uhm akshually” people who will say that if you don’t encrypt boot there’s no point in all of this… just shut up.

Why I Used Virtual Machines Instead of Docker for Some Parts of My System[edit | edit source]

FEEL FREE TO SKIP THIS SECTION & SCROLL DOWN TO “Understanding the basics of Docker” section[edit | edit source]

This is another section that is completely unnecessary to read if you simply want to get to a working system. Feel free to fast-forward to the “Understanding the basics of Docker” section. This is here to provide insight into why I structured the guide the way I did.

Docker is a great way to manage programs in lightweight, isolated environments. It changed how sysadmins deploy and maintain their systems. Virtual machines are going out of fashion for many sysadmins; but in this guide, you’ll notice that I’ve grouped certain services into virtual machines instead of using Docker for everything. Let me explain why.

1. Building My System Piece by Piece[edit | edit source]

Back when I started getting into self managing my own servers 15 years ago, my setup wasn’t built all at once. It was cobbled together using the hardware I had lying around; old laptops, physical servers & spare drives. As these machines aged, broke, or were bashed in with a titanium nightstick when they frustrated me, I started turning their hard drives into virtual machines. This was as simple as using ddrescue to create an image of the working hard drive, then using Virtual Machine Manager to run that image.

  • Why not Docker? By the time Docker even came out (around 2013), I already had 3 virtual machines that were created from disk images of machines that had been running in my closet or my store. Rebuilding everything from scratch with Docker once it came out wasn’t an efficient use of time while running a business & wasting most of my spare time fighting my state’s incompetent government.

2. Time-Efficient Migration from Physical to Virtual[edit | edit source]

Taking a physical server and turning it into a virtual machine takes no effort.

  1. Pull drive out of physical server.
  2. Run ddrescue -f -d -r 3 /dev/sdb phonesystem.iso
  3. Open virtual machine manager
  4. Enter a few commands in terminal to create a bridge network interface so the virtual machines work; once done, I never have to do this again for any other virtual machine.
  5. Import the phonesystem.iso file as a virtual machine.
  6. Mess with BIOS/UEFI settings if necessary in virtual machine manager to get it to work.
  7. Assign the virtual machine the amount of CPU cores/RAM I think it should have based on what it is doing.
  8. Run it.
  9. Be happy.

It only takes a few seconds to type the commands & click the icons necessary in virtual machine manager. Compare that to re-architecting the entire system for Docker: it would have taken way more effort, downtime, etc. for an improvement in performance I will never notice as a person who has a few users for my server.

Some of these servers were running years before Docker was a thing, & fixing what wasn’t broken made no sense. Virtual machines offered a way to keep my systems running as they were once the hardware died & have them set to back up with no extra work on my part.

Over the years, I changed to using programs that were distributed exclusively via Docker. I went from a normal Nextcloud deployment where I manually installed everything from scratch (including dependencies) to Immich for images, which was all Docker. I went from self-managed email where the individual components (postfix, dovecot, mysql, rspamd, etc.) were all installed from scratch to mailcow. Along the way, I just installed the dockerized version of the program on the virtual machine that was assigned to that “group” of services.

3. Certain Programs Aren’t Built for Docker[edit | edit source]

Not every program is Docker-friendly. For example:

  • FreePBX (or PBXinaflash): You could theoretically create a custom Docker setup for these, but it would involve a ridiculous amount of work with little to no benefit for someone with my number of users (1, me, or 1 or 2 other people sometimes). For my use case (& most home users), the performance penalty of using FreePBX in a virtual machine instead of dockerized is as noticeable as the difference between compiling all of Gentoo Linux from a stage 1 tarball with emerge vs. installing precompiled programs with apt.
  • Home Assistant: The developers themselves recommend using their pre-built VM image (HaOS) instead of Docker. If the devs think it’s better, I’m not going to argue. I can barely write a zfs health monitoring email script. Who am I to argue with the developers of the best home automation software on earth on the best way to run their program?

Even if I wanted to use Docker for everything, I’d still have at least two VMs running. Adding another one or two doesn’t bother me.

4. Idiotproof backups - the most important one[edit | edit source]

This is a beginner’s guide. Backing up Docker volumes, containers, networks, images, and configs is 100% doable. But let’s be honest, it requires some degree of competence. Backing up a virtual machine, on the other hand, is completely idiot-proof.

  • A virtual machine backup is just a single .qcow2 disk image and a single .xml configuration file. Drag and drop those two files to another system, import them into Virtual Machine Manager, and the virtual machine runs.
  • There’s no need to rebuild Docker containers, recreate volumes, or edit docker-compose.yml files. It’s so simple that someone with absolutely no technical expertise could do it in one click.
  • In fact, it is so easy that even I can do it. If I can do it, it is truly idiotproof.
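To make the “two files” point concrete, here’s a toy sketch using stand-in files in a temp directory. On a real host the disk image lives somewhere like /var/lib/libvirt/images/, and you’d export the config with virsh dumpxml happycloud > happycloud.xml (“happycloud” being this guide’s example VM name):

```shell
# Simulate the two files that make up a VM backup, then "back them up" with cp.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/happycloud.qcow2" "$src/happycloud.xml"  # stand-ins for disk image & config
cp "$src/happycloud.qcow2" "$src/happycloud.xml" "$dst/"
ls "$dst"   # lists happycloud.qcow2 and happycloud.xml -- that's the whole backup
```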

The backup script I provide is one I use myself. Once a week, it backs up all of my virtual machines as well as their configuration files to a ZFS pool which will continue running even if several hard drives fail. If I screw something up, it is two terminal commands or a few clicks in the GUI & I’m back up & running as if nothing stupid ever happened.

For beginners, when it comes to backups, simplicity is priceless.

Added complexity means you are less likely to use your backup system & less likely to understand how restoring from a backup works.

Why this guide uses virtual machines[edit | edit source]

In this guide, I’ve grouped services into virtual machines because it mirrors how I built my system over the past 15 years. This approach makes it easier for total beginners to back up and restore their setups without worrying about the complexities of Docker. Here’s how I’ve organized the VMs:

  1. Android Services: Alternatives to Google Drive, Google Photos, and Google Docs.
  2. Identity and Communication Services: Alternatives to Gmail, Google Calendar, Google Contacts, and Google Chrome’s password manager.
  3. Phone System: FreePBX for managing calls.
  4. Home Automation: Home Assistant for smart home management.

You do not have to do anything this way if you don’t want to.[edit | edit source]

You’re welcome to adapt this setup, or not. If you prefer Docker, you can combine many of these services into one host system. However, I still recommend using virtual machines for the following:

  • FreePBX: The extra effort required to make this work in Docker isn’t worth it.
  • Home Assistant: The HaOS image is the easiest and most reliable way to run Home Assistant, as per the developers’ recommendations.

This guide wasn’t about doing everything new - it was about all of you asking how I had set up my system. Since my system works for everything under the sun & has continued to for longer than I’ve been allowed to buy alcohol, I keep it going.

Understanding the basics of Docker[edit | edit source]

FEEL FREE TO SKIP THIS SECTION & SCROLL DOWN TO “Configuring Our Server’s Networking for Virtual Machines” section[edit | edit source]

You do not need to read this section to install the software in this guide. You can simply copy & paste the commands as I provide them, or follow the documentation from the program’s developers. This section is not required reading, but rather here to help you understand the how and the why behind the installation methods for the programs we’re installing so you learn as you go – if you’re interested. If not, skip ahead to “Configuring Our Server’s Networking for Virtual Machines”.

We are going to use docker to install a program called mailcow. Before getting started installing mailcow, I want to go over what docker is & how it works.

You do not need to be a genius Linux sysadmin who builds their own Docker containers & setups from scratch to use it, but you should have some clue what it is & what happens when you type docker compose up to run something! Docker massively changed how sysadmins run & deploy software. It’s the engine behind many modern self-hosted solutions like Mailcow, Immich, Bitwarden, Frigate, & OnlyOffice. It gets rid of one of the largest pain points of releasing (or using, or installing) software on Linux: dependencies. Before getting into what Docker is, let’s go over dependency hell.

What Are Dependencies and Why Do They Cause Problems?[edit | edit source]

Understanding Dependencies[edit | edit source]

A dependency is a piece of software (a library or framework) that has to be installed for the program you are installing to work. Let’s say you’re installing a web application written in PHP; it might need a specific PHP module or a specific version of PHP.

  • If you don’t have that version of PHP installed, the application won’t work.
  • If you don’t have PHP installed, the application won’t work.
  • If you want to use an application that requires a different version of PHP on the same machine….

and so on & so forth.

The Dependency Hell of the 1990s[edit | edit source]

Before modern package managers like apt (used by Debian and, 6+ years later, Ubuntu) or emerge (Gentoo), installing software on GNU/Linux would require manually finding & installing specific dependencies. Here’s what this hell was like:

  1. You downloaded a .tar.gz file that was the source code of the program you wanted to install, called rabbitholetohell.
  2. You ran ./configure & it told you you’re missing libshit.
  3. You found libshit, downloaded it, and discovered it required libpiss.
  4. You found libpiss but learned that libpiss needed version 1.2 of libpuke and your computer had version 1.3 of libpuke installed.
  5. Downgrading from version 1.3 of libpuke to version 1.2 of libpuke breaks your entire system.
  6. User throws keyboard at wall & switches back to windows and says forget GNU/Linux for life.
  7. If the user is a sysadmin, they curse and figure out how to make it work because this is their job, wasting tons of time.

This was called dependency hell, where each dependency needed more dependencies. It’s what Eli the Computer Guy would correctly call the rabbit hole to hell.

Tools like apt came along in the late 90s. Instead of dependency hell, you typed apt install rabbitholetohell -y & it just installed rabbitholetohell. It installed all the dependencies, & their dependencies, and it installed the right ones. It was beautiful…

Yet, even with tools like apt to make installs simpler, problems came up if multiple applications needed different versions of the same dependency. For example:

  • PHP Example: Suppose you wanted to run two applications:
    • App 1 requires PHP 7.4.
    • App 2 requires PHP 8.1.
    • Your system can only have one version of PHP installed at a time, and switching between versions was a rabbit hole to hell.

Why This Is a Nightmare for Software Maintenance[edit | edit source]

Dependencies can become a serious problem over time:

  1. Conflicting Requirements: If program A needs libshit version 1.2 & program B needs libshit version 2.0, your system can break when one application upgrades.
  2. Complex Upgrades: Updating dependencies for one application can & will cause another application to stop working. This is called dependency breakage, and it is another common cause of chasing rabbits all the way to hell.
  3. System Decay: Over time, manually managing dependencies can lead to a bloated, unstable system full of broken packages, outdated libraries, & leftover files.
  4. Version pinning misery: apt lets you install specific versions of packages, but managing version conflicts becomes time-wasting, dangerous, & difficult when dependencies span dozens of packages with intricate relationships. As a newbie, you are likely going to break your system. Experienced sysadmins? They still broke their systems….

How docker solves this mess[edit | edit source]

Docker containers solve these problems by isolating dependencies for each application. Here’s how it works:

  1. Per-Application Environments: Each Docker container includes everything an application needs to run: the application code, the runtime, & all dependencies. These are packaged together in the Docker image.
    • Example: If one application needs PHP 7.4 and another needs PHP 8.1, you can run both simultaneously in separate containers without conflict, on the same computer.
    • I am not talking about on separate virtual machines. I mean on the SAME HOST OPERATING SYSTEM. Two versions of PHP; or ten if you wanted. And no issues. No conflicts. No rabbit, & no hell :)
  2. Immutable (unchangeable): Docker images are immutable snapshots. Once built, the dependencies in an image don’t change, so the application runs consistently every time. It’s not like an operating system update where package A may not be updated but package B is, and package A depends on a specific version of package B so everything breaks.
  3. No System-Wide Conflicts: Docker containers don't mess with each other on the host system. The PHP version inside the container for nextcloud doesn’t affect the PHP version on the host, or in the container for magento.
  4. Simple Upgrades: If you need to update an application you just type docker compose pull when it’s not running & it just updates… seamlessly. If it fails or the dev messed something up, you can go back to a previously installed image without messing up other applications.
  5. Portable: Docker makes sure that the program & its dependencies work the same way on ANY system; whether it’s your personal server, a cloud provider, or your friend’s gaming PC.
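As an illustration of point 1, here’s a hypothetical docker-compose.yml (not from any real project; php:7.4-apache and php:8.1-apache are real tags of the official PHP image on Docker Hub) that runs two PHP versions on one host:

```yaml
# Two containers, two PHP versions, one host operating system - no conflict.
services:
  legacy-app:
    image: php:7.4-apache
    ports:
      - "8074:80"   # host port 8074 -> container port 80
  modern-app:
    image: php:8.1-apache
    ports:
      - "8081:80"
```

Start both with docker compose up -d; each container brings its own PHP along with it.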

Why docker has exploded in popularity for small open source projects[edit | edit source]

Developers get fewer complaints from users[edit | edit source]

The biggest complaint by far that many open source projects get is “I tried to install abc program & got xyz error.” It is the bane of open source software developers’ existence, until they stop caring about their users entirely. This is often the only way to stay sane in a world where “users” (NOT “customers”), who pay the developers $0, expect unlimited tech support & handholding as well as a one year lesson in GNU/Linux systems administration so they can install a photo gallery.

This sucks.

With Docker, for a developer to hand off a program running on their server to your server, the dev only has to provide you the following:

  1. Docker image of your application
  2. The associated Docker Compose docker-compose.yml file
  3. Instructions or files to set up storage & networking.
  4. If you want to carry over the files the service was saving that are unique to you, the Docker volume.
  5. Tell you to edit xyz content in a docker-compose.yml file so the software is set to your specific need.
  6. Tell you to type docker compose pull & docker compose up -d
  7. Never hear complaints from you again.

The Docker image contains the program & its environment, which makes sure the software runs the same on their server as it does on yours.

AKA, the developers can provide me a COMPLETELY IDIOTPROOF copy of their software that is so easy to install even I can’t screw it up. Once they get it to install on THEIR system - they know it’ll install on mine.

The docker-compose.yml file explains to docker & your computer how to “deploy” the program & has details about Docker networks (e.g., how the containers communicate) & Docker volumes (for storing data that persists outside the container).

Docker makes what used to be miserable very easy[edit | edit source]

  • You can run Mailcow (which uses PHP 7.4 for its web interface) alongside OnlyOffice (which needs PHP 8.1) on the same server without issues.
  • When setting up something like Immich, you don’t need to worry about Node.js versions conflicting with other apps. The devs use Docker to bundle the correct version for you. You don’t have to RTFM to figure out the right version of libshit to install anymore - the developer does that once, and then it’s set for all of their users.
  • If a new version of Bitwarden requires updated dependencies, you update the Docker container, leaving the rest of your system untouched.

Docker turns what used to be a nightmare into a manageable, predictable process that isn’t miserable.

1. How Docker Works[edit | edit source]

Docker simplifies running software by packaging everything the software needs into one neat bundle. It does this using containers, which are lightweight, standalone environments that share the host system’s resources but remain isolated.

This is like a virtual machine, but without the baggage of needing its own operating system. Docker containers run on a shared kernel, making them much faster and lighter. If you ever enter a docker container, you will notice that almost no programs or commands are available besides the bare minimum necessary to do the job. See below:

louis@ultimatebauer:~$ docker exec -it frigate bash
root@174eb3845d50:/opt/frigate# nano file.log
bash: nano: command not found
root@174eb3845d50:/opt/frigate# vi file.log
bash: vi: command not found
root@174eb3845d50:/opt/frigate# vim file.log
bash: vim: command not found
root@174eb3845d50:/opt/frigate# emacs file.log
bash: emacs: command not found
root@174eb3845d50:/opt/frigate# ip addr show
bash: ip: command not found
root@174eb3845d50:/opt/frigate#  you really don't have shit in here besides exactly what you need to run the application, do you? run nano you prick!

root@174eb3845d50:/opt/frigate I’m afraid I can’t do that, dave

2. What Are Docker Images?[edit | edit source]

A Docker image is a blueprint for how to set up the program. It has the instructions, files, & dependencies necessary to create a working environment for a piece of software. Think of it like a frozen dinner, if frozen dinners weren’t poisonous to your health. Everything you need is pre-packaged, & all you have to do is microwave it (or, in this case, “run” the image; please don’t try to microwave a GNU/Linux computer, as tempting as it might be when it doesn’t work) to get the app running.

  • Example: A Nextcloud Docker image includes the Nextcloud app, its web server, and everything else it needs to limp. I won’t use the word “run” to describe nextcloud…

3. What Are Docker Containers?[edit | edit source]

A Docker container is a running instance of a Docker image. Using the frozen dinner analogy, if the image is a boxed meal in a freezer, a container is a meal served hot & ready to eat. You can run many containers from the same image just like you could cook multiple identical dinners from the same recipe.

For instance, mailcow is not a mail “program” so much as it is an amalgamation of a bunch of programs necessary to run a mailserver.

On my mailserver, you can see a list of all the different containers that run for mailcow when I run docker ps -a

Example: mailcow container guide[edit | edit source]

Mail processing[edit | edit source]

  • postfix: The program that sends emails
  • dovecot: The program that receives emails & stores them & categorizes them by user, inbox, email address, folder, etc.
  • rspamd: anti-spam controls
  • clamd: scans attachments for viruses

Web & Interface[edit | edit source]

  • sogo: webmail dashboard for checking email/calendar/contacts in browser
  • phpfpm: for web interface

security & monitoring[edit | edit source]

  • watchdog: The health monitor
  • acme: Handles SSL certificates
  • netfilter: Blocks bad actors
  • unbound: mailcow’s internal DNS resolver, used for lookups such as spam blacklist & DNSSEC checks

Helper Services[edit | edit source]

  • solr: Makes searching through your email faster
  • olefy: scans Office document attachments for malicious macros (used by rspamd)
  • dockerapi: internal API that lets mailcow’s web interface manage the other containers

Think of Docker containers like having separate tiny computers inside your main computer that are barebones and only include the minimum necessary for each function to work. They each work independently of each other to minimize the likelihood of something screwing up, while also allowing you the ability to experiment without destroying your entire system.

Containers are not persistent. This means what happens in the container stays in the container only as long as that container exists. Once you recreate the container (which happens, for example, whenever you update it), any changes to files you have made inside it are GONE. PERSISTENT storage occurs in docker volumes.

Each container has its own:

  • Space to run programs
  • Network connection
  • File storage
  • Settings
  • Installed programs

Unlike full virtual machines (which are like having complete separate computers), containers share the main operating system’s foundation (the host’s operating system kernel), making them much lighter and faster to start up.

For example, in mailcow:

  • The postfix container only knows about sending/receiving mail
  • The rspamd container is only for filtering junk
  • The clamd container is only there to scan for viruses

They can’t interfere with each other, but they can communicate through specific “doorways” (network ports) when needed. If something goes wrong with one container, it doesn’t affect the others - just like one apartment’s plumbing problem doesn’t affect the other apartments (hopefully).

If you need to upgrade or fix something, you can work on one container without messing with everything else.

louis@mailserver:~$ docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS                  PORTS                                                                                                                                                                                                                               NAMES
aca88eab00b0   mailcow/watchdog:2.05    "/watchdog.sh"           11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-watchdog-mailcow-1
012debb1f557   mailcow/acme:1.90        "/sbin/tini -g -- /s…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-acme-mailcow-1
d33aa2bb976b   nginx:mainline-alpine    "/docker-entrypoint.…"   11 days ago   Up 24 hours             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp                                                                                                                                                            mailcowdockerized-nginx-mailcow-1
7bc85825c0b1   mailcow/rspamd:1.98      "/docker-entrypoint.…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-rspamd-mailcow-1
958d3ba45877   mcuadros/ofelia:latest   "/usr/bin/ofelia dae…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-ofelia-mailcow-1
a99f82d2b36a   mailcow/phpfpm:1.91.1    "/docker-entrypoint.…"   11 days ago   Up 24 hours             9000/tcp                                                                                                                                                                                                                            mailcowdockerized-php-fpm-mailcow-1
b8c6df6a7303   mailcow/dovecot:2.2      "/docker-entrypoint.…"   11 days ago   Up 24 hours             0.0.0.0:110->110/tcp, :::110->110/tcp, 0.0.0.0:143->143/tcp, :::143->143/tcp, 0.0.0.0:993->993/tcp, :::993->993/tcp, 0.0.0.0:995->995/tcp, :::995->995/tcp, 0.0.0.0:4190->4190/tcp, :::4190->4190/tcp, 127.0.0.1:19991->12345/tcp   mailcowdockerized-dovecot-mailcow-1
e3b09c799a7c   mailcow/postfix:1.77     "/docker-entrypoint.…"   11 days ago   Up 24 hours             0.0.0.0:25->25/tcp, :::25->25/tcp, 0.0.0.0:465->465/tcp, :::465->465/tcp, 0.0.0.0:587->587/tcp, :::587->587/tcp, 588/tcp                                                                                                            mailcowdockerized-postfix-mailcow-1
faece81357e3   mailcow/solr:1.8.3       "docker-entrypoint.s…"   11 days ago   Up 24 hours             127.0.0.1:18983->8983/tcp                                                                                                                                                                                                           mailcowdockerized-solr-mailcow-1
76c9f63fa50d   mariadb:10.5             "docker-entrypoint.s…"   11 days ago   Up 24 hours             127.0.0.1:13306->3306/tcp                                                                                                                                                                                                           mailcowdockerized-mysql-mailcow-1
930a7e0acff6   redis:7-alpine           "docker-entrypoint.s…"   11 days ago   Up 24 hours             127.0.0.1:7654->6379/tcp                                                                                                                                                                                                            mailcowdockerized-redis-mailcow-1
8bbcbe5ebefb   mailcow/clamd:1.66       "/sbin/tini -g -- /c…"   11 days ago   Up 24 hours (healthy)                                                                                                                                                                                                                                       mailcowdockerized-clamd-mailcow-1
9070a5ba3fb0   mailcow/olefy:1.13       "python3 -u /app/ole…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-olefy-mailcow-1
893f2ff1f952   mailcow/dockerapi:2.09   "/bin/sh /app/docker…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-dockerapi-mailcow-1
6781988f3409   mailcow/sogo:1.127.1     "/docker-entrypoint.…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-sogo-mailcow-1
464ca438b4c2   mailcow/unbound:1.23     "/docker-entrypoint.…"   11 days ago   Up 24 hours (healthy)   53/tcp, 53/udp                                                                                                                                                                                                                      mailcowdockerized-unbound-mailcow-1
373c1b7c5741   mailcow/netfilter:1.59   "/bin/sh -c /app/doc…"   11 days ago   Up 24 hours                                                                                                                                                                                                                                                 mailcowdockerized-netfilter-mailcow-1
6931fc976572   memcached:alpine         "docker-entrypoint.s…"   11 days ago   Up 24 hours             11211/tcp                                                                                                                                                                                                                           mailcowdockerized-memcached-mailcow-1
louis@mailserver:~$ 

4. What Are Docker Networks?[edit | edit source]

Docker allows containers to communicate with each other & the outside world using networks. By default, containers can access the internet. Custom networks allow you to connect certain containers to each other while keeping them separate from others.

For instance, in mailcow, docker networks make sure the mail server can talk to the database container securely without exposing the database to the entire internet.
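As a hedged sketch of that idea (the names app, db, & backend here are made up, not mailcow’s real ones): a compose file can put a database on a private network with no published ports, so only the other containers on that network can reach it.

```yaml
# Hypothetical sketch: db is reachable by app over "backend",
# but has no "ports:" entry, so nothing outside docker can connect to it
services:
  app:
    image: nginx:alpine
    ports:
      - "8080:80"        # the only doorway exposed to the outside
    networks:
      - backend
  db:
    image: mariadb:10.5
    networks:
      - backend          # private network shared with app only
networks:
  backend:
```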

5. What Are Docker Volumes?[edit | edit source]

A Docker volume is where data generated by a container is stored. Think of a docker container like a computer booting up from a read only floppy disk. Whatever you ran in your programs is gone the second you reboot the computer. The docker volume is the second disk in the computer that you can write to so that you can save things. Containers are where programs run (postfix, dovecot), and volumes are where things are stored (emails, pictures, videos, etc.). Volumes make sure that important data persists even if the container is removed or restarted.

Volume examples with different programs:[edit | edit source]

The docker-compose.yml file is what tells docker how to set up everything. In frigate, we are not creating docker volumes. Rather, we tell docker to map a directory on the host computer inside the docker container. Look here:

docker program that does not use docker volumes[edit | edit source]

In this file, the container “frigate” is specified on line 4 by container_name. Under services we specify our containers. There are no docker volumes specified here. Instead, we have told the system that whatever is in /home/louis/Downloads/programs/frigate/config on the host system should show up inside the frigate container at the directory /config. Without this, the config.yml file within the /home/louis/Downloads/programs/frigate/config directory would not show up inside the container. Even if I logged into the container using docker exec -it frigate bash and created a config.yml file in /config, it would be gone when I recreated the container.

version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "2048mb" # update for your cameras based on calculation above
    devices:
      - /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
      - /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/doc>
      - /dev/video11:/dev/video11 # For Raspberry Pi 4B
      - /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /home/louis/Downloads/programs/frigate/config:/config
      - /drive1thru8/securitycam:/data/db
      - /drive1thru8/securitycam:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971"
      - "5000:5000" # Internal unauthenticated access. Expose carefully.
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "password"

docker program that DOES use docker volumes[edit | edit source]

Check out mailcow. This is not the full docker-compose.yml configuration file, just a part of it. Look at lines 25-28. For the container mysql-mailcow, we have two docker volumes. (mysql-mailcow is a tiny virtual computer that runs the mysql database; mysql databases usually contain data on users, configurations, product orders, etc.) Whatever is in the mysql-vol-1 docker volume will show up inside the mysql-mailcow container at /var/lib/mysql.

It is using a docker volume instead of the main computer/operating system’s file system to store its files.

However, on line 28, we have - ./data/conf/mysql/:/etc/mysql/conf.d/:ro,Z which means that whatever is in the subfolder of our mailcow folder (where the docker-compose.yml file that we used to install mailcow lives) under data/conf/mysql/ will show up inside the docker container at /etc/mysql/conf.d/.

services:

    unbound-mailcow:
      image: mailcow/unbound:1.23
      environment:
        - TZ=${TZ}
        - SKIP_UNBOUND_HEALTHCHECK=${SKIP_UNBOUND_HEALTHCHECK:-n}
      volumes:
        - ./data/hooks/unbound:/hooks:Z
        - ./data/conf/unbound/unbound.conf:/etc/unbound/unbound.conf:ro,Z
      restart: always
      tty: true
      networks:
        mailcow-network:
          ipv4_address: ${IPV4_NETWORK:-172.22.1}.254
          aliases:
            - unbound

    mysql-mailcow:
      image: mariadb:10.5
      depends_on:
        - unbound-mailcow
        - netfilter-mailcow
      stop_grace_period: 45s
      volumes:
        - mysql-vol-1:/var/lib/mysql/
        - mysql-socket-vol-1:/var/run/mysqld/
        - ./data/conf/mysql/:/etc/mysql/conf.d/:ro,Z
      environment:
        - TZ=${TZ}
        - MYSQL_ROOT_PASSWORD=${DBROOT}
        - MYSQL_DATABASE=${DBNAME}
        - MYSQL_USER=${DBUSER}
        - MYSQL_PASSWORD=${DBPASS}
        - MYSQL_INITDB_SKIP_TZINFO=1
      restart: always
      ports:
        - "${SQL_PORT:-127.0.0.1:13306}:3306"
      networks:
        mailcow-network:
          aliases:
            - mysql
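Side by side, the two mount styles from the file above look like this (a trimmed-down sketch, not mailcow’s actual file; in the real file, the top-level volumes: section that declares named volumes lives further down than what is shown here):

```yaml
services:
  mysql-mailcow:
    image: mariadb:10.5
    volumes:
      - mysql-vol-1:/var/lib/mysql/               # named docker volume: docker decides where it lives on disk
      - ./data/conf/mysql/:/etc/mysql/conf.d/:ro  # bind mount: a folder you can see & edit on the host
volumes:
  mysql-vol-1:    # named volumes must be declared at the top level to exist
```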

mailcow docker volume descriptions[edit | edit source]

Here are some docker volumes used for mailcow:

louis@mailserver:/opt/mailcow-dockerized$ docker volume ls
DRIVER    VOLUME NAME
local     mailcowdockerized_clamd-db-vol-1
local     mailcowdockerized_crypt-vol-1
local     mailcowdockerized_mysql-socket-vol-1
local     mailcowdockerized_mysql-vol-1
local     mailcowdockerized_postfix-vol-1
local     mailcowdockerized_redis-vol-1
local     mailcowdockerized_rspamd-vol-1
local     mailcowdockerized_sogo-userdata-backup-vol-1
local     mailcowdockerized_sogo-web-vol-1
local     mailcowdockerized_solr-vol-1
local     mailcowdockerized_vmail-index-vol-1
local     mailcowdockerized_vmail-vol-1

main data storage[edit | edit source]
  • vmail-vol-1: The emails & attachment files
  • mysql-vol-1: Database stuff like user accounts/settings
  • redis-vol-1: Temporary data for faster load times

email processing[edit | edit source]
  • postfix-vol-1: Mail server configuration & logs
  • rspamd-vol-1: spam filter rules & training data
  • clamd-db-vol-1: Virus scanning database

webmail & user data[edit | edit source]
  • sogo-userdata-backup-vol-1: Backups of user settings & data
  • sogo-web-vol-1: Web interface files
  • vmail-index-vol-1: Helps search through old email quickly

random technical volumes[edit | edit source]
  • crypt-vol-1: Encryption-related data
  • mysql-socket-vol-1: This assists database communication
  • solr-vol-1: Search engine data

This seems like a lot[edit | edit source]

If this is too much, realize this: 99% of installing programs that are packaged with docker means doing the following:

  1. Downloading a docker-compose.yml file
  2. Running the command docker compose pull to grab the program.
  3. Running the command docker compose up -d to start the program.
  4. You’re done.
  5. If an idiot like me can do it, then so can you.

YOU DO NOT NEED TO BECOME AN EXPERT SYSTEMS ADMINISTRATOR OVERNIGHT.

The best way to learn is to try and understand things one part at a time. You do it like this:

  1. Set something up, have it work.
  2. Have no idea what you did.
  3. Mess around with it & enjoy it.
  4. Use the kick of dopamine from it working & enjoying it to get motivated.
  5. Read a piece of a config file just for the hell of it and see if it maps to anything in the program/what you’re doing.
  6. If it makes no sense, don’t worry about it, keep enjoying the program & increasing your stock of dopamine & happiness & satisfaction.
  7. Come back to it again later.
  8. Read a little bit.
  9. Read something on a forum/manual/guide that makes little sense to you, but maybe 1% more sense now than it did a week ago.
  10. Pat yourself on the back for understanding it even though you think this is kindergarten level & you’re an idiot & everyone else knows way more than you.
  11. Enjoy program more.
  12. Don’t crap on yourself because you don’t get everything.
  13. When bored sitting in a meeting you have no business wasting your time in, alt-tab over to your docker-compose.yml file.
  14. Google random parts & see what they do.
  15. Think about how that piece of software works. Google what the different words inside of the file do, what those programs are for, & how they relate to the program working as a whole.
  16. See if you understand 1% more now than before.
  17. Each percent you understand is not cumulative - it is exponential! Learning this stuff is a parabola. In the beginning, it is insanely slow. Once you get started & understand the foundation, learning increases at an exponential pace.
  18. You need to overcome that period where you feel like an imposter & a total idiot in order to get better.
  19. Realize that even complete experts know 0.0001% of what there is to know about all of this and usually specialize in one specific area, because to understand how everything works is damn near impossible.

Configuring Our Server’s Networking for Virtual Machines[edit | edit source]

What are virtual machines?[edit | edit source]

We are going to make use of virtual machines a lot. Virtual machines (VMs) are software-based computers running inside your physical server. This approach allows us to have separated, segmented computers running inside of our computer that are absolutely idiotproof to back up & restore. Key word there being idiotproof. Once I provide you with a working backup script, if you mess something up with any of the services (mailcow, freepbx, homeassistant, syncthing, immich, nextcloud, etc.) all you have to do is:

  1. Shut down the existing messed up virtual machines.
  2. Restore a single .qcow2 file from backup.
  3. Start up the virtual machine.

And everything works again. No confusing command line stuff, no editing config files. Depending on the host & network you move it to, you might have to edit the IP address configuration & ports forwarded in the firewall; besides that it will just work. This is beautiful, and so idiotproof even someone like me can do it. If I mess up my phone system, I can restore it in seconds without having to mess with any other part of my system. Did I mention it’s idiotproof? This is the most important quality of a system when I’m the one using it.

If the server’s hardware or OS drive fails, it’s easy to move the VMs to another device. Insanely easy. Take a backup of a single file & move it over easy.

nerd note: Yes, docker allows for containerized installs of everything. Yes, it’s faster, yes, it makes more sense in an enterprise environment… this is a beginner’s guide. Having a very easy backup script that allows copying & pasting a qcow2 file when you break something means you’ll actually use your backups rather than give up, which is important in the beginning. We will create segmented VMs for each of our purposes (identity/email, android/cloud services & sync, home automation, etc.) and they will have programs & services running in docker within them; often several. The backup solution will be backing up these VMs because it is dirt easy for a beginner vs. managing backing up all associated docker containers & volumes. If you want to manage this, go set up Kubernetes at a mid-sized company for someone & stop reading a newbie guide.

By running each set of services in its own VM, we isolate them from each other—so if one service has an issue, it won’t bring down the entire system.

One problem: I need these virtual machines to connect to the internet… and since they’re virtual, they have no network interface card… so I can’t plug them into my switch.

Since VMs don’t come with physical network interface cards (NICs) like a regular server, we need to create a virtual network interface that allows them to connect to the network and access the internet. This virtual interface acts as a bridge between the VMs and your server’s physical network connection.

Step 1: Disable Cloud-Init’s Network Configuration[edit | edit source]

Before changing the network configuration, you need to stop cloud-init from managing it.

Create the file to disable cloud-init’s network management by running the following command:

    sudo nano /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

Add this to the file:

    network: {config: disabled}

Save & close the file by typing Ctrl + X and hitting y to save it.

Step 2: Backup the Current Netplan Configuration[edit | edit source]

Make a backup of your current Netplan configuration.

Run the following command to back up the current 50-cloud-init.yaml file: sudo mv /etc/netplan/50-cloud-init.yaml /etc/netplan/50-cloud-init.yaml.bak

.bak makes sure that Netplan will not use it for creating a configuration. .bak also makes it easy to go “back” – you don’t have to remember the filename or the location of the original file. You just copy the backup file to the same filename in the same directory without the suffix .bak, there it is.
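If you want to see the .bak dance with zero risk, practice it on a throwaway file first. This sketch uses a temp directory and a pretend config file, not your real netplan config:

```shell
# Practice the backup-and-restore pattern on a scratch file
cd "$(mktemp -d)"                               # throwaway directory
echo "network: {config: disabled}" > demo.yaml  # pretend config file
mv demo.yaml demo.yaml.bak                      # "disable" it: demo.yaml is now gone
cp demo.yaml.bak demo.yaml                      # going "back": copy it into place again
cat demo.yaml                                   # original contents, unchanged
```

Same moves, same filenames, no way to break your server while you get comfortable with it.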

Step 3: Create a New Netplan Configuration[edit | edit source]

Since you disabled cloud-init, you can now modify the network configuration to create a bridge interface that your virtual machines can use.

  1. Find the name of your ethernet interface (the one the CAT5 cable plugs into):
[louis@livingroombauer ~]$ ls /sys/class/net
   enp4s0  lo

In my personal computer, enp4s0 is my network interface and lo is the loopback interface. The loopback interface allows the machine to talk to itself; other computers cannot contact this computer through it. This is useful if there is a service we would like to run that we do not want to be accessible to other machines on the network. enp4s0 is my ethernet port. On my server, eno1 is the interface that the ethernet port plugs into, so I will use that below. When you see me using eno1 as I set up my server, replace eno1 with the actual name of your network interface.

  2. Create or edit the Netplan configuration file by running this command:

    sudo nano /etc/netplan/01-netcfg.yaml
  3. Replace the content with the following configuration. I’ve added a comment on each line so you know how many spaces there should be:

    network: # 0 spaces
        version: 2 # 4 spaces
        renderer: networkd # 4 spaces
        ethernets: # 4 spaces
            eno1: # 8 spaces
                dhcp4: no # 12 spaces
        bridges: # 4 spaces
            br0: # 8 spaces
                dhcp4: no # 12 spaces
                addresses: # 12 spaces
                    - 192.168.5.2/24 # 16 spaces
                nameservers: # 12 spaces
                    addresses: # 16 spaces
                        - 192.168.5.1 # 20 spaces
                routes: # 12 spaces
                    - to: default # 16 spaces
                      via: 192.168.5.1 # 18 spaces
                interfaces: # 12 spaces
                    - eno1 # 16 spaces
  4. Once done, remember to change the permissions of your netplan file so netplan does not yell at you:

    sudo chmod 600 /etc/netplan/01-netcfg.yaml

Explanation of the Configuration:

  • eno1 will be part of the bridge (br0), but will no longer have an IP address directly.
  • br0 is the bridge interface that will be assigned the static IP 192.168.5.2.
  • The br0 interface will be configured with the same gateway and nameserver settings as before. The gateway is our pfSense router, which is what it connects to to get an IP address and reach the internet (a “gateway” to the world). The nameserver is also our router, which is what it asks to translate things like google.com into 142.250.138.101.
  • br0 is a virtual interface WE are creating. eno1 is an interface already present on this machine; simply put, eno1 is the ethernet port on my computer. Your network interface will most likely be called something else; this is ok! Use whatever your network interface is called, as it will be different for all machines.

NOTE: You are probably used to old school configuration files where a line like pasv_enable=YES means the same thing no matter how many spaces come before it. That is not how a YAML do. A single space is all that stands between you having a working setup & happiness, and total misery. YAML is sensitive to spaces: indentation errors matter, and can cause the config file to not work. Some text editors are helpful in editing yaml files so that it is easier to notice mistakes & errors. Some are not.
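For example, these two snippets differ only in leading spaces, and only the first one means what you want (an illustrative fragment, not a complete netplan file):

```yaml
# Correct: "version" is indented under "network", so it belongs to it
network:
  version: 2

# Wrong (shown commented out): "version" at column 0 would be a separate
# top-level key, not part of "network", and netplan will reject the file
# network:
# version: 2
```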

Step 4: Apply the New Configuration[edit | edit source]

Now that the configuration is ready, apply it.

  1. Run the following command to apply the new Netplan configuration:

    sudo netplan apply

    NOTE: You may make an error because yaml files are evil; to make sure the configuration works, run sudo netplan try before running sudo netplan apply. While yoda had a point with “do or do not, there is no try”, he never dealt with linux documentation.

  2. Verify that the bridge interface is up and has the correct configuration by running this command:

    ip addr show br0
  3. You should see that br0 has the IP address 192.168.5.2.

  4. Check the routing table to make sure that the default route is correctly set by running:

    ip route show
  5. Verify that the default route points to 192.168.5.1.

Step 5: Test Network Configuration[edit | edit source]

Verify that your server can still access the network after the changes.

  1. Ping your router by running:

    ping 192.168.5.1
  2. Ping an external IP to make sure connectivity by running. This is Google’s DNS server, which should be up all the time:

    ping 8.8.8.8

Step 6: Add iptables rules for bridging[edit | edit source]

For the bridge to work correctly, you need to allow traffic forwarding on the br0 bridge interface. This requires creating iptables rules & making them persistent across reboots. This is a very important detail, often left out of guides on setting up bridge interfaces. Skipping this part will result in a setup that doesn’t work; you will be stuck in the hell of posting on GNU/Linux forums where people with IQs of 180+ will tell you to “RTFM”, a man page, that is 2000000+ pages long.

This is analogous to Derek Jeter telling you to “just keep your eye on the ball.” right.

You may wonder why that is the case. Setting up things with open source software is like MacBook board repair 12 years ago: it’s a club & you’re not in it. Most teachers know their subject matter. As a result, they forget what it was like to try something for the first time.

This is why I am building a machine from scratch as I do this. Telling you how I did it on my machine will never work. There will always be some small detail I subconsciously assume you will know; or perhaps a detail I forgot myself since some of these services I’m showing you were set up in my closet ten years ago!

By performing the tasks from what I have written, I am forced to provide you with instructions that actually work!

  1. Run the following commands to add the iptables rules:

    sudo iptables -I FORWARD 1 -i br0 -j ACCEPT
    sudo iptables -I FORWARD 1 -o br0 -j ACCEPT

NOTE: These iptables rules let traffic go through the bridge interface so your virtual machines can work on your network. Without them, your virtual machines will not be able to connect to anything, and you won’t be able to connect to them. If you see that your virtual machine received an IP address in virtual machine manager, but it can’t connect to anything, you likely skipped this step.

The order of rules in iptables matters: inserting rules with -I puts them at the top, where they are checked first. If traffic forwarding does not work as expected, check the rules & their order by running sudo iptables -L -v -n.

  2. Verify the iptables rules by running:

    sudo iptables -L

    You should see the rules for accepting traffic on br0.

Step 7: Make iptables Rules Persistent[edit | edit source]

To make sure the iptables rules are applied after a reboot, you need to save them and configure them to load automatically on startup.

  1. Install the iptables-persistent package:

    sudo apt install iptables-persistent
  2. During installation, you’ll be asked if you want to save the current iptables rules. Choose Yes.

  3. If you’re not prompted, you can manually save the rules by running:

    sudo netfilter-persistent save

    NOTE: Installing iptables-persistent is what allows your iptables rules to stick after a reboot. This is a server - you’re not going to turn this off very often. Nine months from now when you DO turn off this server, you’re not going to remember a single damn character from this guide; much less that iptables rule above! Nor will you remember that that rule not being present is why none of your virtual machines work.

  4. Confirm the rules are saved by checking the file at /etc/iptables/rules.v4.

With these changes, your bridge interface will now correctly allow traffic to flow through the virtual machines. The iptables rules will persist across reboots, and your virtual machines will be able to grab IP addresses from the same network as your host machine.

Preparing Ubuntu Server for Virtual Machine Management[edit | edit source]

Next, let’s set up Ubuntu Server for use with virtual machines using Virtual Machine Manager (virt-manager). We’ll cover everything from preparing the ISO file to configuring the virtual machine with a static IP address, including the installation of a lightweight GUI for easier management.

Step 1: Prepare the Ubuntu Server ISO[edit | edit source]

Before creating the virtual machine, you need to place the Ubuntu Server ISO file in the correct directory and set the proper permissions.

  1. Place the ISO file you used to create your installable Ubuntu USB onto your server. You can do this by attaching a disk to it, or by using SFTP (file transfer over SSH) with a program like FileZilla to transfer it over. Or, if you’re an animal, you can download it again by going to ubuntu.com and downloading the LTS version of Ubuntu Server again.

  2. Move the ISO file to /var/lib/libvirt/images/, obviously changing the source location & filename to whatever yours is. As long as the file ends up in /var/lib/libvirt/images/ we’re good:

    sudo mv ~/Downloads/ubuntu-server.iso /var/lib/libvirt/images/
  3. Change the ownership and group of the ISO file:

    sudo chown libvirt-qemu:libvirt /var/lib/libvirt/images/ubuntu-server.iso
  4. Set the correct permissions:

    sudo chmod 0640 /var/lib/libvirt/images/ubuntu-server.iso
  5. To apply these settings to all ISO files in the directory:

    sudo chown libvirt-qemu:libvirt /var/lib/libvirt/images/*.iso
    sudo chmod 0640 /var/lib/libvirt/images/*.iso

Note: These settings make sure that the libvirt-qemu user, which runs the QEMU processes, can read and write the file, while members of the libvirt group can read it. Other users will have no access, so virsh & related tools can access the ISO files but others can’t.
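If the 0640 notation is new to you, you can poke at it safely on a throwaway file; stat -c '%a' prints a file’s permission digits (owner, group, others):

```shell
# What chmod 0640 actually does, demonstrated on a scratch file
f="$(mktemp)"          # make a throwaway file
chmod 0640 "$f"        # owner: read+write (6), group: read (4), others: nothing (0)
stat -c '%a' "$f"      # prints: 640
rm -f "$f"             # clean up
```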

Step 2: Update Your System[edit | edit source]

Make sure your system is up to date:

sudo apt update ; sudo apt upgrade -y

Note: Some GNU/Linux distributions update during installation, but it’s always good to check.

Step 3: Install Openbox and Virtual Machine Manager[edit | edit source]

We’ll install a lightweight desktop environment (Openbox) and Virtual Machine Manager:

sudo apt install --no-install-recommends xorg openbox xinit virtinst qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager

The --no-install-recommends flag makes sure only the core components are installed without any additional unnecessary packages.

Step 4: Enable and Start Libvirt[edit | edit source]

Enable libvirt to start on boot and start it immediately:

sudo systemctl enable libvirtd
sudo systemctl start libvirtd

Step 5: Add Your User to Necessary Groups[edit | edit source]

To allow your user to configure virtual machines, add yourself to the required groups:

sudo usermod -aG libvirt,kvm $USER

NOTE: Adding your user to the libvirt & kvm groups is useful so you do not have to become superuser/sudo for virt-manager (the virtual machine manager GUI) or virsh to work right. Log out & log back in to make sure the group membership has taken effect after doing this.
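To confirm the groups took effect after logging back in, you can check the output of `id -nG`. A small sketch of that check (the `groups=` line here is a stand-in; on your server you would use `groups=$(id -nG "$USER")`):

```shell
# Sketch: verify a groups list contains the two groups we need.
# $groups is hardcoded sample data standing in for: id -nG "$USER"
groups="adm sudo libvirt kvm"
for g in libvirt kvm; do
  case " $groups " in
    *" $g "*) echo "$g: ok" ;;
    *)        echo "$g: MISSING - log out and back in" ;;
  esac
done
```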

Step 6: Start the GUI[edit | edit source]

To start the graphical interface, use the following command:

startx

Note: We will be using the GUI for Virtual Machine Manager. Any time you are NOT using this, exit the GUI (right click the desktop & log out), then type exit from the command line terminal so your machine is not logged in. Even if someone breaks into your house, they’ll have physical access to your computer; but that doesn’t mean they have easy access to your encrypted data.

6.1(OPTIONAL): Remote Desktop Access with x11vnc and TigerVNC[edit | edit source]

x11vnc is like teamviewer if teamviewer were open source and 50x slower. x11vnc allows you to connect to your server’s GUI for remote access, as if you were right in front of the computer, without having to be in the room with your computer. Up until now, we have been SSHing into the machine in order to enter terminal commands, but normal SSH won’t work if we want to use the graphical user interface, see a mouse cursor, etc.

Note: If you are ok with sitting in front of your server computer with a keyboard, mouse, & monitor plugged into it when using virtual machine manager, this step is unnecessary and you can feel free to skip it.

This will walk you through setting up remote desktop access to your Ubuntu Server using x11vnc and TigerVNC. I like using virtual machine manager GUI to install virtual machines on the main server. Since we keep going to the GUI to install virtual machines & use virtual machine manager via the GUI, we’re stuck sitting in front of the server, which sucks if it’s in a closet or garage. Here’s how you can connect to it to view what is on its screen from another computer.

6.2 Installing x11vnc on Ubuntu Server[edit | edit source]

To install x11vnc, run the following command in your terminal:

sudo apt update && sudo apt install x11vnc

This will install the x11vnc package and its dependencies on your server.

6.3 Set a Password for VNC Authentication[edit | edit source]

x11vnc uses a password for authentication, and you can set this password as follows:

x11vnc -storepasswd

You will be prompted to enter a password. This password will be saved in the default location ~/.vnc/passwd.

6.4 Set x11vnc to Listen on All Interfaces on Port 5920[edit | edit source]

Open a terminal and run the following:

x11vnc -rfbport 5920 -usepw -auth ~/.Xauthority -display :0 -forever -norc -noxdamage -shared

Here is why this helps clients like Remmina connect:

1. -rfbport 5920

This sets the port on which the VNC server will listen for connections. VNC defaults to port 5900, but I like to use a non-standard one because I am strange.

2. -usepw

This option enables password authentication for VNC clients. It uses the password you set with x11vnc -storepasswd beforehand.

  • Password authentication is standard for VNC clients like Remmina. Without this, some clients might reject the connection for security reasons. And it’s just good practice.

3. -auth ~/.Xauthority

The -auth option tells x11vnc which Xauthority file to use to access your X session. Since we started the session ourselves with startx, that file is ~/.Xauthority. If you ever run a display manager like GDM instead, the path will differ (something like /run/user/$(id -u)/gdm/Xauthority).

  • Why it helps: Instead of relying on -auth guess (which might not always find the right file), specifying the Xauthority file explicitly guarantees that x11vnc can properly access the graphical session. If x11vnc can’t authenticate to the display, no client can connect.

4. -display :0 This option specifies which X display to serve via VNC. The display :0 is typically the primary display for your desktop session (the one you see on your monitor). It makes sure x11vnc is connecting to the right graphics session. If it were set to the wrong display, you’d either get a black screen or your client wouldn’t connect at all.

5. -forever Normally, x11vnc stops running after the client disconnects. The -forever flag keeps it running indefinitely; it would suck to have to restart it every time you disconnect & reconnect. Without this, x11vnc would stop after Remmina disconnects, and you’d have to restart it manually for every new connection. I still like stopping x11vnc manually once I am done.

6. -norc This option tells x11vnc not to load a configuration file (which might contain unwanted settings); we are only using the settings on this command line.

7. -noxdamage The Xdamage extension tracks changes to the screen, but sometimes it can cause display corruption or update issues in VNC clients. The -noxdamage flag disables this extension to avoid those problems. Some VNC clients who shall not be named fk up refreshing the screen properly when Xdamage is enabled. Disabling it avoids artifacts & stuck-screen issues.

8. -shared This option allows multiple clients to connect simultaneously to the VNC server. If this option isn’t set, only one client can connect at a time, and additional connection attempts (such as from Remmina) would fail. Enabling -shared makes sure that you can connect with multiple devices or clients without being disconnected when another connects.
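The flags above can be wrapped in a tiny launcher so you can’t accidentally start x11vnc without a password. A minimal sketch (the start_vnc name is mine, not part of x11vnc; it uses exactly the flags explained above):

```shell
# Minimal launcher sketch: refuse to serve VNC if no password was stored.
start_vnc() {
  if [ ! -f "$HOME/.vnc/passwd" ]; then
    echo "no VNC password found; run: x11vnc -storepasswd" >&2
    return 1
  fi
  x11vnc -rfbport 5920 -usepw -auth "$HOME/.Xauthority" -display :0 \
         -forever -norc -noxdamage -shared
}
```

Call `start_vnc` from the terminal after you have logged in and started the GUI.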

6.5 Installing TigerVNC Viewer on the Client[edit | edit source]

To connect to the VNC server from a client machine, you need a VNC viewer. The following steps will install TigerVNC Viewer (also known as vncviewer) on the client (your GNU/Linux computer you are reading this on):

  1. Update the package list and install TigerVNC Viewer:

    sudo apt update && sudo apt install tigervnc-viewer -y
  2. Once installed, you can use vncviewer to connect to the server.

  3. If you use Windows or a Mac, you’re on your own, my friend. Find a VNC client that doesn’t suck.

6.6 Connecting to the VNC Server[edit | edit source]

Now that everything is set up, you can connect to your server.

  1. On your local machine, use the following command:

    vncviewer 192.168.5.2:5920 -SecurityTypes VncAuth
  2. Note: Replace 192.168.5.2 with your server’s actual IP address. In our case, we can also use the domain happycloud.home.arpa since we set up a static mapping earlier for our server in pfSense.

  3. When prompted, enter the VNC password you set earlier.

You should now have a remote desktop connection to your Ubuntu Server. Remember: start x11vnc only after you have logged in & typed startx to start Openbox, otherwise there is no display for it to serve.

Step 7: Using Openbox[edit | edit source]

Once you’ve installed Openbox and typed startx, Openbox starts:

  1. Right-click on the desktop to open the application menu.
  2. Navigate to System > Virtual Machine Manager; This is what we are going to be using to create virtual machines.

At this point, we have our dependencies set up for virtual machine management, and bridge networking configured so that our virtual machines can go online. We’re ready to set up our first virtual machine!

Creating a Virtual Machine[edit | edit source]

Our first virtual machine will be for mailcow and bitwarden. These provide the following:

Mailcow:

  • Self managed email server for sending & receiving mail
  • Integrated spam management & web interface
  • Calendar & contacts syncing with mobile devices
  • A lovely, “just works” mashup of:
    • Postfix for sending mail
    • Dovecot for receiving mail
    • rspamd for killing spam
    • SOGo for webmail/calendar/contacts

Bitwarden:

  • Password management across devices, browsers, phones, computers, etc.
  • Alerts when your passwords have been found in a breach

Note: These instructions will carry over into many other virtual machine installs we will be doing. I will ask you to refer back to this section. Often, the only thing you will be doing is changing the RAM amount and CPU cores allotted to the VM, and the IP address you choose as you install.

Options for Virtual Machine Creation[edit | edit source]

When you start creating a new virtual machine, you’ll see several options. We’re going to use “local install media” because we’re working with the ISO image of the Ubuntu server we downloaded. But before getting into that, let me explain the “import existing disk image” option, which is pretty cool :)

Import Existing Disk Image[edit | edit source]

Imagine you’ve got a bunch of old laptops lying around, each running different servers. Maybe you’ve got a Dell Latitude D620 from 2006 or a piece of junk Lenovo with a dying northbridge running your entire business phone system; not that I ever did that.

But if you did, you could use a tool like ddrescue to make a disk image of each server. Then, you can import them into your virtual machine setup and keep them running without separate installations. It’s a useful method of consolidating everything onto one machine until you have time to set things up properly!

Local Install Media[edit | edit source]

This option expects us to choose a disk image (whether for a CD-ROM or a USB stick) that we will use to make a fresh installation onto our computer. This option is for when we want to create our own virtual machine from scratch, and is what we are going to be using.

Step 1: Setting up Virtual Machine Manager (virsh)[edit | edit source]

1.0 Create new virtual machine[edit | edit source]

In Virtual Machine Manager, click “Create a new virtual machine” (usually the first icon on the toolbar or select File > New Virtual Machine from the menu).

1.1 Choose Installation Media[edit | edit source]

  • Select “Local install media (ISO image or CDROM)” and click Forward.
  • Click Browse to select your Ubuntu Server ISO.
  • Choose the ISO file you prepared earlier (e.g., /var/lib/libvirt/images/ubuntu-server.iso) and click Forward.

1.2 Choose Operating System Version[edit | edit source]

  • Virtual Machine Manager may automatically detect the OS. If not, search for ubuntu and choose what is closest to your version. When in total doubt, linux generic 2022 works. Click Forward.

1.3 Configure Memory and CPU[edit | edit source]

  • Allocate the resources for your VM:
    • Set RAM: (e.g., 4096 MB).
    • Set vCPUs: (e.g., 2 CPUs max for what we are doing here).
  • Click Forward.

1.4 Configure Storage[edit | edit source]

  • Select “Create a disk image for the virtual machine”.
  • Allocate an initial disk size that is whatever you think the maximum amount of storage you will need for email, contacts, and calendar is (e.g., 25 GB). You’ll be able to resize this disk later, so make sure it’s large enough for your initial installation but leave room for growth.
  • Make sure the disk image format is QCOW2. This format supports resizing, and other cool features.
  • Click Forward.

NOTE: QCOW2 format has a lot of useful features. It supports snapshots, which we aren’t using for our virtual machine backups, but it’s nice to have if you choose to use that. More importantly, qcow2 supports “sparse file allocation,” i.e. it only uses physical disk space as it needs it. Just because you say a virtual machine has access to 300 gigabytes doesn’t mean it creates an image that actually takes up 300 gigabytes.
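You can see sparse allocation in action with plain coreutils; `truncate` here is a stand-in for what qemu-img does when it creates a qcow2 image:

```shell
# Claim a huge file without using real disk space (same idea as qcow2):
f=$(mktemp)
truncate -s 300G "$f"   # logical size is now 300 GB
ls -lh "$f"             # shows the 300G apparent size
du -k "$f"              # actual blocks used on disk: 0
rm -f "$f"
```

The guest sees the full size it was promised; your host only spends disk as the guest actually writes data.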

1.5 Set Up Networking with the Bridge Interface[edit | edit source]

  • Choose “Bridge device” under “Network Selection”.
  • In the Device Name field, type br0 (or whatever name you have given your bridge interface).
  • This will allow the VM to grab a static IP from the same network as your host machine, making sure it acts like an independent hardware device.
  • Click “Finish”.

NOTE: Choosing “bridge device” allows the virtual machine to appear like a unique hardware device on your network. That’s the idea, for each of our virtual machines to seem like Pinocchio; a real machine :)
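For the curious: choosing “Bridge device” with br0 ends up as a NIC definition like this in the VM’s libvirt XML (a sketch; you can inspect the real thing later with virsh dumpxml, and virtio is the usual model for Linux guests):

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```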

1.6 Finish & Customize Before Installing[edit | edit source]

  • Name your virtual machine (e.g., “mailserver”), whatever you think makes sense for a contacts/calendar/mail machine.
  • Click “Finish”.

Step 2: Install Ubuntu Server as a Virtual Machine[edit | edit source]

Note: I will be blazing through the installation of Ubuntu here, since we already installed Ubuntu Server once onto this physical server.

Keep in mind the following:

We are NOT using LUKS encryption here. There is no need since the image is going to be stored on an encrypted partition.

We are NOT using RAID – this is a disk image that is being stored on a RAID array, so we are not doing that.

We are configuring networking the same as we did before, but we will be using a different IP address!

NOTE: Setting a unique static IP address for each virtual machine is necessary. Otherwise it’s like having 5 businesses in the same building and expecting the postman to deliver laptops to the right address… not that that’s ever a problem that plagued me in New York City.

If something else is using that IP address, you cannot use it again. You don’t want your packets ending up in Berwick, Maine.

2.1 Start the installation process in the virtual machine[edit | edit source]

Choose your language and select “Try or install Ubuntu Server”.

Follow the installation prompts.

2.2 Configure Static IP Address[edit | edit source]

  • When you reach the Network configuration screen, select your network interface.
  • Choose the option “Configure network manually”.
  • Enter the following details:
    • IP Address: 192.168.5.3
    • Subnet: 192.168.5.0/24
    • Gateway: 192.168.5.1
    • Nameserver: 192.168.5.1
  • Make sure you enter all the details correctly so that the virtual machine has the correct static IP configuration.
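Behind the scenes, the installer turns those answers into a netplan file. Roughly what it writes for the choices above (a sketch; the file name and the interface name on your VM will differ):

```yaml
# /etc/netplan/00-installer-config.yaml (name may differ on your install)
network:
  version: 2
  ethernets:
    enp1s0:                      # your interface name will differ
      addresses: [192.168.5.3/24]
      routes:
        - to: default
          via: 192.168.5.1
      nameservers:
        addresses: [192.168.5.1]
```

Knowing where this lives is handy if you ever need to change the VM’s IP after installation: edit the file, then run `sudo netplan apply`.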

2.3 Partition the virtual “drive”[edit | edit source]

  • When you reach the Filesystem setup section, select “Use an entire disk” and then choose the disk you want to install Ubuntu Server on.
  • Choose the option “Set up this disk as an LVM group”.
  • Important: At this stage, edit the partition sizes. Ubuntu’s installer usually allocates 2 GB for boot, which is ridiculous, and even worse, it only uses half the available space for your LVM & root. The Ubuntu auto partitioner is horrible.
  • Reduce the boot partition to 512 MB.
  • Delete the old LVM & root partition.
  • Create a new LVM taking up the entire disk.
  • Create a logical volume for the root filesystem, using all available space.
  • Do not encrypt the volume (it’s unnecessary since the host drive is already encrypted, and it is not my intention for you to have these VMs running on other people’s servers).

2.4 Finalize installation & do not install docker[edit | edit source]

  • Set up your username and password.
  • Choose to install OpenSSH server.

NOTE: DO NOT CHOOSE TO INSTALL PACKAGES THROUGH THE PROMPTS AFTER THIS. THEY INSTALL VIA SNAP. DOCKER INSTALLED VIA SNAP IS CANCER. USING THE SNAP VERSION OF DOCKER WILL PROVIDE YOU WITH MANY AGGRAVATING HEADACHES. DON’T DO IT. IGNORE ME NOW? SUFFER LATER!

  • After configuring the partition sizes, proceed with the installation process as usual, following the prompts to set up any additional software you want to install.
  • Once the installation is complete, the system will automatically apply your network and partitioning settings.
  • When prompted, remove the installation media (ISO) from the virtual machine settings.
  • Restart the virtual machine.

Step 3: Post-Installation Tasks[edit | edit source]

3.1 Remove the CDROM[edit | edit source]

  • Go to View → Details in virtual machine manager
  • Go to SATA CDROM on the left side.
  • Confirm that the source path is the Ubuntu ISO we downloaded for installing Ubuntu server on this virtual machine
  • Click Remove in the lower right corner.
  • UNCHECK Delete associated storage files – we will use this image again later!
  • Click delete.
  • You may have to turn off the VM to do this.

3.2 Set Up Static IP Mapping in pfSense:[edit | edit source]

  • Log into your pfSense router.
  • Go to Diagnostics > ARP Table.
  • Find the MAC address associated with your server’s IP (e.g., 192.168.5.3), copy it.
  • Go to Services > DHCP Server.
  • Scroll to the bottom and click “Add static mapping”.
  • Enter the MAC address and IP address of your server.
  • Give it a descriptive name (e.g., “mailserver static IP”).
  • Set the hostname to mailserver
  • Save and apply changes.

Note: This makes sure that this IP address is reserved for this computer to connect to, so that no other device can take it (unless they are spoofing MAC addresses, but if someone does, that’s a different story).

3.3 Set up this virtual machine to start at boot:[edit | edit source]

virsh autostart mailserver
  • Check that this is set up properly by typing virsh dominfo mailserver and seeing if the autostart line is set to enable.
  • If you don’t do this, you will realize once it is too late & you’ve left your house after you have rebooted your server (for whatever reason) that none of your services are working. This will suck.
  • This command makes it so that the virtual machine starts each time we boot the computer.
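If you want to script that sanity check instead of eyeballing it, you can grep the dominfo output. A sketch (the `dominfo=` variable here is hardcoded sample data standing in for the real output of `virsh dominfo mailserver`):

```shell
# Sketch: verify the Autostart flag from virsh dominfo output.
# $dominfo is sample text standing in for: virsh dominfo mailserver
dominfo="Name:           mailserver
Autostart:      enable"
case "$dominfo" in
  *"Autostart:"*enable*) echo "autostart ok" ;;
  *) echo "autostart NOT set - run: virsh autostart mailserver" ;;
esac
```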

Calendar & Contacts using SoGo within Mailcow[edit | edit source]

No more saving your contacts & calendar to your Gmail account or iCloud – keep it all on your OWN server!

This guide will walk you through the process of installing and configuring mailcow on Ubuntu Server. Mailcow is an excellent solution for managing email, contacts, and calendars. It simplifies the setup of multiple mail-related services (Postfix, Dovecot, rspamd, the SOGo web interface, CalDAV/CardDAV), making it easier (I will never use the word “easy” to describe self-managed email) to maintain a secure, working mail server with calendar & contacts sync. Mailcow’s ease of use and strong community support make it perfect for self-hosting these services.

You will come to appreciate mailcow’s simplicity when we set up postfix manually for FreePBX & ZFS filesystem alerts in later sections.

Prerequisites:[edit | edit source]

For self-hosted calendar & contacts:[edit | edit source]

For self-hosted email:[edit | edit source]

These instructions are going to serve as a base for each of our installations of a virtual machine that uses Ubuntu Server. I will ask you to refer back to these later.

Step 1: Prepare Ubuntu Server[edit | edit source]

You can either work through virtual machine manager, since it provides a console view of your virtual machine, or SSH in from another computer.

1.1 Update and upgrade your system[edit | edit source]

sudo apt update && sudo apt upgrade -y
sudo apt install curl git wget -y

1.2 Check for other Docker installations:[edit | edit source]

Run docker --version and see what is installed. Nothing should be installed yet since this is a fresh system. If something is installed, remove it.

# Just in case you accidentally installed snap version of docker:

sudo snap remove docker

For other versions of docker: 

sudo apt remove docker docker-engine docker.io containerd runc

1.3 Install Docker using official Docker script:[edit | edit source]

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Note: It’s very important to use the official Docker installation and not the Snap version. The Snap version can cause issues due to its sandboxed nature, making it a mess for mailcow’s requirements. Docker snap makes me sad, and it’ll make you sad too if you try to make things work with it.

Editor's Note: Louis uses the convenience script provided by Docker here. This method is only recommended for testing and development environments and may not update your dependencies correctly. For installation methods meant for a production environment, see the official Docker manual.

1.4 Install Docker Compose:[edit | edit source]

Ubuntu’s docker-compose-plugin is safe to use, it is not snap cancer.

sudo apt install docker-compose-plugin -y
sudo systemctl enable --now docker

1.5 Verify the install[edit | edit source]

Run docker compose version and make sure the version is 2.0 or higher. Run docker --version and make sure the version is 24.0.0 or higher.
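If you’d rather let the shell do the comparison, `sort -V` (GNU coreutils’ version sort) works well for dotted version strings. A sketch, with the `version_ge` helper being my own name, and the sample versions below being made-up inputs rather than output from your machine:

```shell
# Sketch: true if version $1 >= version $2, using version sort.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Sample checks against the minimums named above:
version_ge 24.0.7 24.0.0 && echo "docker ok"
version_ge 1.29.2 2.0    || echo "compose too old"
```

On your server you would feed it the real numbers, e.g. `version_ge "$(docker --version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)" 24.0.0`.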

1.6 Set proper permissions:[edit | edit source]

Docker needs to be run as root for some operations, but you can add your user to the docker group to avoid using sudo all the time. To be clear, mailcow’s own documentation and community suggest starting with root or sudo, and you should trust them more than me. To quote mailcow developers, “Controlling the Docker daemon as non-root user does not give you additional security. The unprivileged user will spawn the containers as root likewise. The behaviour of the stack is identical.” Run this command to add your user:

sudo usermod -aG docker $USER

Log out and log back in, or run: newgrp docker

Step 2: Install mailcow[edit | edit source]

2.1 Clone the mailcow repository[edit | edit source]

cd /opt
sudo git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized

2.2 Set the correct permissions[edit | edit source]

Run umask 0022
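Why this matters: the umask controls the permissions of everything created afterwards, and mailcow expects the default 0022 so the files git and the scripts create are readable where they need to be. A quick scratch-directory demonstration of what 0022 yields:

```shell
# umask 0022 strips group/other write bits from anything new:
umask 0022
d=$(mktemp -d)
touch "$d/file"; mkdir "$d/dir"
stat -c '%a %n' "$d/file" "$d/dir"   # 644 for the file, 755 for the dir
rm -rf "$d"
```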

2.3 Generate the configuration file[edit | edit source]

Run sudo ./generate_config.sh

When prompted, enter your Fully Qualified Domain Name (FQDN), such as mail.yourdomain.com.

2.4 Start mailcow services[edit | edit source]

sudo docker compose pull
sudo docker compose up -d

The -d option runs mailcow in detached mode so it continues running in the background.

2.5 Verify the installation[edit | edit source]

Run

sudo docker ps

You should see a list of containers associated with mailcow running.

Step 3: Access and Configure mailcow[edit | edit source]

Firstly, my screenshots from this guide were missing the top bar – my apologies. Refer to this below to see what was cut off, and hopefully where my mouse is clicking in both the rest of the screenshots & the video will make sense:

3.1 Access the web interface[edit | edit source]

Open a browser and navigate to https://mailserver.home.arpa, replacing mailserver with the hostname you set when installing Ubuntu and creating the static mapping. With pfSense’s default domain home.arpa and the hostname mailserver, the address is https://mailserver.home.arpa. If unsure, use the IP address: if the IP is 192.168.5.3, it would be https://192.168.5.3/. Ignore the warning about self-signed certificates since this is a new installation.

3.2 Log in with default credentials[edit | edit source]

  • Username: admin
  • Password: moohoo

3.3 Change the administrative password[edit | edit source]

  • Click on System → Configuration in the top menu.
  • Click on Access → Administrators.
  • Find the admin account and click the edit (pencil) icon.
  • Enter a new, strong password.
  • Click “Save changes”.

3.4 Add a domain[edit | edit source]

  • Go to Email → Configuration on the top menu.
  • In the Domains tab, click Add domain.
  • Enter your domain name (e.g., yourdomain.com).
  • Set any desired options (quota, aliases, etc.).
  • Click Add domain (for example, louishomeserver.chickenkiller.com).

3.5 Add an email account[edit | edit source]

  • Go to Email → Configuration → Mailboxes.
  • In the Mailboxes tab, click Add mailbox.
    • Enter the username (the part before @ in the email address).
    • Choose the domain name (e.g., [email protected]).
    • Set a password for the mailbox.
    • Make sure to check Grant direct login access to SOGo
      • This is what we use for webmail, calendar & contacts
  • Click Add to add your mailbox.

Step 4: Accessing Calendar/Contacts Services[edit | edit source]

  1. Go to https://mailserver.home.arpa, or in this case, https://192.168.5.3/ - this was the IP address & hostname I suggested utilizing for this machine.
  2. Log in with your credentials.
  3. Click on “Apps” in the top right corner.
  4. Select SOGo to access webmail, calendar, and contacts.
  5. Alternatively, go directly to webmail by visiting https://192.168.5.3/SOGo. This can be faster, but the standard login section offers important spam control options, so I suggest browsing around there first. SOGo is the web interface, similar to gmail.com.

Note: When logging in, make sure to use your full email address. This is necessary because mailcow supports multiple domains, so the full email address is required to identify the correct account.

You can also set up your email client or mobile device using the configuration details provided in the mailcow interface.

You’ve now successfully set up mailcow on your Ubuntu Server. This is the base of a great self-hosted solution for email, calendars, and contacts.

Right now, we are not setting up email but focusing on calendar and contacts. For mobile access and syncing, we’re going to set up DAVx⁵ on an Android device and configure OpenVPN for secure remote access to your server. This will let you automatically sync calendar & contacts from anywhere, for multiple calendars and multiple devices!

Step 5: Sync Android with mailcow using DAVx⁵[edit | edit source]

5.1 Installing DAVx⁵ on an Android Phone[edit | edit source]

  1. Open the F-Droid store on your Android phone. If not installed, download it from https://f-droid.org/.
  2. In F-Droid, search for “DAVx⁵”.
  3. Locate DAVx⁵ in the results and tap on it.
  4. Tap the “Install” button to download and install .
  5. Once installed, open DAVx⁵.
  6. Grant all requested permissions when prompted. These typically include:
    • Access to contacts
    • Access to calendars
    • Access to storage
  7. You may see a donation request screen. While appreciated by developers, you can skip this for now. But they’re nice people, so think about giving them some money.

5.2. Installing Fossify’s Calendar App on Android Using F-Droid Store[edit | edit source]

  1. Open the F-Droid store on your Android phone.
  2. In the search bar, type “Calendar” and find the one made by Fossify. You sometimes have to click the app to figure out who made it. It’s worth it; their app is the only one that works properly.
  3. Press the “Install” button to download and install the app.

Note: Fossify Calendar is a fork of Simple Mobile Tools’ calendar app, maintained by developers who prioritize privacy and open-source principles. Simple mobile tools’ app was bought by a cancerous spyware company. IF YOU WERE USING OLD SIMPLE MOBILE TOOLS APPS – UNINSTALL THEM OR DO NOT ALLOW THEM TO AUTO UPDATE AGAIN, EVER.

5.3. Make Sure Android Phone’s OpenVPN Connection is Still Connected[edit | edit source]

  1. Locate the OpenVPN Connect app on your Android phone.
  2. Open the app and check the connection status.
  3. If not connected, tap on the profile you created earlier (e.g., “Home VPN”).
  4. Tap the “Connect” button.
  5. Wait for the connection to establish. You should see a “Connected” status.

Important: Make sure you’re connected to your home network via OpenVPN before attempting to sync your contacts and calendar. If you’re not, it won’t find your server, since we haven’t forwarded any ports, and you are using local IP/hostnames to connect to it. Your router knows who mailserver.home.arpa is, your router knows who 192.168.5.3 is. To the outside world, this means nothing… and further, you’re not open to the outside world anyway.

Think of it like the difference between saying “I want to find Sabrina Carpenter” to a hotel bellhop, vs. “I want to find my girlfriend.” Girlfriend only means something in reference to you. Mailserver.home.arpa only means something to you. The rest of the world has no idea who the fk that is.

5.4 Adding Mailcow acct to your phone in DAVx⁵[edit | edit source]

  1. Open the DAVx⁵ app on your Android phone.

  2. Tap the ⊕ (Add account) button to set up a new connection.

  3. Choose “Login with URL and username”.

  4. In the Base URL field, enter one of the following:

    Note: Use https:// at the beginning of the URL for a secure connection. If it bitches at you, use http:// - we’re connecting to this via OpenVPN which provides incredibly secure encryption anyway.

  5. Enter your login credentials:

    • Username: Your full email address (e.g., [email protected])
    • Password: Your mailcow account password
  6. Tap Login or Next to proceed.

  7. If you see a certificate warning (due to a self-signed certificate), hit ACCEPT; this is your server. If you misfollowed something here so bad that you even have the ability to connect to someone else’s server right now, you amaze me more than the assistant engineer on the set of Gaucho.

NOTE: Self-signed certificates are common & normal when setting up a home self managed server. They are not normal on the regular internet.

The entire point of a certificate is that a trusted certificate authority has verified a site is who it claims to be. When you go to amazon.com, someone authoritative is vouching that it is actually Amazon, so some scammer can’t pretend to be amazon.com tomorrow.

For that authority to be able to vouch for amazon, they have to be able to ACCESS amazon.

We aren’t letting anyone access our server; and that’s the point. It’s only open via VPN - therefore, we can’t get a real certificate. You could open the port temporarily, and then close it right after you get the certificate, but that just feels dirty.

It’s fine to accept this warning for your OWN server; but don’t let this fly when you’re putting your credit card details or bank password into someone else’s website.

  8. When prompted for an account name, use your email address.

  9. On the next screen, you’ll see options for syncing different data types:

    • For Contacts: Enable “CardDAV” sync
    • For Calendar: Enable “CalDAV” sync
    • For Tasks (optional): Enable if you plan to use this feature, I don’t though.
  10. Tap “Create account” or “Finish” to complete the setup.

5.5 Adjusting Sync Settings[edit | edit source]

After setting up your account, adjust the sync settings so you actually enjoy using this over Google/iCloud. The default sync interval is every 4 hours, which is horrible.

  1. In the DAVx⁵ app, find and tap on the account you just created.
  2. Look for sync settings, which will be in the settings, that you get to by clicking on the gear icon at the top of the application.
  3. Set up the sync intervals:
    • For server changes: Set to every 15 minutes (this is usually the minimum allowed interval)
    • For local changes: Set to immediate.
  4. Tap on each sync type (e.g., “Contacts” or “CardDAV”).
  5. Look for sync interval settings within each category.
  6. Set server sync to 15 minutes and local changes to immediate for each.

Important Notes:

  • The exact menu layouts and option names may vary slightly depending on your DAVx⁵ version.
  • Remember that for the 15-minute sync interval to work, make sure that DAVx⁵ is exempted from battery optimization settings on your Android device. Android batteries are glued into the phone and most phones don’t let you limit charging to 80-90%, meaning the battery of the phone you’re using right now probably sucks and dies all the time anyway; might as well have up-to-date syncing on your contacts & calendar.

Step 6: Managing Contacts with Mailcow & Android[edit | edit source]

6.1 Finding Your New Mailcow Contacts Account in Android[edit | edit source]

  1. Open the Contacts app on your Android phone.
  2. Tap on the menu icon (usually three lines or dots) to open settings.
  3. Go to Settings.
  4. Go to Accounts.
  5. Tap Add Account.
  6. Tap DAVx⁵ address book.
  7. The DAVx⁵ app opens automatically; check the box for your account.
  8. Once in the app, make sure your accounts are all selected & checked.
  9. Return to the Android contacts app.
  10. Go to Settings —> Accounts again.
  11. Do you see the green DAVx⁵ icon & your account from Mailcow there? If so, great!
  12. Go back to Settings in the contacts app.
  13. Set up the default account for new contacts and the contacts to display so that your phone stores your contacts on your new Mailcow server to the account you created, and shows you contacts from your new server.
  14. Make sure this account is checked or toggled on to display its contacts.
  15. MAKE SURE YOU KNOW WHAT CONTACTS YOU ARE VIEWING & WHERE THEY ARE BEING SAVED EARLY ON SO YOU DO NOT SCREW YOURSELF LATER! ON MY SETUP I EXPORT ALL OF MY CONTACTS TO A FILE, IMPORT THEM TO MAILCOW, AND AVOID USING THE PHONE FOR CONTACTS EVER. MORE PLACES YOU STORE CONTACTS = MORE CHANCES YOU SAVE TO THE WRONG PLACE & SCREW YOURSELF LATER!

6.2 Adding a Contact in Mailcow and Verifying on Android[edit | edit source]

  1. In the Mailcow SoGo web interface located at https://192.168.5.3/SOGo/, after logging in, find the option to add a new contact.
  2. Create a test contact with a unique name (e.g., “Test Mailcow Sync”).
  3. Save the new contact.
  4. On your Android phone, open the DAVx⁵ app and hit refresh.
  5. DAVx⁵ syncs with Mailcow every 15 minutes on its own; hitting refresh just saves you the wait.
  6. Yeah, I know, I know, Google & iCloud have push… This is open source. We make sacrifices.
  7. When you add a contact on your PHONE, it will show up on the Mailcow server in SOGo immediately. But the other way around takes up to 15 minutes.
  8. Open your Android Contacts app.
  9. Browse to the address book we just added. Or, do what I suggested above and stop using your device contacts list to begin with!
  10. Search for the unique name you gave the test contact.
  11. The contact should appear in your list, confirming that syncing from Mailcow to Android is working.
  12. Make sure this works both ways. Do not trust it until you test it. The worst thing in the world is losing a contact you thought you added. Ruby Lewis from Cirque Du Soleil could decide she wants to go out with you tomorrow—do you really want to lose her number because you messed up configuring DAVx⁵? I didn’t think so. It’s too easy to mess up this section not to double-check.

Trivia: I quit Avatar Studios in 2008, after working there for a year as an intern, then junior technician in the tech room. I made $7.50/hr. Had I stayed there another 8 years, I wouldn’t have started a business, a YouTube channel, or made more than $15/hr; the other technician who had a master’s degree & was 13 years my senior made $15. However, I would’ve gotten to say “hi” to Ruby Lewis in person. Would it have been worth it to not quit to meet my celebrity crush? Absolutely.

Important Notes:

  • Make sure your OpenVPN connection is active if you’re not on your home network or this will not work. We intentionally set this server up to have no contact with the outside world with regards to contacts & calendar syncing, so your phone must be connected to your home network via VPN for this to work.
  • DAVx⁵ typically syncs every 15 minutes, but you can force an immediate sync in the DAVx⁵ app.
  • If contacts don’t appear immediately, wait a few minutes or try forcing a sync in both DAVx⁵ and your Contacts app.
  • Remember to choose to save contacts in your Mailcow-linked address book to make sure they sync properly.
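If you want to confirm the server side from a computer on your LAN or VPN, CardDAV is just HTTP underneath: a PROPFIND request against your address book URL. This is a sketch only; the DAV path below is a typical SOGo layout and the credentials are placeholders (check DAVx⁵’s account details or SOGo’s documentation for your exact path).

```python
# Sketch: build a CardDAV PROPFIND request for your SOGo address book.
# URL path, username, and password are placeholder examples -- use your own.
import base64
import urllib.request

def carddav_propfind_request(base_url: str, user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a PROPFIND request for a CardDAV collection."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        base_url,
        method="PROPFIND",
        headers={
            "Authorization": f"Basic {token}",
            "Depth": "1",                      # list the collection's entries
            "Content-Type": "application/xml",
        },
    )

req = carddav_propfind_request(
    "https://192.168.5.3/SOGo/dav/user@stevesavers.com/Contacts/",
    "[email protected]",
    "your-password",
)
print(req.get_method(), req.full_url)
# Actually sending it (urllib.request.urlopen(req)) requires the server to be
# reachable, i.e., you are on your home network or connected via OpenVPN.
```

A 207 Multi-Status response back means the server, the account, and the path are all good; an error there tells you the problem is on the server side, not in DAVx⁵.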

6.3 Exporting Contacts from Your Old Address Book[edit | edit source]

  1. In your Android Contacts app, go to Settings.
  2. Look for an “Export” option. The Contacts app differs from phone to phone and from old versions to new.
  3. Choose the account you want to export from (likely your old Google account or phone storage).
  4. Select Export to .vcf file.
  5. Choose a location to save the file, such as your phone’s Downloads folder.
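The exported .vcf file is plain text you can open and inspect before importing. Here’s a minimal sketch of what one entry looks like (vCard 3.0, which most Android exports produce; the name and number are made-up example data), plus a quick sanity check you can run on your export:

```python
# A .vcf export is just text: one BEGIN:VCARD/END:VCARD block per contact.
# The contact below is fabricated example data in vCard 3.0 format.
sample_vcf = """BEGIN:VCARD
VERSION:3.0
FN:Test Mailcow Sync
N:Sync;Test Mailcow;;;
TEL;TYPE=CELL:+1-555-0100
END:VCARD
"""

def count_contacts(vcf_text: str) -> int:
    """Count entries by their BEGIN:VCARD markers."""
    return sum(1 for line in vcf_text.splitlines() if line.strip() == "BEGIN:VCARD")

print(count_contacts(sample_vcf))
```

Counting the entries in your exported file and comparing against what your old account reported is a cheap way to catch a truncated export before you delete anything.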

6.4 Importing Contacts into Your New Mailcow Address Book[edit | edit source]

  1. In the contacts app, tap the three horizontal bars at the top (the same menu you use to reach Settings).
  2. Tap on your Mailcow account.
  3. Confirm that it only has the one contact that we added.
  4. Go back to the three-bar menu we were at before tapping the Mailcow account and tap settings.
  5. Find the “Import” (sometimes “Import contacts”) option.
  6. Select the .vcf file you exported earlier.
  7. Choose “DAVx⁵ personal address book” or your Mailcow-linked address book as the destination.
  8. Confirm the import. This process may take a few minutes depending on the number of contacts.
  9. Once it is done, customize your view by clicking “contacts to display” in your settings. Turn off EVERYTHING besides the DAVx⁵ Mailcow address book.
  10. Go back to the three-bar menu.
  11. Click onto your DAVx⁵ Mailcow address book. Do you see your contacts? It worked. :)

6.5 Verifying Contacts in Mailcow Web Interface[edit | edit source]

  1. On your computer, open a web browser and navigate to your Mailcow server’s address.
  2. Log in with your Mailcow credentials. Go to the webmail app, the SOGo thing.
  3. Look for the “Contacts” or “Address Book” section.
  4. You should see the contacts you just imported listed here. :D

Step 7: Setting Up and Using Your Mailcow Calendar[edit | edit source]

7.1. Configuring Fossify Calendar App with DAVx⁵-synced Mailcow Calendar[edit | edit source]

  1. Open the Fossify Calendar app on your Android phone.
  2. Tap the menu icon (usually three lines or dots) and select Settings.
  3. Check the box next to CalDAV sync.
  4. Tap Manage synced calendars.
  5. You should see a list of available calendars. Find the one associated with your Mailcow account and look for something with a familiar name to what you set up before.
  6. Make sure this calendar is checked or toggled on to display its events.
  7. If you don’t see your Mailcow calendar, go back to the DAVx⁵ app, find your account, and make sure calendar sync is enabled.

7.2 Adding Events in Android Calendar App and Verifying in Mailcow[edit | edit source]

  1. In the Fossify Calendar app, tap the “+” or Add event button.
  2. Enter event details:
    • Title: Give it a unique name (e.g., “Test Android to Mailcow Sync”)
    • Date and time
    • Any other details you want to add

Important: Make sure you select your Mailcow calendar as the destination calendar (not “Store locally only”). THIS IS VERY EASY TO MESS UP. PAY ATTENTION.

  3. Save the event.

  4. Open a web browser and log into your Mailcow web interface.

  5. Navigate to the calendar section.

  6. You should see the event you just created appear in your Mailcow calendar. If it does not, you probably forgot to configure DAVx⁵ properly so that it syncs on local changes immediately. Or you’re not on the VPN. Or you just messed up the configuration; do not pass go & do not collect $200.

7.3 Adding Events in Mailcow and Verifying on Android[edit | edit source]

  1. In your Mailcow web interface, navigate to the Calendar section.
  2. Find the option to add a new event (usually a “+” or New Event button).
  3. Create an event with a unique name (e.g., “Test Mailcow to Android Sync”).
  4. Set the date, time, and any other details.
  5. Save the event.
  6. On your Android phone, open the Fossify Calendar app.
  7. Swipe down or tap refresh.
  8. The new event should appear in your calendar view.
  9. PSYCH!!!!

7.4 Refreshing Calendar Data[edit | edit source]

Refresh button in Calendar app is not real[edit | edit source]

Refreshing directly in the Fossify Calendar app DOES NOT immediately show new events added on the server. For immediate updates:

  1. Open the DAVx⁵ app on your Android phone.
  2. Tap the Refresh, and then the Synchronize Now button.
  3. Tap this to force an immediate sync with your Mailcow server.
  4. After the sync completes in DAVx⁵, open the Fossify Calendar app.
  5. Your calendar should now show the most up-to-date information.

You may wonder why this is, given that the calendar app literally has an option that says “Refresh CalDAV Calendars” which does not refresh your calendar. Welcome to the beautiful world of open-source software! :) I hope you’ll stay awhile. What we lack in functional UI, we make up for in not selling your data to bail bondsmen & bounty hunters. It’s kinda worth it…. kinda.

Why does it work this way?[edit | edit source]

When you tap “Refresh CalDAV Calendars”, what you’re actually doing is asking the calendar app to re-read DAVx⁵’s local copy of your calendars. You’re not telling DAVx⁵ to contact your server and fetch new entries.

Here’s how it works:

  1. Mailcow server → DAVx⁵ (DAVx⁵ polls the server for updates every 15 minutes)
  2. DAVx⁵ → Calendar app (the calendar app reads from DAVx⁵’s local store)

The calendar app will not immediately refresh unless you manually ask it to. And even when you do, it’s just checking DAVx⁵ for updates. It doesn’t ask DAVx⁵ to go and poll your Mailcow server.
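The two-hop flow above can be sketched in a few lines of code. The class names are made up for illustration, but the behavior mirrors what you see on the phone: refreshing the calendar app only re-reads DAVx⁵’s local store, and only DAVx⁵’s own sync talks to the server.

```python
# Toy model of the two-hop sync. All class names are hypothetical.
class MailcowServer:
    def __init__(self):
        self.events = ["dentist"]

class Davx5Store:
    """Local copy on the phone; only sync() talks to the server."""
    def __init__(self, server):
        self.server = server
        self.events = list(server.events)

    def sync(self):                      # what "Refresh/Sync now" in DAVx⁵ does
        self.events = list(self.server.events)

class CalendarApp:
    """Fossify-style app: it can only see DAVx⁵'s local store."""
    def __init__(self, store):
        self.store = store

    def refresh(self):                   # "Refresh CalDAV Calendars"
        return list(self.store.events)   # re-reads the store; never hits the server

server = MailcowServer()
store = Davx5Store(server)
app = CalendarApp(store)

server.events.append("party")            # event added in SOGo on the server
print(app.refresh())                     # still ['dentist']: app refresh doesn't help
store.sync()                             # force sync in the DAVx⁵ app...
print(app.refresh())                     # ...now ['dentist', 'party']
```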

  • Remember that automatic syncs occur every 15 minutes by default.
  • Always make sure you’re adding events to the correct calendar (your Mailcow calendar, not a local one).
  • If you’re away from your home network, make sure your OpenVPN connection is active for the sync to work.
  • If you experience any sync issues, check your internet connection and OpenVPN status, then try a manual refresh in DAVx⁵, NOT the calendar or contacts app first.

To force an immediate sync from the server at any time, you can tap refresh/sync now within the DAVx⁵ app or use a “Sync now” option if available.

THIS IS IMPORTANT: REFRESHING IN THE FOSSIFY CALENDAR APP WE INSTALL WILL NOT REFRESH INSTANTLY.

DAVx⁵ grabs data from our home server. Calendar & contacts apps grab the data from DAVx⁵. When you tap “refresh” in your calendar app, what you’re actually doing is grabbing the latest data from DAVx⁵ on your phone. If DAVx⁵ does not have new data, it doesn’t matter if you just added a calendar event on your server & you tap refresh furiously in the calendar app 50 times. The Fossify calendar will still not see the new event on your server until DAVx⁵ refreshes. Fossify does not have a way to trigger DAVx⁵ to refresh when you tap refresh in the Fossify calendar app.

IF YOU WANT TO REFRESH TO SEE UPDATES IMMEDIATELY IN THE CALENDAR APP, YOU NEED TO HIT REFRESH/SYNC IN THE DAVx⁵ APP, THEN IN THE CALENDAR.

  • I call this an “OPEN SOURCISM” - these are the byproducts of 20+ years of people thinking it’s wrong for developers to get paid for their work. It’s why Google & Apple win; for all their flaws, they understand that developers want to be able to pay their rent & feed their family in exchange for working 10 hours a day to produce software people use. There is only so much a small band of enthusiasts can do in their spare time, given that they need to make money to live indoors & pay for food like the rest of us.
  • If you want this to get better, show that you are willing to pay for software so people put time & effort into fixing all of this.

By following these steps, you’ve now set up DAVx⁵ to securely connect to your mailcow server and configured it to sync your data efficiently. As efficiently as it’ll let you; welcome to the world of self-managed open source servers! :D

Self Managed Email with Mailcow & Postmark[edit | edit source]

Up to this point, we have only set up mailcow for contacts & calendar syncing. This is as far as you should go. Self managed email is not for the faint of heart. If you are a beginner, do not pass go, do not collect $200, and skip on to the next section.

Choosing to do self managed email is like most of my relationship decisions:

  1. Just because you can doesn’t mean you should
  2. It’s messy, complicated, high maintenance.
  3. You’ll regret it later.

That being said, if you wish to continue…

Why do I need SMTP relay?[edit | edit source]

You need an SMTP relay server if you want people to actually see your email. No man is an island, and none of your mail is going to go anywhere without an SMTP relay. Gmail, etc., everyone will “lol” at you if they see you sending email from your home email server.

As a society, we have chosen being spam-free over email sovereignty. You’re welcome to try running an email server on your residential internet account, but your mail is not going to get anywhere.

I’m not suggesting your email will end up in spam. It will be rejected by the server before its spam filter even sees it.

99% of the time that a major email server receives mail from a server on a residential internet connection, it’s from someone who got hacked & is now unknowingly spamming half of the internet. We traded freedom to be rid of spam.

Whether or not you think this is fair is irrelevant; it’s how the world is. If you want your email to make it to most of your intended recipients, you need an SMTP relay.

SMTP relay sends your mail through postmark’s trusted server. Using postmark, icloud/gmail will let your mail through, rather than assume some schmuck running windows xp service pack 1 with his banking password post-it-noted to his monitor is part of a spam botnet.

Think of it like doing business in NYC. You are paying a troll toll for the ability to send email. But Postmark are nice people, so you’ll enjoy it. I hope they don’t cancel my services on account of me comparing them to New York City government. I’m sorry, postmark; that was uncalled for. :’(

Step 1: Setting Up Postmark as an SMTP Relay[edit | edit source]

1.1 Create a Postmark Account[edit | edit source]

  • Go to: postmarkapp.com
  • Sign up: Click on the Start free trial button at the top right-hand corner of the page.
    • This is a paid service and you are going to pay, one way or another. If you don’t want to deal with forgetting you signed up for a trial, you can use privacy.com to create a temporary credit card that is authorized for $50, then delete it the second you put it into Postmark. But if you choose to go the self-hosted email route, you will be paying; keep that in mind.
  • Complete the registration: Enter the required details (email, password, etc.) and confirm your account through email verification.

Talk to Postmark; they need to know you are not a spammer.

  • Postmark isn’t going to let you send email using their servers without taking them to dinner first. You need to get to know them & they need to get to know you. They don’t let just ANYONE use their servers.
  • This will take a day, or a few days, for them to verify that you are not a known spammer/scammer. This might require gentle nudging customer service if they do not get back to you quickly, but they usually do because Postmark is staffed by awesome people.
  • They may ask for info about you. This is normal; no reputable SMTP relay wants to be responsible for helping deliver spam!

This may seem inconvenient, but it’s for the greater good of a spam free internet. If you don’t like that this is a thing, make sure to berate (verbally, of course) the next spammer you encounter. These people never refer to themselves by their proper name; they’re not “spammers,” they’re “email marketers.”

If you check two of these three boxes, you are very likely a spammer, and have contributed to the amount of annoyance, aggravation, & irritation that good people experience:

Are you responsible for sending me email that:

  1. utilizes templates
  2. includes in-line images
  3. has an “UNSUBSCRIBE” button

If you are, gargle my balls.

1.2 Create a New Server[edit | edit source]

  1. Navigate to the Servers page:
  2. Create a new server:
    • Click on the Create Server button on the “Servers” page.
    • Name your server: Enter a name for your pretty new SMTP relay server.
    • Click Save to create the server.

1.3 Configure Message Streams[edit | edit source]

  1. Navigate to the server you just set up by clicking on its name.
  2. Choose the Default Transactional stream from the three message streams it shows you.

Note: Transactional is for low-volume messages meant to be delivered fast to an individual user; Broadcast is for messages sent out to lots of users (aka spam) that are not time sensitive.

1.4. Get SMTP Relay Credentials[edit | edit source]

  1. Navigate to the Setup Instructions page after clicking onto your message stream.
  2. If you forgot how to do this, you click Servers → Default Transactional stream → Setup Instructions.
  3. After configuring the outbound stream, go to the Setup Instructions page for the Transactional Outbound Stream.
  4. You will be overwhelmed with options under Pick the library or integration – no need to fear, we are picking SMTP.

SMTP details:

  • Server: smtp.postmarkapp.com

  • Ports: 25, 2525, or 587. We will be using 587 with STARTTLS. You do not need to pick or configure anything here; this page just shows the credentials you will put into Mailcow later. Save them securely. Pretend this is your bank password & treat it accordingly.

  • Authentication: Postmark supports Plain Text, CRAM-MD5, or TLS.

  • Username: This is your Postmark server token. It will look like a long string of characters (e.g., 1788dd83-9917-46e1-b90a-3b9a89c10bd7).

  • Password: The same value as the username (Postmark uses the server token as both the username and password).

  • Note: As I go throughout this video, I will be using MY credentials as an example. THESE WILL NOT BE THE SAME AS YOURS. USE YOUR OWN CREDENTIALS.
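To make the token-as-username-and-password point concrete, here is a sketch of what any SMTP client (Mailcow included) does when it relays through Postmark: connect on port 587, STARTTLS, then log in with the server token as BOTH username and password. The addresses are placeholders and the token is the same example string used above, not a real credential; the send call is left commented out.

```python
# Sketch of relaying through Postmark with Python's standard smtplib.
# Token and addresses are placeholder examples -- substitute your own.
import smtplib
from email.message import EmailMessage

POSTMARK_HOST = "smtp.postmarkapp.com"
POSTMARK_PORT = 587
SERVER_TOKEN = "1788dd83-9917-46e1-b90a-3b9a89c10bd7"  # use YOUR token

msg = EmailMessage()
msg["From"] = "[email protected]"         # a mailbox on your domain
msg["To"] = "[email protected]"     # placeholder recipient
msg["Subject"] = "Relay test"
msg.set_content("If you can read this, the relay works.")

def send_via_postmark(message: EmailMessage, token: str) -> None:
    with smtplib.SMTP(POSTMARK_HOST, POSTMARK_PORT) as smtp:
        smtp.starttls()                  # upgrade to TLS before authenticating
        smtp.login(token, token)         # the server token is username AND password
        smtp.send_message(message)

# send_via_postmark(msg, SERVER_TOKEN)   # uncomment once you have a real token
```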

Step 2: Configuring Mailcow to use Postmark as SMTP relay[edit | edit source]

2.1. Access Mailcow Admin Interface[edit | edit source]

  1. Login to Mailcow:
    • Navigate to your Mailcow instance by going to the admin interface URL (e.g., https://192.168.5.3/admin or https://mailserver.home.arpa/admin).
    • Use your administrator credentials to log in.

2.2. Find SMTP relay section[edit | edit source]

  1. From the main Mailcow admin dashboard, click System at the top and then click Configuration.
  2. Click onto the routing tab.
  3. Note the “add sender-dependent transport” section. This is where we will be placing our Postmark credentials.

2.3 Enter Postmark SMTP Details[edit | edit source]

  1. Use the credentials provided by Postmark in the prior step.
    • SMTP Server: Set the SMTP server to Postmark’s SMTP, which at the time of writing for me was smtp.postmarkapp.com:587.
    • Ports: If Postmark is still using port 587 for TLS and offering it at the time of this writing, use port 587.
    • Username & Password: Enter your Postmark server token (the token provided by Postmark when you created your server). This token serves as both the username and password. This is what you see on the servers —> default transactional stream —> setup instructions —> SMTP page under “Authenticate with a server token and specify stream with a header”
    • Example:
      • Username: 1788dd83-9917-46e1-b90a-3b9a89c10bd7 (replace with your actual token).
      • Password: Same as the username (server token).
  2. Click Add.

Step 3: Adding a Domain Name & Mailbox to Mailcow[edit | edit source]

3.1. Add a Domain[edit | edit source]

  1. Go to Email → Configuration on the top menu.
  2. Go to the Domains tab.
  3. In the Domains tab, click Add domain
  4. Enter your domain name (in my case, stevesavers.com).
  5. Set any desired options (quota, aliases, etc.).
  6. Make sure DKIM key length is at least 2048.
  7. Click Add domain and restart SOGo.

3.2 Set Postmark as the Relay[edit | edit source]

IF YOU DO NOT DO THIS, NONE OF YOUR EMAIL WILL SEND!

  • Click Edit on the domain name you just created.
  • Now you will see a NEW option: sender-dependent transports.
  • In the domain settings, find the option labeled sender-dependent transports and select the newly created Postmark relay (e.g., smtp:postmarkapp.com). Set this to the Postmark SMTP relay server you set up in the prior step. Sometimes this is already checked for you, but it is safe to inspect what you expect so you don’t get screwed!

3.3. Add an Email Account[edit | edit source]

  • Go to Email → Configuration → Mailboxes.
  • In the Mailboxes tab, click Add mailbox.
  • Enter the username (the part before @ in the email address).
  • Choose the domain name (e.g., [email protected]).
  • Set a password for the mailbox.
  • Configure any additional options as you want.
  • Click Add mailbox.

3.4 Save Changes and Apply[edit | edit source]

  • After choosing the smtp.postmarkapp.com:587 SMTP relay, click Save Changes to apply the settings.

3.5 Accessing SoGo Webmail/calendar/contacts[edit | edit source]

  1. Go to https://mailserver.home.arpa/SOGo, or in this case, https://192.168.5.3/SOGo.
  2. Log in with your credentials.
  3. Click on Apps in the top right corner.

Note: When logging in, make sure to use your full email address. This is necessary because Mailcow supports multiple domains, so the full email address is required to identify the correct account.

You can also set up your email client or mobile device using the configuration details provided in the Mailcow interface.

Step 4: Setting up DNS Records in your domain registrar[edit | edit source]

Introduction to domain registrars[edit | edit source]

What is a domain registrar?[edit | edit source]

This is who you buy your website name from. If you don’t know what this is… for the love of god skip the self-hosted email section.

Namecheap.com as an example[edit | edit source]

Namecheap is a cheap & easy way to register a domain name. I will use them as an example. Their interface for DNS configuration is similar to 99% of the available providers out there.

If you have any trouble setting up these records, contact the support staff of your domain name provider who will happily provide you tech support commensurate with the fifteen dollars per year you pay them. No really, you’re on your own here… do you really want to do this??

I would love to show you how to do this on every provider, but at this time this manual is 605 pages, the video is 12+ hours, and I would like to return to my life. You will be able to find similar settings, menus, and fields in your DNS registrar if your provider isn’t horrible.

Configuring DNS records in Namecheap[edit | edit source]

4.1. Find the DKIM thing for your domain[edit | edit source]

  1. Go to Email → Configuration on the top menu.
  2. Go to the Domains tab.
  3. In the Domains tab, click edit on the domain you created (in my case, stevesavers.com).
  4. Scroll down to the DKIM section. Keep this tab open for now; we will come back to it later.
  5. We’re not changing anything here, so there’s no need to save changes or make any changes. We just want that DKIM thing.

4.2 Configure DNS records in Namecheap[edit | edit source]

  1. Log into your Namecheap.com account.
  2. Go to Domain List and click Manage next to your domain.
  3. Navigate to the Advanced DNS tab.
  4. Here are the DNS records I added: you will fill them according to your specific setup.
CNAME Record[edit | edit source]

  • Host: pm-bounces (Keep this exactly the same)
  • Value: pm.mtasv.net. (Keep this exactly the same)
  • TTL: Automatic (Keep this the same unless your DNS provider requires a different TTL setting)

This CNAME record is used by Postmark for handling email bounces. When an email bounces, it will be sent to pm-bounces.[yourdomain], which forwards the bounce to Postmark’s servers. No changes are needed unless you are using a different bounce-handling service.
DMARC Record (TXT)[edit | edit source]

  • Host: _dmarc (Keep this exactly the same)
  • Value: v=DMARC1; p=none; rua=mailto:[email protected] (Change only the email address after rua=mailto: to your own)

Here’s what stays the same and what changes:

  • v=DMARC1: (Keep this exactly the same)
  • p=none: (Keep this exactly the same for monitoring; change to p=quarantine or p=reject once you’re ready to enforce DMARC)
  • rua=mailto:[email protected]: Change stevesavers.com to your own domain and use an email where you want to receive DMARC reports.

This DMARC record helps protect your domain from email spoofing. For now, it’s in monitoring mode, so keep p=none if you want to monitor. If you’re ready to enforce policy, change p=none to p=quarantine or p=reject.
Postmark DKIM Record (TXT)[edit | edit source]

This you are going to get by doing as follows:

  1. Go to postmarkapp.com and log in.
  2. Go to your domain interface, go to Sender Signatures, click Add Domain or Signature, then Add Sender Signature.
  3. Once you’re done it’ll present you with a DKIM record and a return path. I’ll show you what we’re doing with these below & in the attached pictures:

Note: When adding your domain, choose to send from any email address on the domain, not just a single one.

  • Host: 20241012215824pm._domainkey (Postmark generates this value, so keep it exactly as provided by Postmark)
  • Value: k=rsa; p=MIGfMA0GCSq... (Replace the long key string after p= with the public key provided by Postmark)

IMPORTANT: The Host (20241012215824pm._domainkey) and k=rsa are specific to Postmark and should stay the same. You need to copy and paste this key exactly as Postmark provides it FROM POSTMARK, NOT FROM THIS GUIDE!

DKIM Record for Your Domain (TXT)[edit | edit source]

  1. Log into Mailcow’s administration interface.
  2. Go to Email → Configuration on the top menu.
  3. Go to the Domains tab.
  4. In the Domains tab, click edit on the domain you created (in my case, stevesavers.com).
  5. Scroll down to the DKIM section.
  6. Insert the record as follows:
    • Host: dkim._domainkey (Keep this exactly the same unless your email provider tells you to use a different prefix)
    • Value: v=DKIM1; k=rsa; t=s; s=email; p=MIIBIjANB... (Replace the key after p= with the one shown in Mailcow’s DKIM section)

The Host should be dkim._domainkey unless your email provider asks for a different format. For the Value, keep v=DKIM1; k=rsa; t=s; s=email exactly the same. The part you need to change is the long public key string after p=, which will be provided by your email provider or mail server (like Mailcow). Copy and paste it carefully.

SPF Record (TXT)[edit | edit source]

  • Host: @ (Keep this exactly the same)
  • Value: v=spf1 mx a include:spf.mtasv.net ~all (Enter this as it is; change the include value if using a different SMTP service than Postmark, or if Postmark changes this in the future)

Here’s what stays the same and what you need to change:

  • Host: Always use @ for your main domain.
  • Value:
    • v=spf1 mx a: Keep this exactly the same; it tells servers to check your MX and A records.
    • include:spf.mtasv.net: You will need to change this if you’re using a different mail service than Postmark. Replace spf.mtasv.net with the SPF record provided by your SMTP service (e.g., a different relay like SendGrid or Amazon SES will give you a different include value).
    • ~all: Keep this the same unless you want stricter enforcement. You can replace ~all with -all for stricter failure rules.

Mail CNAME Record[edit | edit source]

  • Host: mail (Keep this exactly the same)
  • Value: louishomeserver.chickenkiller.com. (Change this to the domain or subdomain that hosts your mail server; this is what you set when you created a dynamic DNS domain at freedns!)

The Host mail stays the same. What you will change is the value after Value:, which should point to the domain or subdomain that hosts your mail server. Replace louishomeserver.chickenkiller.com with your actual mail server’s domain or subdomain.

Email Client Configuration CNAME Records[edit | edit source]

  • Host: autoconfig (Keep this exactly the same)
  • Value: mail.stevesavers.com. (Change this to the domain of your mail server)
  • Host: autodiscover (Keep this exactly the same)
  • Value: mail.stevesavers.com. (Change this to the domain of your mail server)

Both Host fields (autoconfig and autodiscover) stay the same, as they are used for automatic email client configuration. You will change the Value to point to your mail server’s domain or subdomain (in this case, mail.stevesavers.com). Replace this with your own mail server domain.

MX Record[edit | edit source]

  • Host: @ (Keep this exactly the same)
  • Value: mail.stevesavers.com. (Change this to the domain of your mail server)
  • TTL: Automatic (Keep this the same unless your DNS provider requires a specific TTL)

The Host @ stays the same to apply to your root domain. What you need to change is the value after Value:, which should point to the domain that handles incoming mail for your domain. Replace mail.stevesavers.com with your own mail server domain.

These DNS records set up email services for your domain. For the third time, here’s what stays the same and what needs changing:

  • SPF, DKIM, and DMARC: Most parts of these records remain the same, but you’ll need to customize the DKIM public keys and the domain-specific parts (like email addresses for DMARC reports or SPF includes).
  • MX and CNAME records: The basic structure stays the same, but you’ll need to update the domain values to point to your own mail server.

By carefully adjusting the fields noted for customization, you can make sure the DNS setup matches your unique mail and web infrastructure.
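To recap, here is the whole record set from this section in zone-file style (Namecheap’s UI asks for the same Host/Value pairs, just in separate fields). The domain, hostnames, and truncated keys are this guide’s examples; substitute your own. The MX priority of 10 is an assumed typical default, since Namecheap asks for the priority in its own field.

```text
; Summary of the records above. All values are this guide's examples.
@                            MX    10  mail.stevesavers.com.
@                            TXT   "v=spf1 mx a include:spf.mtasv.net ~all"
_dmarc                       TXT   "v=DMARC1; p=none; rua=mailto:[email protected]"
dkim._domainkey              TXT   "v=DKIM1; k=rsa; t=s; s=email; p=MIIBIjANB..."  ; key from Mailcow
20241012215824pm._domainkey  TXT   "k=rsa; p=MIGfMA0GCSq..."                       ; key from Postmark
pm-bounces                   CNAME pm.mtasv.net.
mail                         CNAME louishomeserver.chickenkiller.com.
autoconfig                   CNAME mail.stevesavers.com.
autodiscover                 CNAME mail.stevesavers.com.
```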

4.3 Go back to Postmark & verify your DNS records[edit | edit source]

  1. Go to postmarkapp.com and log in.
  2. Go to your domain interface, go to Sender Signatures.
  3. Click onto the ones you just created.
  4. Click VERIFY next to both DKIM and Return Path.
  5. If it doesn’t work yet, no big deal; DNS changes can take time to propagate.

Step 5: pfSense firewall introduction[edit | edit source]

So you have a basic idea of how to use pfSense as a basic router, but we haven’t dealt with port forwarding or messing with the firewall yet. Let’s get into that.

Before we move on to making the necessary firewall rules to allow us to receive email, let’s discuss aliases.

What makes firewall rules easy to manage are aliases.

Lesson 1: Aliases in pfSense[edit | edit source]

What are Aliases in pfSense?[edit | edit source]

Aliases in pfSense are placeholders that can represent:

  • IP addresses
  • Networks
  • Ports
  • URLs

For example, instead of having to make a separate NAT & firewall rule to open port 993 for 8.8.8.8, 9.9.9.9, 10.10.10.10, etc., I can create an alias for those three IPs and make ONE firewall rule, entering the alias into the field where I would usually put an IP.

The cool part about this is if I ever want to add or remove one of those IPs, I don’t have to change, delete, or add firewall rules. I just change my alias.

Practical example:[edit | edit source]

If you’re using a service like Freshdesk (CRM system):

  • Freshdesk needs to connect to your mail server
  • You don’t want to give Freshdesk VPN access
  • Freshdesk doesn’t have VPN access anyway

Here’s how you can handle this situation:

  • Add their IPs to your alias
  • Only those IPs will see your mail server
  • Everyone else gets blocked before even seeing the service

Using aliases this way means:

  • Your mail server is invisible to random internet traffic
  • Only trusted IPs can even attempt connection
  • Much more secure than opening ports to everyone

IMPORTANT: While port 25 needs to be open to the world for receiving email, other mail-related ports (587, 993, etc.) should only be open to trusted IPs or VPN users.

Let’s say I am making firewall rules to allow Freshdesk customer service software to access my email system; Freshdesk uses a whole list of IP addresses. Can you imagine having to add each of those IPs as its own separate rule, or having to update them all each time Freshdesk’s IPs changed? That would be a nightmare!

Aliases allow us to add all of these IP addresses to a single thing called “freshdesk IP addresses” – then, all we have to do is make a firewall rule with “freshdesk IP addresses” as the source or destination, rather than a bunch of rules for each individual IP.

    Benefits of Using Aliases[edit | edit source]
1. Simplification: Instead of entering “192.168.5.3” into a firewall rule, I can just enter “mailserver”, once I have set up a “mailserver” alias that points to that IP.
2. I can add to it! Let’s say I have 1 smart television in my house. I want to block it from going onto the internet to anything besides a single Netflix IP address, so I add a firewall rule doing exactly that. Then let’s say my family buys 3 more smart TVs… I don’t want to set up a new set of firewall rules each time. Aliases allow me to add multiple IP addresses to a single alias! Instead of making a new set of rules for each TV, I can keep my existing firewall rules as they are, and simply add the new IP addresses to the alias.
    3. Maintainability: When you need to update multiple firewall rules, you can just update the alias instead of each individual rule.
    4. Readability: Aliases make firewall rules more understandable by using descriptive names instead of raw IP addresses or port numbers.
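To make the “one list, many rules” idea concrete, here is a tiny shell sketch of what an alias is doing conceptually. This is NOT how pfSense stores aliases internally – it’s just an illustration, with made-up IPs:

```shell
#!/bin/sh
# Conceptual sketch only: an "alias" is just a named list of IPs that
# many rules can reference. The IPs here are made up for illustration.
ALIAS_FILE=$(mktemp)
printf '%s\n' 8.8.8.8 9.9.9.9 10.10.10.10 > "$ALIAS_FILE"

# One "rule" that checks the alias, instead of three separate rules:
is_trusted() {
    grep -qxF "$1" "$ALIAS_FILE"
}

is_trusted 9.9.9.9 && echo "9.9.9.9: pass"
is_trusted 1.2.3.4 || echo "1.2.3.4: block"

# Adding a new client means editing the list, never the rule itself:
echo 11.11.11.11 >> "$ALIAS_FILE"
is_trusted 11.11.11.11 && echo "11.11.11.11: pass"
rm -f "$ALIAS_FILE"
```

The rule (the function) never changes; only the list does. That is exactly the maintainability win described above.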

    WTF? OpenVPN was set up so we DON’T open ports; why are we talking about opening ports?[edit | edit source]

If you are accessing your mailserver using OpenVPN (AS YOU SHOULD), most of this doesn’t matter. You will still be opening port 25 to the world so you can receive email, but the rest of these ports are ONLY NECESSARY IF YOU WANT CLIENTS WHO ARE NOT CONNECTED TO YOUR VPN TO BE ABLE TO LOG INTO AN EMAIL ACCOUNT AND READ AND SEND MAIL ON YOUR MAILSERVER!!!

    Plus, the self-hosted phone system is going to require we allow some external IPs belonging to our SIP trunking provider (the thing that lets you receive & send calls to other phones outside your house) to access our server anyway, so you might as well learn about aliases now.

    How to Set Up Aliases in pfSense[edit | edit source]

    5.1.1 Accessing the Aliases Page[edit | edit source]
    1. Log into the pfSense web interface.
    2. Navigate to Firewall > Aliases.
    3. Click Add

    5.1.2 Creating an Alias[edit | edit source]
    1. In the Name field, enter a descriptive name for your alias (e.g., “WebServers” or “BlockedIPs”).
    2. Select the Type of alias you want to create:
      • Host: For single IP addresses
      • Network: For subnets
      • Port: For port numbers
      • URL: For lists of IPs or networks from a URL
    3. In the Description field, enter a brief explanation of the alias’s purpose. Here, I would enter mailserver.
    4. In the Content box, enter the values for your alias:
  • For IP aliases: Enter IP addresses, one per line, such as our mailserver at 192.168.5.3.

    5.1.3 Using Aliases in Firewall Rules[edit | edit source]
    1. Go to Firewall > NAT.
    2. Add a new rule or edit an existing one.
    3. In the source or destination fields, you can now select your alias from the drop-down menu.
    4. For port fields, you can select port aliases.

    Example rule using aliases:

    • Action: Pass
    • Interface: WAN
    • Source: Any
    • Destination: WebServers (alias)
    • Destination Port: WebPorts (alias)

    This rule allows incoming traffic to the IP addresses defined in the WebServers alias on the ports defined in the WebPorts alias.

    Using Aliases for Secure Access[edit | edit source]

If you want external access to your mail server without requiring VPN, you’ll need to set up aliases for trusted IPs. The alternative – opening your server to the entire world – is a poor idea.

    Lesson 2: Setting Up pfSense Firewall Rules for a Mail Server[edit | edit source]

    Understanding NAT vs. Firewall Rules[edit | edit source]

    Let’s understand the two types of rules you need to set up in pfSense:

    NAT (Network Address Translation)[edit | edit source]

    NAT determines where traffic goes. Here’s why it matters:

    • Your network has one public IP that the world sees
    • But you might have 200+ computers internally
    • When someone sends you an email, NAT tells the router “traffic on port 25 goes to the mail server, port 80 goes to the web server” etc.

    Think of NAT like a restaurant host - they decide which table gets which customers.

    Firewall Rules[edit | edit source]

    Firewall rules determine if traffic is allowed to pass. After NAT directs traffic to a computer, firewall rules decide if it gets through.

    Think of firewall rules like the bouncer - they decide if you get in at all.

    Practical Application[edit | edit source]

NAT port forward is when the router sees an email coming in on port 25 to my Spectrum internet address, and sends that email to our mail server on port 25.

    Once NAT has sent that email to my mailserver on port 25, the firewall rule is what allows that traffic to access port 25 on our mailserver.
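Under the hood, pfSense generates rules for FreeBSD’s pf packet filter. You never have to write these by hand, but seeing the two halves side by side makes the host/bouncer split obvious. A rough sketch of what the web UI produces (the interface name em0 is an assumption):

```text
# NAT (the restaurant host): redirect inbound port 25 on the WAN to the mail server
rdr on em0 proto tcp from any to (em0) port 25 -> 192.168.5.3 port 25

# Firewall rule (the bouncer): actually allow the redirected traffic through
pass in on em0 proto tcp from any to 192.168.5.3 port 25
```

The `rdr` line only rewrites the destination; without the matching `pass` line, the redirected traffic is still dropped. That is why pfSense offers the “Add associated filter rule” checkbox.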

    Setting Up Mail Server Port Forwarding so you Receive emails:[edit | edit source]

    A “mail client” is a program you use to read & send your email from the mail server (the mailcow machine we are setting up). Examples are k9 mail, Microsoft Outlook, Mozilla Thunderbird, etc., or just using the web interface.

    If you are going to use the mail server while connected to the VPN, THIS IS THE ONLY RULE YOU NEED TO ADD! This is for receiving email. This port must be opened to the public.

    Create NAT Rule[edit | edit source]
    1. Access pfSense at https://192.168.5.1
    2. Go to Firewall → NAT
    3. Under the Port Forward tab, click Add
    4. Configure the following:
      • Interface: WAN (incoming traffic)
      • Protocol: TCP
      • Source: Any (you can’t predict which mail servers will email you)
      • Destination: WAN address
      • Destination Port Range: 25
      • Redirect Target IP: Your mail server IP (here in our example it’s 192.168.5.3)
      • Redirect Target Port: 25
      • Description: “Receive Emails”
    5. Important: Check “Add associated filter rule”
    6. Click Save
    7. Click Apply Changes

    Critical Note: Port 25 MUST be open or you’ll never receive email. This is non-negotiable for a mail server.

NOTE: When setting up port forwarding for a mail server, make sure your ISP isn’t blocking port 25 to stop spam. Yours might; it’s not unheard of with residential internet providers. You are paying for a residential connection, not a business one, and they’ll remind you of it every way they can (actually, they’ll do that even when you pay $409.99/mo for the business one).

    Step 6: Add pfSense Firewall Rules (for real)[edit | edit source]

You don’t need to add ALL of the rules below. If you are okay with having to be connected to your VPN, or on your local network, to receive & send email, the only rule you need is rule #1 – the one you just created – so you can receive mail.

    If you want to allow IP addresses that are NOT connecting to your server via VPN into your mail server, you would create an alias with those IPs using the steps in Lesson 1 above, and then use that alias (called mailserver_trusted_clients in this case) for everything.

    One instance would be if you use a service like Freshdesk for customer service & opt to use your own mail server. In this case, you would have to allow their IP addresses to access your server so that Freshdesk can read your customer service inbox, and send emails as your customer service email.

    Rule 1: Forwarding SMTP (Port 25) – the ONLY rule you need if you are using OpenVPN to connect to your mailserver![edit | edit source]

    • Protocol: IPv4 TCP
    • Source: Any
    • Destination: 192.168.5.3
    • Port: 25 (SMTP)
    • Description: NAT Forward Postfix SMTP to Mailcow

    What this rule does:

    • This rule forwards unsecured SMTP traffic on port 25 to the Mailcow server at 192.168.5.3.
    • SMTP on port 25 is traditionally used for sending emails between email servers. However, it’s not encrypted by default, meaning the data can be sent in plain text.
    • Why this is ALWAYS needed: Although not as secure as SMTPS, port 25 is required for email delivery between servers on the internet. When your Mailcow server sends or receives emails from other email servers, it typically uses SMTP on port 25. This rule makes sure that your Mailcow server can communicate with other email servers to handle incoming and outgoing email traffic. Keeping port 25 closed means saying goodbye to receiving email. If you’re like me, this might be step 1 to solving a lot of life’s problems…

    Rule 2: Forwarding SMTPS (Port 465)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 465 (SMTP/S)
    • Description: NAT Forward Postfix SMTPS to Mailcow

What this rule does:

• This rule allows secure SMTP (SMTPS) traffic on port 465 from the clients defined in the mailserver_trusted_clients alias to be forwarded to the Mailcow server running on 192.168.5.3. For instance, if you are integrating self-hosted email with a service like Freshdesk, you would open this port so their app can send emails using your server. However, you would not want to open it to the entire world – just to the clients you choose. In the case of Freshdesk, you might make a mailserver_trusted_clients alias with all of Freshdesk’s IP addresses so they make it through on port 465, but nobody else does.
• SMTP (Simple Mail Transfer Protocol) is the protocol used for sending emails. The S at the end of SMTPS indicates that this is a secure version of SMTP, meaning the communication is encrypted using SSL/TLS.
• When this is needed: This rule allows email clients that are NOT connected to your server via VPN to send emails using encryption. If this port is closed, they will not be able to connect to your mail server to send mail.
• When this is NOT needed: This rule is unnecessary if you only send mail while connected to your mailserver via VPN or locally on your home network, and you do not have external services such as Freshdesk integrated with your mailserver.

    Rule 3: Forwarding Submission (Port 587)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 587 (SUBMISSION)
    • Description: NAT Forward Postfix Submission to Mailcow

What this rule does:

• This rule forwards traffic on port 587 to your Mailcow server at 192.168.5.3.

    • Port 587 is used for email submission by clients (i.e., when you’re sending an email through an email client like Outlook or Thunderbird). This port requires authentication and typically uses STARTTLS to secure the connection.
    • Why this is needed: Unlike port 25 (which is often used for server-to-server email transmission), port 587 is specifically used for sending emails from a client to the server. When you configure an email client to send messages, you often use port 587 with authentication. This rule makes sure that clients (in this case, the trusted clients defined in mailserver_trusted_clients) can securely submit their emails for sending through Mailcow.
    • When this is NOT needed: This rule is unnecessary if you only send mail while connected to your mailserver via VPN or locally on your home network, and you do not have external services such as Freshdesk integrated with your mailserver.

    Rule 4: Forwarding IMAP (Port 143)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 143 (IMAP)
    • Description: NAT Forward Dovecot IMAP to Mailcow

    What this rule does:[edit | edit source]

    • This rule forwards IMAP traffic on port 143 to the Mailcow server at 192.168.5.3.
    • IMAP (Internet Message Access Protocol) is used by email clients to retrieve emails from the mail server. IMAP allows users to keep their emails on the server and access them from multiple devices.
    • Why this is needed: This rule allows clients to access their emails using the non-encrypted version of IMAP on port 143. It allows clients to view and manage their emails stored on the server without downloading them to their devices.
    • When this is NOT needed: This rule is unnecessary if you only read mail while connected to your mailserver via VPN or locally on your home network, and you do not have external services such as Freshdesk integrated with your mailserver.

    Rule 5: Forwarding IMAPS (Port 993)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 993 (IMAP/S)
    • Description: NAT Forward Dovecot IMAPS to Mailcow

    What this rule does:[edit | edit source]

    • This rule forwards secure IMAP traffic (IMAPS) on port 993 to the Mailcow server.
    • IMAPS is the encrypted version of IMAP. It uses SSL/TLS to secure communication between the email client and the server.
    • Why this is needed: This rule allows users to securely access their emails stored on the server using IMAP. This is the preferred method for most modern email clients, as it encrypts the communication, making sure that sensitive information like email contents and credentials are protected while being retrieved by the client.
    • When this is NOT needed: This rule is unnecessary if you only read mail while connected to your mailserver via VPN or locally on your home network, and you do not have external services such as Freshdesk integrated with your mailserver.

    Rule 6: Forwarding POP3 (Port 110)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 110 (POP3)
    • Description: NAT Forward Dovecot POP3 to Mailcow

    What this rule does:[edit | edit source]

    • This rule forwards POP3 traffic on port 110 to the Mailcow server.
    • POP3 (Post Office Protocol version 3) is another protocol used to retrieve emails from the server. Unlike IMAP, POP3 typically downloads emails to the local device and removes them from the server.
    • Why this is needed: This rule allows clients to retrieve emails using POP3. Some users or legacy email clients may prefer to use POP3 if they want to download and store emails locally rather than keeping them on the server.
    • When this is NOT needed: This rule is unnecessary if you only retrieve mail while connected to your mailserver via VPN or locally on your home network. Also, why are you even thinking of using POP3? Don’t do this.

    Rule 7: Forwarding POP3S (Port 995)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 995 (POP3/S)
    • Description: NAT Forward Dovecot POP3S to Mailcow

    What this rule does:[edit | edit source]

    • This rule forwards secure POP3 (POP3S) traffic on port 995 to the Mailcow server.
    • POP3S is the encrypted version of POP3, using SSL/TLS for secure communication.
    • Why this is needed: This rule enables users to securely retrieve their emails using POP3S. This is preferred over regular POP3 because it makes sure that the email contents and credentials are transmitted securely.
    • When this is NOT needed: This rule is unnecessary if you only retrieve mail while connected to your mailserver via VPN or locally on your home network. Also, why are you even thinking of using POP3? Don’t do this. Use IMAP; POP3 in 2024 is pure insanity.

    Rule 8: Forwarding ManageSieve (Port 4190)[edit | edit source]

    • Protocol: IPv4 TCP
    • Source: mailserver_trusted_clients
    • Destination: 192.168.5.3
    • Port: 4190
    • Description: NAT Forward Dovecot ManageSieve to Mailcow

    What this rule does:[edit | edit source]

    • This rule forwards ManageSieve traffic on port 4190 to the Mailcow server.
    • ManageSieve is a protocol used to manage server-side email filtering rules (such as automated sorting of emails into folders, marking emails as spam, etc.). This is done on the server side rather than through a client-side rule.
    • Why this is needed: This rule allows trusted clients to create and manage email filtering rules on the server. For example, users can create rules to automatically move incoming emails from a certain sender into a specific folder. It’s useful for managing email organization and automating tasks at the server level. I don’t bother with this, but you can if you want to.

    TL;DR of self-hosted email firewall rules:[edit | edit source]

    Using OpenVPN to connect to your mailserver?[edit | edit source]

    Port 25 is all you have to open to the public so you receive mail from other servers.

    Need clients outside LAN that don’t have VPN access to connect to your mailserver?[edit | edit source]

    Then you gotta make an alias with their IPs & make all of the rules I provided above.

Let’s say you want ANY IP from ANYWHERE IN THE WORLD to be able to connect to your mailserver (a horrible idea): instead of an alias, you’d specify “any” in the “source” section.

    This is a bad idea, IMO, on par with the bad idea of being a newbie & doing self-hosted mail.

What you should do: Just stick to using a VPN to access your inbox – install OpenVPN & K9 Mail on your Android phone and be done with it. Connecting to your VPN on a laptop is also very easy (one click, or one command in the terminal), and you should be doing that anyway so you can access all of your other services.

    Port 25 (SMTP)[edit | edit source]

    • Why it is open to everyone: Port 25 is used for server-to-server email transmission, which means email servers from around the world need to be able to reach your Mailcow server to deliver incoming mail. Since this is a very important function for your mail server, it makes sense to allow traffic on port 25 from any source.
    • Security concerns: Since port 25 is open to the world, it can be targeted by spammers or malicious actors trying to exploit the service. However, this is mitigated by using tools such as fail2ban, rspamd, and strong SMTP authentication policies to detect and block abuse.
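To tie the whole rule set together: in raw pf syntax, what you build in the pfSense UI looks conceptually like the fragment below. A pf “table” is exactly what pfSense calls an alias. This is only a sketch – the interface name (em0) and the example trusted IPs are assumptions, and pfSense generates the real rules for you:

```text
# The alias: pfSense aliases become pf tables
table <mailserver_trusted_clients> { 8.8.8.8, 9.9.9.9 }

# Rule 1: port 25 open to the whole world, or you receive no mail
rdr on em0 proto tcp from any to (em0) port 25 -> 192.168.5.3
pass in on em0 proto tcp from any to 192.168.5.3 port 25

# Rules 2-8: client-facing mail ports, trusted IPs only
rdr on em0 proto tcp from <mailserver_trusted_clients> to (em0) \
    port { 465, 587, 143, 993, 110, 995, 4190 } -> 192.168.5.3
pass in on em0 proto tcp from <mailserver_trusted_clients> \
    to 192.168.5.3 port { 465, 587, 143, 993, 110, 995, 4190 }
```

Notice the asymmetry: one wide-open rule for server-to-server delivery, and one tightly scoped rule for everything clients touch.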

    Step 7: Verify SMTP Relay Setup[edit | edit source]

1. Test Email Delivery: Once the configuration is saved, send a test email to ensure Mailcow is using Postmark to relay emails successfully. I would suggest sending your test email to four addresses:
      • Email to yourself (same email in Mailcow you are sending from).
      • Email to another mailbox on Mailcow.
      • Email to a “friendly” server, i.e., something not hosted by the main mega providers (another person who hosts their own email).
      • A Gmail/iCloud/Microsoft email address.

    Each one tests a portion of the chain.

    • If 1 doesn’t work, you’re hopelessly screwed.
    • If 2 works but not 3, perhaps a network problem.
    • If 1, 2, & 3 work but not 4, you’ve likely screwed up something in the SMTP relay or DNS records process, but the networking configuration and Mailcow setup in general is mostly working. It’s also possible that you did everything right, but Google/Apple/Microsoft still hate you. It’s ok. You can’t hate them back though. As my first studio employer told me, “Louis, you hate nothing, you intensely dislike it!”

    If all 4 work, great! If you get something like this in your email when sending, you made a stupid typo when setting up SMTP relay. Can you find mine?

> This is the mail system at host mail.louishomeserver.chickenkiller.com.
> I'm sorry to have to inform you that your message could not be delivered to one or more recipients. It's attached below. For further assistance, please send mail to postmaster. If you do so, please include this problem report. You can delete your own text from the attached returned message.
> The mail system
> <[email protected]>: Host or domain name not found. Name service error for name=smtp.postmark.com type=A: Host not found
> <[email protected]>: Host or domain name not found. Name service error for name=smtp.postmark.com type=A: Host not found

This concludes the guide on setting up Postmark as an SMTP relay for your Mailcow server, configuring DNS records, and setting up firewall rules. Remember to double-check all your configurations and test thoroughly to make sure everything is working as expected. Or, don’t & give up. The latter is recommended.

    Step 8 – Spam controls[edit | edit source]

    Accessing the Rspamd Interface[edit | edit source]

    To access the Rspamd web interface, you need to be logged in as an administrator on Mailcow. Here’s how you do it:

    1. Go to http://your-mailcow-address/admin
    2. Enter your admin password
    3. Navigate to System > Configuration > Actions > Rspamd
    4. Set your password for Rspamd

    Once you’re in, you can train the system manually and upload things for it to learn from.

    Accessing YOUR inbox’s spam controls[edit | edit source]

    1. Log into the Mailcow interface with your EMAIL USERNAME & PASSWORD, NOT AS ADMIN
    2. Go to Email → Spam Filter
    3. Slide the slidy thingy & have fun :)

    To set the spam controls for your specific account, log in as your USER to the web interface, not an admin.

    pfBlockerNG for spam prevention[edit | edit source]

    Remember when we set up pfBlockerNG in our pfSense router?

    pfBlockerNG has IPv4 blocklists like Lashback that are great for reducing spam from known bad actors, such as people who explicitly send email to addresses that they know are on “unsubscribe” lists. If you use pfBlockerNG with these lists, when servers with IPs on these blocklists try to send you mail on port 25, they will be blocked at the router level before these known bad actors even make their way to your mailcow server or spam filter.

    Take a look at these lists. They are incredibly useful!

    Don’t do this[edit | edit source]

    Warning: Self-hosting email is a high-maintenance, complicated task. Just because you can do it doesn’t mean you should. It’s a decision you might regret later.

    Home Assistant to control your air conditioners & full smarthome control[edit | edit source]

    What is Home Assistant?[edit | edit source]

Home Assistant allows you to control everything from your lights to your air conditioner to your car’s remote start, all within an open-source system that YOU control! It works with plugins developed by open-source devs around the world who are just as frustrated as you are that the smart home future we were promised is chock full of spyware, subscriptions, and enshittification. We’re going to use it to adjust an air conditioner’s temperature – so if we’re going to be home early, we can turn the A/C on a little ahead of time without ever letting it connect to the internet – and to get alerts when someone walks by one of our security cameras.

    Step 1: Installing Home Assistant[edit | edit source]

    1.1 Download the Home Assistant KVM Image and Prepare it for Use[edit | edit source]

    1. Go to the official Home Assistant website.

    2. Find the KVM Image:

      • Scroll down to the section titled “KVM(virt-manager)”.

  • Click the link to download the latest KVM .qcow2.xz image from the official Home Assistant GitHub releases page.

      • (Note: This file version will change over time, so make sure you are downloading the latest release.)

      • MAKE SURE YOU GRAB THE ONE FOR KVM VIRSH VIRTUAL MACHINE MANAGER IN LINUX, NOT THE VIRTUALBOX ONE!

    3. Download and Unzip the Image:

      • Once the download is complete, you’ll need to unzip the .qcow2.xz file. Run the following command to decompress the file:

        xz -d haos_ova-13.1.qcow2.xz
      • (Make sure the filename reflects the version you downloaded, as it may vary.)

    4. Move the Unzipped Image to the Correct Directory:

      • Move the decompressed .qcow2 file to your virtual machine images directory, typically /var/lib/libvirt/images/. Use the following command to move it:

        sudo mv ~/Downloads/haos_ova-13.1.qcow2 /var/lib/libvirt/images/
    5. Set the Correct Ownership and Permissions:

      • Change the ownership of the image file so that it is owned by the libvirt-qemu user and group:

        sudo chown libvirt-qemu:libvirt /var/lib/libvirt/images/haos_ova-13.1.qcow2
    6. Set the right permissions to make sure it is readable and writable by the owner, but not everyone else:

    sudo chmod 0640 /var/lib/libvirt/images/haos_ova-13.1.qcow2
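Before moving on, it’s worth a ten-second sanity check that the ownership and permissions actually took. Assuming the example filename from the steps above:

```shell
# Print owner, group, and octal mode of the image.
# After the chown/chmod above you should see: libvirt-qemu libvirt 640
stat -c '%U %G %a' /var/lib/libvirt/images/haos_ova-13.1.qcow2
```

If the owner or mode is wrong, the VM may fail to start later with an unhelpful permissions error, so it’s cheaper to catch it now.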

    1.2 Install the Home Assistant Virtual Machine on Ubuntu Server Linux[edit | edit source]

    Before, we chose “local install media” when installing Ubuntu Server to our virtual machine for mailcow, but Home Assistant is a little different. It’s an operating system that is all ready to go – it’s installed, configured, etc. We are going to be choosing the “import existing disk image” option to boot it up.

    • Open Virtual Machine Manager:
      • Right-click on the desktop of your Ubuntu Server.
      • Navigate to Applications > System > Virtual Machine Manager.
    • Create a New Virtual Machine:
      • Once Virtual Machine Manager is open, click on Create a new virtual machine.
      • In the wizard that appears, choose the option Import existing disk image.
      • Unlike the past virtual machine where we were installing an operating system from scratch, this is an image of an operating system that has already been “installed” and configured elsewhere; therefore, all we need to do is import it.
    • Select the Home Assistant Image:
      • When prompted to choose an installation source, browse to /var/lib/libvirt/images/ and select the Home Assistant .qcow2 image you moved in the previous step.
    • Choose Operating System Type:
      • Select Generic Linux 2022 as the operating system type. The official Home Assistant instructions suggest using a “generic” Linux option.
    • Set Memory and CPU Allocation:
      • Set the RAM to 2048 MB (2 GB).
      • Assign 2 CPUs to the virtual machine.
      • It is recommended to use 2 CPUs, even though this might feel like overkill for a thermostat-related function. And it does.
    • Name the Virtual Machine:
      • In the same setup window, name the virtual machine homeassistant.
    • Customize Configuration Before Installation:
      • Before clicking Finish, make sure you check the box that says Customize configuration before install.
    • Set Firmware to UEFI: you want UEFI x86-64: /usr/share/OVMF/OVMF_CODE_4M.fd – DO NOT CHOOSE THE ONE THAT SAYS “SECBOOT”

    Set up this virtual machine to start every time the host computer, happycloud, boots by typing this into a terminal:

    virsh autostart homeassistant
    • Check that this is set up properly by typing virsh dominfo homeassistant and seeing if the autostart line is set to enable.
    • If you don’t do this, then someday you’ll reboot your server (for whatever reason), leave the house, and only then realize that none of your services are working. This will suck.

    Step 2: Start and Configure Home Assistant[edit | edit source]

    2.1 Start the Virtual Machine[edit | edit source]

    • In Virtual Machine Manager, locate your Home Assistant virtual machine and start the VM.
    • Wait for the machine to boot up fully.

    2.2 Identify the IP Address[edit | edit source]

    • Once the virtual machine has finished booting, check the console within Virtual Machine Manager. You will see an IP address displayed (e.g., 192.168.5.16).
    • We did not “install” this operating system like we did with the previous mailcow mailserver installation; we imported someone else’s installation. This means we do not know its IP address in advance, nor have we had an opportunity to give Home Assistant a static IP yet. Pay attention here so you see its IP address and know where to find it.
    • When it says the URL is http://homeassistant.local:8123, this is wrong. It is assuming that our “domain” is .local. By default, pfSense sets this to home.arpa.
    • If no IP address is displayed on the console, you can also check your DHCP server or router (like pfSense) to find the IP assigned to the Home Assistant VM.

    2.3 Access Home Assistant Web Interface[edit | edit source]

    • Open a web browser on your local machine.
    • In the address bar, type the following to access the Home Assistant web interface: http://homeassistant.home.arpa:8123 (For example: http://192.168.5.16:8123).
    • At the time of writing this guide, Home Assistant will only load on http:// by default when first started, not https://, if you use their fully-fledged HaOS virtual machine image. Don’t worry, you didn’t break anything.

    2.4 Follow On-Screen Setup Instructions[edit | edit source]

    • It will tell you to wait up to 20 minutes to load.
    • You will be greeted by the Home Assistant setup wizard. Follow the on-screen instructions to complete the setup.
    • Create a Home Assistant Account: Enter a username, password, and any additional information required.
    • Configure Location & Units: Choose your location and preferred units (imperial or metric).
    • Add Devices and Services: Home Assistant will begin searching for devices on your network. Depending on your network configuration, devices may automatically be discovered. This is pretty cool. I like this.
    • You don’t have to “trust” them; it’s open source, so you can see what it is doing while probing. It is not probing to mess with you or spy on you – it’s trying to make your life easier… the thing technology was supposed to do for you.

    2.5 Complete Setup[edit | edit source]

    • Once you’ve created your account and finished the basic configuration, Home Assistant will finalize the installation and setup. You are now ready to take back your air conditioner from the proprietary cloud.

    Step 3: Configure Home Assistant with a Static IP[edit | edit source]

    Home Assistant Network Configuration:[edit | edit source]

    3.1 Access Home Assistant’s Network Settings[edit | edit source]

    • Open the Home Assistant web interface by navigating to http://[your_homeassistant_ip]:8123.
    • Once logged in, go to Settings (found at the bottom left of the sidebar).
    • From the Settings page, click on System and then select Network.

    3.2 Modify Network Interface[edit | edit source]

    • In the Network section, find the network interface (e.g., eth0) that Home Assistant is using.
    • Click Configure next to the interface to edit its settings.

    3.3 Switch to a Static IP Configuration[edit | edit source]

    • Change the network type from DHCP to Static to manually configure the IP address.
    • Set the following details:
      • IP Address: Enter the desired static IP address (e.g., 192.168.5.4).
      • Gateway: Enter the gateway IP address, the IP of your pfSense router (e.g., 192.168.5.1).
      • DNS Server: Enter the IP address of the DNS server (your pfSense router’s IP, e.g., 192.168.5.1).

    3.4 Save the Configuration[edit | edit source]

    • Once you’ve set the static IP, gateway, and DNS, click Save to apply the changes.
    • Home Assistant will now be reachable at the static IP address you configured.


    Add a Static IP mapping in pfSense[edit | edit source]

    3.6 Log in to pfSense[edit | edit source]

    3.7 Navigate to DHCP Server Settings[edit | edit source]

    • Once inside pfSense, go to Services > DHCP Server.
    • In the DHCP Server settings, go to the LAN tab, as this is where you’ll configure the static mapping for devices on your local network.

    3.8 Add a Static IP Mapping[edit | edit source]

    • Scroll down to the DHCP Static Mappings section and click on Add Static Mapping.

    3.9 Enter the Information[edit | edit source]

    • MAC Address: Find the MAC address of your Home Assistant virtual machine. To do this:
      • In pfSense, navigate to Diagnostics > ARP Table.
      • Look for the MAC address associated with the Home Assistant VM’s current IP (this can also be found within the Virtual Machine Manager or via the Home Assistant network settings).
    • IP Address: Enter the static IP address you configured earlier in Home Assistant (e.g., 192.168.5.4).
    • Description: Enter a description for easy identification (e.g., homeassistant).
    • Hostname: Enter homeassistant
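    Cameras, stickers, ARP tools, and virtual machine managers all print MAC addresses every which way (dashes, dots, uppercase), while pfSense’s static-mapping field wants the usual colon-separated hex pairs. A small sketch (the MAC shown is made up) of normalizing whatever you copied:

```python
import re

# Made-up example MAC; tools print these with dashes, dots, or uppercase.
raw = "A4-14-37-0B-2C-9D"

# Normalize to colon-separated lowercase hex pairs for pfSense's field.
pairs = re.findall(r"[0-9a-fA-F]{2}", raw)
mac = ":".join(p.lower() for p in pairs)
print(mac)
```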

    3.92 Save and Apply Changes[edit | edit source]

    • Click Save to add the static mapping.
    • After saving, click Apply Changes to make sure the static IP reservation is applied on your network.

    3.94 Make Sure This Actually Works[edit | edit source]

    • After configuring the static IP and DHCP mapping:
      • Make sure Home Assistant is reachable at the assigned IP (e.g., http://192.168.5.4:8123).
      • In pfSense, you can check the Status > DHCP Leases section to confirm that Home Assistant is using the correct IP address and that the static mapping is working.
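    If Home Assistant is not reachable after all this, the first thing to rule out is a subnet typo. If you have Python handy, the standard ipaddress module can sanity-check the numbers (using the example addresses from above; substitute your own):

```python
import ipaddress

# Example values from this guide; substitute your own.
gateway = "192.168.5.1"
static_ip = "192.168.5.4"
prefix = 24  # equivalent to a 255.255.255.0 subnet mask

# strict=False lets us pass a host address and get its network back.
network = ipaddress.ip_network(f"{gateway}/{prefix}", strict=False)
ok = ipaddress.ip_address(static_ip) in network
print(f"{static_ip} in {network}: {ok}")
```

If this prints False, your static IP and your gateway are not on the same subnet, and no amount of saving and applying will make them talk.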

    Step 4: Set Up the Venstar Thermostat so Home Assistant can see it[edit | edit source]

    4.1. Connect the Venstar Thermostat to Wi-Fi[edit | edit source]

    • On the thermostat, go to Wi-Fi Setup. The thermostat will display a list of available networks.
    • Select your desired Wi-Fi network and enter the password if necessary.
    • Once connected, make sure that the thermostat remains on the same network that your Home Assistant instance is on, or another network that can communicate with Home Assistant.

    NOTE: Make sure you tap the right network; this garbage touchscreen makes it very easy to tap the wrong network & not notice it. Whoever chose this touchscreen should be in the same prison with the engineers of the A1237/A1304 model MacBook Air from 2008.

    4.2 Configure the Local API on the Thermostat[edit | edit source]

    • On the thermostat, navigate to the Local API Options.
    • Turn on Local API access, which is necessary for Home Assistant to communicate with the thermostat.
    • Set a username (e.g., second floor), and configure a Basic Auth password. You’ll need this information when adding the thermostat in Home Assistant.
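    For the curious: Basic Auth is nothing exotic. It’s just username:password run through base64 and stuffed into an HTTP header, which is roughly what any client, Home Assistant included, sends to the thermostat. A quick sketch with a made-up password:

```python
import base64

# Hypothetical credentials; use whatever you set on the thermostat.
username = "second floor"
password = "hunter2"

# HTTP Basic Auth is just "username:password", base64-encoded.
token = base64.b64encode(f"{username}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
```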

    4.3 Assign a Static IP to the Thermostat[edit | edit source]

    • On the thermostat, navigate to Manual Setup > Network Settings.
    • Assign a static IP to the thermostat. This ensures that the IP address does not change, which is very important or you will find yourself freezing to death when you can’t turn off the A/C.

    NOTE: Home Assistant needs to know where to find the thermostat: at the same place, every single time. Anytime you attach an IoT device to your network, it is good practice to give it a static IP. You will find out later in the “syncthing” section why expecting “auto locate” features to work properly is a bad idea. Summers are 117 degrees Fahrenheit in Texas; I’m not trusting that to DHCP.

    • IP Address: 192.168.5.18 (or another appropriate IP in your network range)
    • Gateway: 192.168.5.1 (typically your pfSense router’s IP)
    • DNS Server: 192.168.5.1
    • Subnet Mask: 255.255.255.0

    4.4 Confirm the Settings[edit | edit source]

    • After entering the network configuration, make sure that the thermostat is connected and reachable on your network.

    Step 5: Add the Venstar Integration in Home Assistant[edit | edit source]

    5.1 Access Home Assistant[edit | edit source]

    • Open the Home Assistant web interface by navigating to http://[your_homeassistant_ip]:8123.
    • Log in with your Home Assistant credentials.

    5.2 Navigate to the Integrations Section[edit | edit source]

    • In Home Assistant, click on Settings from the sidebar.
    • Under Settings, go to Devices & Services.
    • Click on Add Integration.

    5.3 Search for the Venstar Integration[edit | edit source]

    • In the search bar, type Venstar to find the Venstar integration.

    5.4 Enter Thermostat Details[edit | edit source]

    • When prompted, enter the following information:
      • Host: Enter the static IP address you assigned to the thermostat (e.g., 192.168.5.18).
      • Username: Enter the username you set up on the thermostat (e.g., second floor).
      • PIN Code: If required by your thermostat model, enter the PIN code (optional). THIS THERMOSTAT DOES NOT REQUIRE PIN
      • SSL Certificate: yes by default for my thermostat; yours may be different.

    5.5 Submit the Integration[edit | edit source]

    • Click Submit. Home Assistant will now attempt to connect to your Venstar thermostat using the provided details.
    • If successful, the Venstar thermostat will be added as a device in Home Assistant.

    Step 6: Configure the Thermostat in Home Assistant[edit | edit source]

    6.1 Assign the Thermostat to an Area[edit | edit source]

    • After adding the integration, you can assign the thermostat to an area, such as Living Room. This part confuses me; there are so many labels & subcategories that it is easy to get lost in them all.

    6.2 Add Thermostat Controls to Your Dashboard[edit | edit source]

    • Go to Overview in Home Assistant.
    • Click the diagonal line that is supposed to look like a pencil in the upper right-hand corner.
    • Now you are in the edit dashboard menu, which does absolutely nothing.
    • Click the three dots in the upper right corner, then click Take Control so you can actually edit your dashboard.
    • Click Start with an empty dashboard.
    • Click Edit Dashboard, then click Add Card.
    • Select Thermostat as the card type, and choose your Venstar thermostat from the list.
    • Give the thermostat a cool name, like Second Floor Thermostat, and click Done.

    6.3 Customize the Dashboard[edit | edit source]

    • If you want to adjust or hide certain things, you need to click “Take Control” in what is some of the most confusing UI of all time.

    NOTE: You have to hit Take Control in order to do anything with the interface. This is not obvious or intuitive. I set up my dashboard on android when I set up my own system, so I never saw the dashboard in the web interface. I tried the web interface dashboard for the first time when I did this guide. It owned me good.

    6.4. Use the Venstar Thermostat in Home Assistant[edit | edit source]

    1. Control the Thermostat
      • From the dashboard, you can now adjust the temperature, set heating or cooling modes, and control the fan (e.g., always on or only when the compressor is active).
    2. View Historical Data
      • Home Assistant provides historical graphs showing temperature changes and thermostat actions (e.g., target temperature vs. actual temperature) over time, which you can view directly in the thermostat card on your dashboard.

    Step 7: Install Home Assistant Application on Your Phone to Adjust POS Thermostat So You Never Have to Touch Its Touchscreen Again[edit | edit source]

    7.1 Install the Home Assistant App on Android[edit | edit source]

    1. Open the Google Play Store
      • On your Android device, open the Google Play Store app.
    2. Search for Home Assistant
      • In the search bar, type Home Assistant.
    3. Install the App
      • Once you find the Home Assistant app (from Nabu Casa), tap Install to download and install it on your phone.
    4. Open the App
      • After installation is complete, tap Open to start the Home Assistant app.

    7.2 Make Sure OpenVPN Connect is Connected[edit | edit source]

    • Open the OpenVPN Connect app and connect to the VPN profile you set up for accessing your home network.

    It’s important that you are connected to your VPN when accessing Home Assistant from outside your local network! None of this is set up with open ports to the outside world. Without VPN, no air conditioning for you.

    7.3. Log In to Home Assistant on Android[edit | edit source]

    1. Launch the Home Assistant App
      • Open the Home Assistant app you installed earlier.
    2. Connect to Home Assistant
      • The app may automatically search for your Home Assistant instance. If it doesn’t find it, you can manually enter the IP address. Since you are connected via VPN, you’ll enter your Home Assistant server’s local IP, aka http://192.168.5.4:8123.
      • You can’t add 192.168.5.4.
      • You can’t add 192.168.5.4:8123.
      • IT MUST BE http://192.168.5.4:8123.
      • You have to have the http:// and the port.
    3. Log In
    4. Enable Location Tracking (Optional)
      • You’ll be prompted to enable location tracking. You can choose to allow or deny this depending on your preferences. They’re not spying on you though; they’re nice people, not like the evil bastards that sold you your car.
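    If you’re wondering why the app is so picky about the full http://192.168.5.4:8123 form, it’s because URL parsing needs the scheme to anchor everything else. A quick illustration in Python:

```python
from urllib.parse import urlparse

# The full form works: scheme, host, and port are all recoverable.
good = urlparse("http://192.168.5.4:8123")
print(good.scheme, good.hostname, good.port)

# Drop the scheme and the parser can no longer find a hostname at all.
bad = urlparse("192.168.5.4:8123")
print(bad.hostname, bad.port)
```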

    7.4. Adjust the Thermostat Using the Home Assistant App[edit | edit source]

    1. Access the Thermostat in the App
      • After logging in, you’ll see the Home Assistant dashboard.
      • Find your Venstar Thermostat (e.g., Second Floor Thermostat) on the dashboard.
    2. Control the Thermostat
      • Tap on the thermostat card to open the controls.
      • From here, you can:
        • Adjust the Temperature: Use the sliders or buttons to set the temperature.
        • Set Mode: Change the thermostat to Heat, Cool, or Auto.
        • Fan Control: Choose whether the fan should run Continuously or only when the heat/AC is on.
    3. Monitor Historical Data
      • The app will display historical data showing the target temperature and current room temperature over time, so you can see when it was on, etc.

    Historical data will not show how many times you have punched the thermostat’s touchscreen, cursed at Venstar, or threatened the lives of the people who engineered it. But it should.

    Home surveillance camera system with alerts:[edit | edit source]

    Next up, I’m going to show you how to set up a home surveillance system. This system will send alerts to your phone whenever someone passes by the cameras around your house. These security cameras use standard protocols like RTSP and ONVIF – they are STANDARDS, and as a result, they cannot be taken away from you later. When you buy these cameras, YOU own the cameras, YOU own the video, and YOU own the alerts system. No cloud subscriptions, nobody having the ability to change the terms of the sale. No bullshit. :)

    Step 1: Choosing cameras[edit | edit source]

    For this tutorial, I am using a Hikvision camera as an example.

    Why Choose Hikvision Cameras?[edit | edit source]

    I’m settling with Hikvision for the same reason your parents settled on each other; not because they’re the best, but because they’re good enough & available. These cameras are everywhere, especially in small businesses in New York City. When businesses close and liquidate, you can find these cameras for as cheap as $150 for a lot of eight; they do 2-megapixel video in good enough quality to see license plates and make out fine facial features.

    You can find these cameras on eBay for as low as $30 or $40 each, and sometimes even cheaper in bulk at liquidation sales. Because they’re so popular, & cheap for the quality you can get, I’m using them as an example.

    Alternatives for the Best Quality[edit | edit source]

    If you’re looking for the best of the best, I suggest cameras from a company called Axis. They make really high-quality stuff, but you’re not finding a lot of 8 for $150 in a liquidation sale.

    If you want the best, there’s nothing like AXIS.

    If you are concerned about Chinese equipment phoning home & sending Xi Jinping photos of you pissing in your backyard at 1 AM, I’ll show you how to create a second network in pfSense at the end of this guide. Once that’s done, you can make it way more difficult for Xi to get a good view.

    Step 2: Setting up the Hikvision Camera from Scratch[edit | edit source]

    2.1 Introduction to Hikvision IP issues[edit | edit source]

    When you get a good camera, it usually uses DHCP to connect to your network. This means when you hook it up, you’ll be able to see it in the ARP table on your pfSense router. It’ll grab an IP address that your router provides, and boom, it’s on the network.

    …I said a GOOD camera. These are (likely grey market) Hikvisions set up into god knows what configuration being sold by a business liquidator.

    Cheaper cameras might not do this. They often come with some weird static IP like 192.0.0.64, and you have no idea what it’s trying to connect to. Hikvision cameras can be like this sometimes.

    2.2 Download the SADP Tool[edit | edit source]

    To fix this IP issue, Hikvision offers a tool called SADP. Unfortunately, this tool requires Windows. So, I’m booting up a sandboxed Windows computer here. It’s a burner computer I use for college math classes because, apparently, you can’t learn math on GNU/Linux, so I keep it around for the cancer that is Pearson Vue.

    Download and Install SADP: Grab it from Hikvision’s website.

    Sometimes, these cameras come with passwords that even the seller doesn’t know. You might have to reset it by hitting a button inside the camera to get it back to default settings.

    Once SADP finds your camera, you can log in and configure it. Often, you’ll need to look up the default password online or in the manual.

    2.3 Running SADP to prepare camera for login[edit | edit source]

    Once installed, run SADP and have it find your camera. Once it finds your camera, click on that camera, set it to DHCP, and apply the configuration. You have to enter the password to do this.

    The reason we are using DHCP at first rather than static IP is because this is insanely janky & I want to confirm that it even works & lets you log in at all before going further.

    If you know the password, you’re done with 99% of the setup. If it doesn’t work, google the default password for that specific model of hikvision camera.

    If that doesn’t work, you can either:

    • Message the seller and ask them, but 99% of the time they know less than you about whatever they’re liquidating
    • Open the camera physically & find a button you can hit to reset it. At that point, the default user/pass you find on google should now work.

    2.4 Logging into your newfound camera[edit | edit source]

    After this, sign into your pfSense router and go to Status > DHCP Leases to find your camera. I used Diagnostics > ARP Table since I’m used to it. Once you know its IP, put it into your web browser and log right in. :)

    2.5 Configuring a Static IP[edit | edit source]

    First things first, you want to give your camera a static IP address. For instance, if you choose 192.168.5.19, you set it so you always know where to find it. This is necessary; imagine your system goes offline for a few minutes and something steals your camera’s IP address, and now your security camera recorder is trying to get a video feed from your refrigerator? Sadly, by the time this is published, your fridge might actually have a video feed…

    • Configure network settings with a static IP:
      • Click Configuration
      • Click Network on the left side
      • Uncheck DHCP
      • Set an IPv4 Address on your subnet, anything from 192.168.5.5-192.168.5.254 will do here.
      • Set the IPv4 Default Gateway to be your pfSense router.
      • Click Test to make sure you didn’t screw something up before you save this configuration & can no longer log into your camera.
    • Set Preferred DNS server and Alternate DNS server to the IP address of your pfSense router, which in our case is 192.168.5.1.
    • User management: Set a username and password for security.

    2.6 Configure a Static Mapping in pfSense[edit | edit source]

    Follow the same instructions from our prior static mappings to set up a static mapping for our camera so that other devices do not steal its IP address.

    2.7 Create a REAL Password for the camera[edit | edit source]

    No, we’re not keeping the username and password to “admin/password”

    1. Once inside the camera’s configuration interface, go to Configuration at the top.
    2. Go to System on the left side.
    3. Go to User Management.
    4. Click Modify on the admin user.
    5. Don’t use the word “password” or “12345” as your password.
    6. Put this in a password manager when you’re done. Not a post-it on your monitor.
    7. Don’t write the password on the camera. I will come through this screen like Samara from The Ring and drag you so deep down a well you’ll end up on a cave diving YouTube channel.

    2.8 Change Video Codec to H.264[edit | edit source]

    When it comes to video encoding, I’d use H.264 over H.265. Frigate & web browsers can be fussy playing back H.265, and the quality bump is not something I notice enough to be worth the aggravation. Given this is a beginner’s guide, the safe choice is to use the codec that is less likely to cause aggravation.

    Frigate is going to have two streams – one that detects when something is going on (a dog, a cat, a car, a human, etc.), and another that does the recording. If we have a high-quality stream doing all of the detection work, our system is going to be killing itself all the time unnecessarily. We don’t need 12k Blackmagic Ursa quality video to tell whether we’re looking at a car’s license plate or a plastic bag in the wind. We do need good quality to record, though.

    We’re going to set up one high-quality stream for recording, and another lower-quality stream for monitoring what’s going on. This way, we get high-quality video for playback, without unnecessarily blowing up the resource consumption on our computer.

    • While logged into the camera interface, click Configuration.
    • Click Video/Audio on the left side, and select Stream Type as Main Stream (Normal). This is the feed we will be recording.
      • For Main Stream (Normal), set Video Encoding to H.264.
      • Set Video Quality to Highest.
      • Resolution and Frame Rate are up to you – I like the highest resolution that gets me at least 20 frames per second. Lower than this and it starts to turn into a slideshow.
    • Now, select Stream Type and click onto the 2nd stream listed.
    • Set a very low Resolution, something in the 600x300-ish range.
    • Set the Video Quality to medium.
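    To see why we bother with two streams, do the disk math on the recording stream. A rough sketch (the 4 Mb/s bitrate here is an assumption for illustration, not a measured Hikvision figure; check your camera’s actual setting):

```python
# Back-of-envelope disk usage for continuous recording.
# The 4 Mb/s bitrate is an assumed figure for illustration, not a spec.
bitrate_mbps = 4
seconds_per_day = 24 * 60 * 60

megabytes_per_day = bitrate_mbps / 8 * seconds_per_day  # Mb/s -> MB/s -> MB
gb_per_day = megabytes_per_day / 1000
print(f"~{gb_per_day:.1f} GB per camera, per day")
```

Multiply that by eight liquidation-sale cameras and you see why detection runs on the low-quality stream, and why recordings belong on big hard drives, not your operating system’s SSD.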

    2.9 Finding the URL where we access the camera’s stream[edit | edit source]

    Before setting up your NVR software, make sure you can view the stream using a program like VLC. Here’s how you do it:

    1. Find the stream address: Use NMap to discover all streams on port 554 (RTSP port).

      nmap -d --script rtsp-url-brute -p 554 192.168.5.19
    2. Identify streams: Look for streams ending in .sdp, typically stream1 for high quality and stream2 for lower quality.

    3. Modify the URL: Adjust the RTSP URL with your username and password.

      rtsp://username:password@<camera_ip>/stream1.sdp

    Hint: You will see the high quality & the low quality stream in this list. You’ll have to mess around a bit to figure out which one is which; it should be obvious when you are viewing the high quality stream & when you are viewing the low quality stream, based on the video quality.
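    One gotcha before you paste that URL anywhere: if your camera password contains characters like @ or :, the URL becomes ambiguous and those characters must be percent-encoded. A sketch with made-up credentials:

```python
from urllib.parse import quote

# Made-up credentials; '@' and ':' in a password would break the URL
# unless percent-encoded first.
user = "admin"
password = "p@ss:word"
camera_ip = "192.168.5.19"

url = f"rtsp://{quote(user, safe='')}:{quote(password, safe='')}@{camera_ip}/stream1.sdp"
print(url)
```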

    2.99 Testing Streams in VLC[edit | edit source]

    Once you’ve got the URLs, test them in VLC to ensure they work. You can click Media—> Open Network Stream and then enter the URL. If you don’t have VLC… Get VLC. It is the best multi-format video player there is.

    Once you have a working & properly set up camera, let’s install our NVR – that stands for Network Video Recorder. This is what will monitor the video feeds coming from our cameras & record it to disk for us.

    Step 3: Installing Docker and Setting Up Frigate with Specific Version 0.13.2[edit | edit source]

    Frigate is a lovely network video recorder.

    Next, we’re going to clone the Frigate repository. I’m going to download Frigate, but I’m using the old version of Frigate rather than the new version. I’ll show you why once I’m done installing. The new version, in my opinion, took a well thought through user interface and destroyed it. I don’t mean minor changes; think Amber Heard doing plastic surgery on Johnny Depp. It’s that bad.

    Johnny Depp would still look better after that than Frigate looked after the jump from version 0.13 to 0.14. They destroyed it. You can’t even view events for more than one day at a time. It’s horrifically bad.

    I’m downloading an old version, and I’ll show you the differences so you can decide for yourself. The setup routines are IDENTICAL with regards to configuring alerts in Home Assistant, etc.

    This project still deserves donations, purchases, & funding for how good Frigate 0.13 is, as well as thanks & praise for keeping it open source so we even HAVE the option to use older versions.

    3.1 Install Docker[edit | edit source]

    1. Verify Existing Docker Installation:

      Run the command to check if Docker is installed: docker --version. Make sure the version is 24.0.0 or later. If it’s an older version, remove it by using:

      sudo apt remove docker docker-engine docker.io containerd runc
    2. Install the Latest Version of Docker:

      Download and install Docker using the official installation script. Run:

      curl -fsSL https://get.docker.com -o get-docker.sh
      sudo sh get-docker.sh

    Note: Use the official Docker installation, not the Snap version. The Snap version is horrible & causes tons of issues. If you got tricked into installing Docker at the end of the Ubuntu server installation prompts, I am sorry, but you have to remove that, it’s garbage. Run sudo snap remove docker and never look back.

    3. Install Docker Compose:

      sudo apt install docker-compose-plugin -y
    4. Verify Docker Compose Installation:

      Make sure the Docker Compose version is 2.0 or higher by running:

      docker compose version
    5. Set Proper Permissions for Docker:

      • Docker typically requires root permissions, but you can add your user to the Docker group to avoid using sudo. Run:

        sudo usermod -aG docker $USER
      • Log out and log back in, or run:

        newgrp docker

    3.2 Install Frigate[edit | edit source]

    1. Create a Directory for Frigate:

      • Run the following command to create a directory to store Frigate files:

        mkdir -p /home/$USER/Downloads/programs
        cd ~/Downloads/programs
    2. Clone the Frigate Repository:

      • Clone the Frigate GitHub repository by running:

        git clone https://github.com/blakeblackshear/frigate.git
        cd frigate
    3. Set Up Docker Compose for Frigate:

      • Create and edit the docker-compose.yml file. Make sure it specifies Frigate version 0.13.2. New versions use a horrible user interface that is rage inducing. My example file below specifies version 0.13.2 for you. You’ll need to set the container name, restart policy, image version, shared memory size, devices (e.g., USB Coral, PCIe Coral, video device for Raspberry Pi), and volumes for storing local time, config files, media, and cache. Be sure to open necessary ports (e.g., 5000, 8971, 8554, 8555).

      • If any of what I said in the last bullet point after the “rage inducing” part confuses the hell out of you, don’t worry: you have the easiest path there is; JUST COPY AND PASTE BELOW WITHOUT MESSING WITH IT!

    version: "3.9"
    services:
      frigate:
        container_name: frigate
        privileged: true # This may not be necessary for all setups
        restart: unless-stopped
        image: ghcr.io/blakeblackshear/frigate:0.13.2 # Last good version
        shm_size: "64mb" # Update for your cameras based on requirements
        devices:
          - /dev/bus/usb:/dev/bus/usb # USB Coral, modify for other hardware
          - /dev/apex_0:/dev/apex_0 # PCIe Coral, modify based on your setup
          - /dev/video11:/dev/video11 # For Raspberry Pi 4B
          - /dev/dri/renderD128:/dev/dri/renderD128 # Intel hwaccel, update for your hardware
        volumes:
          - /etc/localtime:/etc/localtime:ro
          - ./config:/config
          - ./storage:/media/frigate
          - ./database:/data/db
          - type: tmpfs # Optional: Reduces SSD wear
            target: /tmp/cache
            tmpfs:
              size: 1000000000
        ports:
          - "8971:8971"
          - "5000:5000" # Internal unauthenticated access. Be careful with exposure.
          - "8554:8554" # RTSP feeds
          - "8555:8555/tcp" # WebRTC over TCP
          - "8555:8555/udp" # WebRTC over UDP
        environment:
          FRIGATE_RTSP_PASSWORD: "password"

    IMPORTANT NOTE: By default, this is going to record to the solid state drive that is your main drive, which is very bad practice. The only reason it is configured this way is because we have not gotten to the ZFS pool creation part of the guide, where we will create a redundant, encrypted, self-healing array of drives as a ZFS pool. We want to record camera footage to large hard drives, not tiny solid state drives.

    Later on in the guide, you will want to change this once ZFS is set up. The two lines of interest will be:

          - ./storage:/media/frigate
          - ./database:/data/db
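    For reference, once the ZFS pool exists, those two lines will point at directories on it instead. A hypothetical sketch (the /mnt/tank mountpoint is a placeholder; your pool name and paths will differ):

```yaml
    volumes:
      # Placeholder paths: substitute your real ZFS pool mountpoint.
      - /mnt/tank/frigate/storage:/media/frigate
      - /mnt/tank/frigate/database:/data/db
```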
    • This is still set to record everything to the main drive: we will come back to edit this later once we have set up a ZFS pool at the end.

    > DOCKER CHEAT SHEET: breaking down the docker-compose.yml file for Frigate
    >
    > Every line of this docker-compose.yml is there for a reason. You likely have no clue what this is all for if you are reading this, so let’s go through it.
    >
    > 1. version: "3.9"
    > This is the version of the Docker Compose file format. Version 3.9 is compatible with new Docker setups.
    >
    > 2. services:
    > This section defines the “services” you want to run, which are containers. Here, we only have one container: frigate.
    >
    > 3. frigate:
    > This is the name of the service (container). It helps you identify the container in logs or commands like docker ps. You can name it anything you like, but frigate makes sense since that’s the application we’re running.
    >
    > 4. container_name: frigate
    > Custom name for the frigate container so it is easy to find when you type docker ps -a. Sometimes, while debugging things that are not working, you may want to enter the environment of the container (this is like SSHing into your server, but into the virtual server that runs Frigate), which you can do by typing docker exec -it frigate bash – but to do that, you need to know which container is which! This is where using sensible names comes into play.
    >
    > 5. privileged: true
    > Running the container in “privileged mode” allows it to access hardware devices like USB or PCIe directly. This is done because Frigate can use devices you plug in (like a Coral) to improve the performance of the machine learning for detecting items on camera (car, human, bird, etc.).
    >
    > Warning: This gives the container elevated permissions, so only use it if absolutely needed (like here).
    >
    > 6. restart: unless-stopped
    > This tells Docker to restart the container unless you stop it. If the computer reboots or the container crashes, it will turn back on automatically.
    >
    > 7. image: ghcr.io/blakeblackshear/frigate:0.13.2
    > This tells it what Docker image to use. Here, we’re pulling version 0.13.2 of Frigate from the GitHub Container Registry (ghcr.io) instead of the newest one, because the user interface was tortured & butchered to death with new releases. They destroyed it. It makes me sad how bad new versions are.
    >
    > 8. shm_size: "64mb"
    > This sets the size of shared memory available to the container. Frigate uses shared memory for hardware acceleration and video processing. The Frigate documentation tells you how to increase this based on how many cameras you have running.
    >
    > 9. devices:
    > This part of the docker-compose file maps hardware devices from your host system (the physical computer you are installing this program onto) into the container. Frigate needs access to specific hardware for video processing. Let’s explain each line:
    >
    > - /dev/bus/usb:/dev/bus/usb: Maps USB devices for hardware like a USB Coral accelerator, which can improve/speed up object detection & take the load off of the host computer.
    > - /dev/apex_0:/dev/apex_0: Maps a PCI Express Coral for faster object detection.
    > - /dev/video11:/dev/video11: Maps a video input device, like a camera, for systems like the Raspberry Pi.
    > - /dev/dri/renderD128:/dev/dri/renderD128: Maps Intel hardware acceleration for video encoding/decoding.
    >
    > 10. volumes:
    > This section maps directories or volumes between the host and the container. Volumes are where we save configuration, media, and data outside the container so they continue existing even if the container is restarted/deleted/shut off.
    >
    > - /etc/localtime:/etc/localtime:ro: This maps the time of the host computer to the time of the container (“computer”) running Frigate. The :ro means “read-only,” so the container can’t cause the host machine to time travel. Time travel is cool though. If you agree, watch the movie Primer – you won’t be disappointed. Triangle is a close second. The ending messes me up every time.
    > - ./config:/config: Maps the config directory on the host to /config in the container, where Frigate expects its configuration file.
    > - ./storage:/media/frigate: Maps the storage directory on the host to /media/frigate in the container, where Frigate saves camera recordings.
    > - ./database:/data/db: Maps the database directory on the host to /data/db in the container, where Frigate stores metadata and video analytics.
    > - type: tmpfs: Creates a temporary file system in memory. This reduces wear on SSDs by storing cache data in RAM.
    > - target: /tmp/cache: Specifies the location of the cache inside the container.
    > - tmpfs.size: 1000000000: Limits the cache size to 1 GB.
    >
    > 11. ports:
    > This section maps network ports on the host to ports in the container. It allows you to access Frigate’s web interface and services.
    >
    > - "8971:8971": Exposes Frigate’s main web interface on port 8971.
    > - "5000:5000": Exposes an internal port for access without username/password authentication. We will fix this later using nginx & an authentication setup.
    > - "8554:8554": Exposes Real-Time Streaming Protocol (RTSP) feeds for viewing video streams.
    > - "8555:8555/tcp" and "8555:8555/udp": Expose WebRTC services over TCP and UDP, allowing low-latency streaming.
    >
    > 12. environment:
    > This section defines environment variables, which are key-value pairs that configure the container.
    >
    > - FRIGATE_RTSP_PASSWORD: "password": Sets the password for accessing RTSP streams in Frigate.
    >
    > 13. Important Warning About Default Storage
    > By default, this configuration saves camera footage (./storage:/media/frigate) and metadata (./database:/data/db) to your main drive. This is fine for testing, but long-term use will fill up and wear out your SSD. Later in the guide, you’ll learn to change these paths to a ZFS pool for redundant, self-healing storage that provides us with way more space than our operating system’s SSD.

    3.3 Create Frigate Configuration File[edit | edit source]

    1. Create and Edit the config.yml File:
      • Create a config/config.yml file to define your cameras & MQTT setup.

      • I have provided a template below. Creating yml files is painful and very easy to mess up. So I provided a known-working file for you to start with.

      • YOU WILL HAVE TO EDIT THE IP ADDRESSES, USERNAMES, AND PASSWORDS IN EACH PATH LINE TO THE URL OF YOUR ACTUAL CAMERA. YOUR CAMERAS WILL ALSO HAVE DIFFERENT URLS THAN MINE. I DID MOST OF THE WORK FOR YOU, BUT DON’T BE SO LAZY THAT YOU DON’T EVEN CHANGE THE CAMERA IPs & USERNAMES & PASSWORDS TO YOURS!

      • To find the RTSP URLs of your camera, you can install nmap on Ubuntu with:

        sudo apt install nmap -y
      • Then you go to your terminal and type the following, replacing the IP address of 192.168.5.19 with the IP address of your camera:

        sudo nmap --script rtsp-url-brute -p 554 192.168.5.19
        sudo nmap --script rtsp-url-brute -p 8554 192.168.5.19
      • You will receive a list of stream URLs. Let’s say one of them is "rtsp://192.168.5.19/Streaming/Channels/101".

    • You need to add your username & password here. So rtsp://192.168.5.19/Streaming/Channels/101 will become rtsp://username:[email protected]/Streaming/Channels/101.
    • Test that this works in a video player like VLC. In VLC, go Media → Open Network Stream → Network URL → enter the URL → click Play.
    • If it works, it can be entered into the path line and replace my URLs in the config file below.
    • The first four lines are going to be for MQTT, which sends messages to Home Assistant so that Home Assistant can send alerts to your phone when someone tries to steal your catalytic converter.
    mqtt:
      host: homeassistant.home.arpa  
      port: 1883
      user: louis
      password: passwordman
    
    cameras:
      front_door_closeup:
        ffmpeg:
          inputs:
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/101
              roles:
                - record
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/102
              roles:
                - detect
          output_args:
            record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy
        detect:
          width: 640
          height: 360
          fps: 20
        objects:
          track:
            - person
            - car
            - motorcycle
            - bird
            - cat
            - dog
            - horse
            - sheep
            - cow
            - bear
            - zebra
            - giraffe
            - elephant
            - mouse
          filters:
            person:
              mask: 570,299,545,0
            cat:
              min_score: 0.01
              threshold: 0.02
            dog:
              min_score: 0.01
              threshold: 0.02
            bird:
              min_score: 0.01
              threshold: 0.02
        motion:
          mask:
            - 473,0,21,156,53,317,140,312
        record:
          enabled: true
          events:
            pre_capture: 5
            post_capture: 5
            objects:
              - person
              - car
              - motorcycle
              - bird
              - cat
              - dog
              - horse
              - sheep
              - cow
              - bear
              - zebra
              - giraffe
              - elephant
              - mouse
    
      driveway:
        ffmpeg:
          inputs:
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/101
              roles:
                - record
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/102
              roles:
                - detect
          output_args:
            record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy
        detect:
          width: 640
          height: 360
          fps: 20
        objects:
          track:
            - person
            - car
            - motorcycle
            - bird
            - cat
            - dog
            - horse
            - sheep
            - cow
            - bear
            - zebra
            - giraffe
            - elephant
            - mouse
          filters:
            car:
              min_score: 0.01
              threshold: 0.03
            cat:
              min_score: 0.01
              threshold: 0.02
            dog:
              min_score: 0.01
              threshold: 0.02
            bird:
              min_score: 0.01
              threshold: 0.02
        record:
          enabled: true
          events:
            pre_capture: 5
            post_capture: 5
            objects:
              - person
              - car
              - motorcycle
              - bird
              - cat
              - dog
              - horse
              - sheep
              - cow
              - bear
              - zebra
              - giraffe
              - elephant
              - mouse
    
      side_door_closeup:
        ffmpeg:
          inputs:
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/101
              roles:
                - record
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/102
              roles:
                - detect
          output_args:
            record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy
        detect:
          width: 640
          height: 360
          fps: 20
        objects:
          track:
            - person
            - bird
            - cat
            - dog
            - horse
            - sheep
            - cow
            - bear
            - zebra
            - giraffe
            - elephant
            - mouse
          filters:
            car:
              min_score: 0.01
              threshold: 0.03
            cat:
              min_score: 0.01
              threshold: 0.02
            dog:
              min_score: 0.01
              threshold: 0.02
            bird:
              min_score: 0.70
              threshold: 0.75
        record:
          enabled: true
          events:
            pre_capture: 5
            post_capture: 5
            objects:
              - person
              - car
              - bird
              - cat
              - dog
              - horse
              - sheep
              - cow
              - bear
              - zebra
              - giraffe
              - elephant
              - mouse
    
      back_door_closeup:
        ffmpeg:
          inputs:
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/101
              roles:
                - record
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/102
              roles:
                - detect
          output_args:
            record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy
        detect:
          width: 640
          height: 360
          fps: 20
        objects:
          track:
            - person
            - car
            - bird
            - cat
            - dog
            - horse
            - sheep
            - cow
            - bear
            - zebra
            - giraffe
            - elephant
            - mouse
          filters:
            car:
              min_score: 0.75
              threshold: 0.75
            cat:
              min_score: 0.01
              threshold: 0.02
            dog:
              min_score: 0.01
              threshold: 0.02
            bird:
              min_score: 0.01
              threshold: 0.02
        record:
          enabled: true
          events:
            pre_capture: 5
            post_capture: 5
            objects:
              - person
              - car
              - bird
              - cat
              - dog
              - horse
              - sheep
              - cow
              - bear
              - zebra
              - giraffe
              - elephant
              - mouse
    
      front_porch_wide_angle:
        ffmpeg:
          inputs:
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/101
              roles:
                - record
            - path: rtsp://CAMERAUSERNAMEGOESHERE:[email protected]:554/Streaming/Channels/102
              roles:
                - detect
          output_args:
            record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy
        detect:
          width: 640
          height: 360
          fps: 20
        objects:
          track:
            - person
            - car
            - motorcycle
            - bird
            - cat
            - dog
            - horse
            - sheep
            - cow
            - bear
            - zebra
            - giraffe
            - elephant
            - mouse
          filters:
            person:
              min_score: 0.8
              threshold: 0.8
            car:
              min_score: 0.6
              threshold: 0.7
            cat:
              min_score: 0.01
              threshold: 0.02
            dog:
              min_score: 0.01
              threshold: 0.02
            bird:
              min_score: 0.6
              threshold: 0.65
        record:
          enabled: true
          events:
            pre_capture: 5
            post_capture: 5
            objects:
              - person
              - car
              - motorcycle
              - bird
              - cat
              - dog
              - horse
              - sheep
              - cow
              - bear
              - zebra
              - giraffe
              - elephant
              - mouse
    
      fishcam:
        ffmpeg:
          inputs:
            - path: rtsp://louis:[email protected]:554/stream1
              roles:
                - record
            - path: rtsp://louis:[email protected]:554/stream1
              roles:
                - detect
          output_args:
            record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy
        detect:
          width: 640
          height: 360
          fps: 20
        objects:
          track:
            - person
          filters:
            person:
              min_score: 0.3
              threshold: 0.3
        record:
          enabled: true
          events:
            pre_capture: 15
            post_capture: 15
            objects:
              - fish
    
    database:
      path: /data/db/frigate.db
    #version: 0.14

    Note: For each camera, configure the RTSP inputs for recording and detection streams. Define output arguments, detection settings (e.g., width, height, fps), and tracked objects (e.g., person, car, bird, dog). You can set filters for specific objects, mask areas for motion detection, and enable event recording with pre-capture and post-capture times. Repeat for additional cameras as needed.

    3.4 Running Frigate[edit | edit source]

    1. Start Frigate:
      • Start Frigate by running: docker compose up -d.
    2. Access the Frigate Web Interface:
    3. Configure Additional Settings:
      • Edit the config.yml file as needed to add or modify cameras, object tracking settings, or motion detection masks.
    4. Note on Storage:
      • It’s recommended to use a separate storage device for Frigate’s media to avoid unnecessary wear on your primary SSD. We’ll go into detail about setting up ZFS pools & external storage later.

    3.5 Enjoy Frigate![edit | edit source]

    You have the best NVR software there is, and no cancerous hideous modern UI. Enjoy!

    Step 4: Make sure it all works.[edit | edit source]

    There’s nothing worse than someone kidnapping your kid or killing your dog & not being able to see who did it because you set your threshold too low in a yaml file. Extensively test everything. Assume it won’t work later, because often with camera systems, it doesn’t.

    Step 5: Get Instant Camera Alerts On Your Phone[edit | edit source]

    Now you have a camera you can see when you log into it, but don’t you want to get an alert if some weirdo is walking through your backyard? Home Assistant and Frigate can talk to each other to make this happen.

    Home Assistant needs two things:

    • To receive communication from Frigate
    • A client and a broker that understand that communication.

    We are going to go over how to set all of this up – using a handy extension that makes it simple point-and-click, so we can avoid writing miserable YAML files by hand.

    5.1 Switch gears & go back to Home Assistant[edit | edit source]

    1. Open web browser
    2. Go to http://192.168.1.7:8123 or http://homeassistant.home.arpa:8123

    5.2 Download and Install HACS[edit | edit source]

    1. Download HACS (Home Assistant Community Store):
      • Go to HACS → Download on their website.
      • Click the OS/Supervised version, as that’s the version of Home Assistant we have installed.
    2. Open the HACS Add-on Repository:
      • Click the link provided to add the HACS repository to your Home Assistant instance. It’ll ask you to Add missing.
    3. Enter Home Assistant URL:
      • It will ask for your Home Assistant link.
      • By default, Home Assistant may attempt to use homeassistant.local:8123, which will fail.
      • If you are following this guide’s setup, use http://homeassistant.home.arpa:8123 or http://192.168.1.7:8123.
      • Replace these with your actual Home Assistant domain or IP address if different.
    4. Install HACS:
      • Follow the prompts to install HACS in Home Assistant.
      • BE PATIENT! Click on the LOGS tab and wait for it to be DONE!!! before you try to start adding things, or nothing will work.
    5. Restart Home Assistant:
      • After installation, restart your Home Assistant instance for the changes to take effect.
      • Go to settings → system → power button icon in the upper right-hand corner, click the power button, and click “restart home assistant.” DO NOT DO THIS UNTIL THE LOGS TAB FOR HACS SAYS EVERYTHING IS DONE
    6. Clear your browser cache, cookies, etc.
    7. Log back into Home Assistant.
    8. Go to Settings → Devices & Services → Add Integration & Search for HACS
      • If it doesn’t show up, do not pass go, do not collect $200 – re-follow the instructions here and here. Clear your browser cache & cookies, choose the option to reboot Home Assistant (rather than restart) under Settings → System → power button icon in the upper right-hand corner, then go to Settings → Add-ons → get HACS → CLICK START.
    9. Go to logs
    • Wait! Don’t be impatient! Wait for it to be done. You will see the following at the end of the log when it is done:

      INFO: Installation complete.
      INFO: Remember to restart Home Assistant before you configure it.
      s6-rc: info: service legacy-services: stopping
      s6-rc: info: service legacy-services successfully stopped
      s6-rc: info: service legacy-cont-init: stopping
      s6-rc: info: service legacy-cont-init successfully stopped
      s6-rc: info: service fix-attrs: stopping
      s6-rc: info: service fix-attrs successfully stopped
      s6-rc: info: service s6rc-oneshot-runner: stopping
      s6-rc: info: service s6rc-oneshot-runner successfully stopped
    10. Add Integration Properly:

      • Go to Settings → Devices & Services → Add Integration & search for HACS.

      • Check the boxes.

      • Click submit.

      • It will ask you to open a link to log into GitHub, and insert a code. Click it.

      • Go to GitHub. If you lack an account, make one. If you have a GitHub account, log in.

      • Enter code.

      • Authorize HACS.

      • Add HACS to an “area.”

      • Click finish.

      • Next step!

    5.3 Add Frigate Add-ons to Home Assistant[edit | edit source]

    1. Visit Frigate Home Assistant Add-ons page
    2. Log back into Home Assistant when it prompts you to.
    3. Add Frigate Repository:
      • Click the bright blue “Add-on repository to my Home Assistant” button.
    4. Download and Install Frigate:
      • You’ll see two buttons. One is a blue button that says “Open with Home Assistant Store,” and the other is for downloading the add-on.
      • Important: The blue button in the middle refreshes the page without installing anything.
      • To download and install Frigate, make sure to click the Download button at the bottom.
    5. Access Home Assistant Again:
      • You’ll be prompted again to enter your Home Assistant domain with :8123.
      • Remember, the default URL homeassistant.local:8123 won’t work. Home Assistant assumes you’re using a standard router where the domain is .local – but with pfSense, it is .home.arpa. Use your Home Assistant hostname or IP with :8123 (e.g., http://homeassistant.home.arpa:8123).
    6. Click “Download” in the lower left corner.
    7. Continue with installing, wait for it to install — it should be quick.
    8. Go to Home Assistant Settings in the lower left corner.
    9. It will say “1 repair, restart required” with the little Frigate logo at the top, or just restart required at the top.
    10. Click this, follow prompts, and restart Home Assistant.

    5.4 Add Frigate Integration[edit | edit source]

    1. Add Frigate integration to Home Assistant

      1. Go to Settings in the Home Assistant menu.

      2. Navigate to Devices & Services.

      3. Click Add Integration, and search for Frigate in the list. Follow the prompts to add it.

    2. Enter Frigate URL:

      1. The URL will be the IP address you chose for the server you installed Frigate on, or its hostname: in my case http://192.168.5.2:5000, OR http://happycloud.home.arpa:5000 with the examples I have provided.
    3. Once Frigate is integrated, you’ll be asked to assign cameras to specific areas within Home Assistant. Select the appropriate areas for your cameras.

    5.5: Configure Mosquitto Broker & MQTT (in that order)[edit | edit source]

    1. Check if MQTT Broker (Mosquitto) is Installed: Go to Settings > Add-ons and find the blue add-on Store button at the bottom right.
    2. Look for Mosquitto Broker.
    3. Click Install.
    4. Once installed, make sure Start on Boot is enabled, and hit Start.
    5. Configure MQTT Broker in Home Assistant:
      • Go to Settings > Devices & Services > Add Integration.
      • Search for MQTT, select it, and add it.
    6. Autoconfigure Prompt:
      • It should prompt you to autoconfigure it with the Mosquitto broker you just installed.
      • Remember the order – install the Mosquitto broker from add-ons FIRST, THEN add the MQTT integration from Settings > Devices & Services > Add Integration, or MQTT may not auto-configure itself the same way.
        • Broker: core-mosquitto (since Mosquitto is running on Home Assistant OS). This will auto configure by default.
        • Don’t worry if the MQTT thing has no working configure buttons, those are as optional as the JTAG connector on a MacBook motherboard.
        • Port: 1883 (default MQTT port). This will auto configure by default.
        • Username and Password: Mosquitto broker allows Home Assistant users to log in so you don’t have to worry about this. When we enter this information into Frigate, we will be using the username & password we use to log into home assistant.

    5.6 Set Up Frigate Mobile App Notifications[edit | edit source]

    • Download Notification Blueprint:
    • You need this unless you want to be in hell writing YAML files yourself. You don’t want to do that, right? I thought so.

    5.7 Configure Automations for Camera and Notifications[edit | edit source]

    1. Access Automation Editor:

    2. Use Frigate Notifications Blueprint:

      • Click “Blueprints” at the top right.
      • Click “Frigate Notifications” which is what you want.
    3. Configure Automation:

      • Here you scroll down to choose your camera, and your mobile device, the name of the automation, etc.

      • The most important things to get right are the name of the camera & the mobile device; everything else you can customize however you like.

        NOTE: If your mobile device does not show up, log into Home Assistant on your phone and add it as a device to Home Assistant. It will prompt you to do this by default when you first set up the app. Then go back here and redo this step (you will have to close out of the window you just opened after clicking Blueprints → Frigate Notifications & reclick it so the dialog box for your phone will show your phone)

    4. Make sure MQTT is set up in the frigate config.yml file:

      • Make sure in Frigate’s Config menu, in the config.yml file, MQTT is set up as follows, with the username & password matching your homeassistant login, and your host matching the IP address of the home assistant server:
    mqtt:
      host: homeassistant.home.arpa
      port: 1883
      user: louis
      password: passwordman
    5. Enjoy Your New Frigate Integration with Home Assistant!

    Step 6: Making Frigate Secure[edit | edit source]

    NOTE: (if the complexities of docker networking confuse you, skip ahead to “steps”)

    Newer Frigate has username/password authentication, but it is so useless you will never want to log into it. That isn’t helpful.

    Older frigate has no authentication, so anyone who goes to http://192.168.5.2:5000 on your local network has admin access to everything. They can stop recording, delete recordings, have your setup record goatse, etc. VERY BAD.

    Further complicating things, our Frigate plugin on Home Assistant, at 192.168.5.4, needs to communicate with 192.168.5.2 in order to grab Frigate’s camera setup, on port 5000 – WITHOUT authentication. The communication to grab the camera setup is separate from the mqtt traffic. :( This makes it difficult to secure versions of Frigate that have a functioning UI.

    We can set up nginx as a reverse proxy – nginx receives all traffic on ports 80 & 443, redirects plain http on port 80 to https, and forwards it to Frigate on port 5000. We can add username/password authentication using nginx here, so that people need a password to view it. Then, we can block direct access to port 5000 so that only localhost can reach Frigate.

    But this means that Home Assistant won’t be able to connect to it – since it’s running on another machine. F&^!

    • Plan to set up username/password authentication for Frigate:
      • Use iptables to allow all traffic to port 5000 from 127.0.0.1 (localhost, the computer running Frigate), so that nginx can connect to Frigate.
      • Allow all traffic from 192.168.5.4, our Home Assistant virtual machine, to connect to Frigate on port 5000.
      • Block EVERYTHING ELSE on port 5000.
      • Set up nginx as a webserver on port 443 with https & ssl.
      • Tell nginx that anyone accessing the webserver needs to submit a username & password to get in.
      • Tell nginx to proxy anyone who enters that username & password on port 443 through to Frigate on port 5000.

    TL;DR

    • We’re telling everyone who wants to view the cameras they have to enter a username & password.
    • This allows you to view your cameras just fine.
    • This tells anyone who tries to get into your system without a password to gargle your balls.
    • This allows homeassistant to connect without being blocked.

    We have to do this on the machine itself: people on our LAN don’t talk to the router to reach Frigate, since they are on the same network, so router rules can’t protect it. These rules will be added on 192.168.5.2, aka happycloud.home.arpa, our machine that is running Frigate.

    6.1 Making iptables rules[edit | edit source]

    Allow established connections (makes https more stable, fitter, happier, more productive. Not eating too much)

    sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    Allow localhost access to port 5000:

    sudo iptables -A INPUT -i lo -p tcp --dport 5000 -j ACCEPT

    Allow Home Assistant access to port 5000

    sudo iptables -A INPUT -s 192.168.5.4 -p tcp --dport 5000 -j ACCEPT

    Block all other access to port 5000

    sudo iptables -A INPUT -p tcp --dport 5000 -j DROP

    Make sure Docker respects these rules

    sudo iptables -I DOCKER-USER -j RETURN

    Install the iptables-persistent package:

    sudo apt install iptables-persistent
    1. During installation, you’ll be asked if you want to save the current iptables rules. Choose Yes.
    2. If you’re not prompted, you can manually save the rules by running: sudo netfilter-persistent save
    3. YOU NEED TO INSTALL IPTABLES-PERSISTENT AND TELL IT TO SAVE YOUR RULES OR ELSE YOU HAVE TO RUN THIS EVERY TIME YOU BOOT!
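    For reference, after netfilter-persistent saves, /etc/iptables/rules.v4 should contain lines roughly like these (a sketch – actual iptables-save output varies by system, and Docker’s own chains plus packet counters are omitted). The order matters: iptables checks rules top to bottom and stops at the first match, so the ACCEPT rules must sit above the final DROP:

```
*filter
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -p tcp --dport 5000 -j ACCEPT
-A INPUT -s 192.168.5.4/32 -p tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp --dport 5000 -j DROP
COMMIT
```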

    6.2 Installing nginx[edit | edit source]

    Next up, it’s time to install nginx & everything necessary for us to have it ask for a username and a password to log in.

    1. Update your package lists:

      sudo apt update
    2. Install Nginx:

      sudo apt install nginx -y
    3. Create a Self-Signed SSL Certificate. Generate the certificate:

      sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt

    Note: For the Common Name (CN), use your local domain (e.g., happycloud.home.arpa).

    4. Create a Strong Diffie-Hellman Group. This makes security and https better, because we totally need more security on a LAN connection nobody else will be able to connect to besides your kid who’s trying to troll you.

      sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
    5. Create a Password File for Basic Auth. Install apache2-utils and create the password file:

      sudo apt install apache2-utils
      sudo htpasswd -c /etc/nginx/.htpasswd your_username
    6. Replace your_username with your desired username.
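    Side note: if you’d rather not install apache2-utils, openssl (which you already used above) can generate a compatible password file entry itself. A sketch – “louis” and “passwordman” are placeholder credentials:

```shell
# Build an htpasswd-style "user:hash" line using openssl's apr1 (Apache MD5) scheme.
HASH="$(openssl passwd -apr1 passwordman)"
ENTRY="louis:${HASH}"
echo "$ENTRY"

# To use it, append the line to the file nginx will read:
#   echo "$ENTRY" | sudo tee -a /etc/nginx/.htpasswd
```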

    6.3 Configure Nginx[edit | edit source]

    Create a new Nginx configuration file:

    sudo nano /etc/nginx/sites-available/frigate

    If this directory does not exist, your nginx package probably places configuration files in /etc/nginx/conf.d instead of using sites-available and sites-enabled – this is a packaging difference between distributions (Debian & Ubuntu packages use sites-available/sites-enabled) rather than an nginx version difference. In that case, create the file there, with a .conf extension so nginx picks it up:

    sudo nano /etc/nginx/conf.d/frigate.conf

    (If you end up in conf.d, you can also skip the ln -s step in section 6.4 – files there are loaded automatically, so just run nginx -t and reload.)

    Add the following configuration: remember to replace “happycloud.home.arpa” as well as “192.168.5.2” with the hostname & IP address of YOUR server!

    server {
        listen 80;
        server_name happycloud.home.arpa 192.168.5.2;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443 ssl;
        server_name happycloud.home.arpa 192.168.5.2;
        ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
        ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;
        ssl_session_timeout 10m;
        ssl_session_cache shared:SSL:10m;
        ssl_session_tickets off;
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;
        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
        location /ws {
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }
    }

    NOTE: Many open source projects suggest using nginx as a reverse proxy. They are kind & cordial enough to provide their own configuration files for you so you don’t have to write everything above & configure it yourself.

    While well meaning, many of them set the cipher (security thingie) manually, a throwback to the days when nginx used to default to insecure ciphers. So you may see old docs by developers that MEANT WELL to provide you a helping hand with stuff like this in their nginx configuration files:

        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384;

    This is bad. Remove things like this as long as you are using a modern version of nginx. Recommended ciphers change often, and manually pinning them is not a great thing to be doing. Also consider politely (POLITELY) mentioning to the devs who had that in there that this isn’t necessary anymore, since nginx no longer defaults to insecure ciphers.

    6.4 Enable the Nginx Configuration[edit | edit source]

    Enable the configuration and reload Nginx. The commands below do the following:

    • sudo ln -s /etc/nginx/sites-available/frigate /etc/nginx/sites-enabled/ takes our configuration file out of the “chamber” (sites-available) and into the breech (sites-enabled). Your configuration file in sites-available will not work unless it is also linked into sites-enabled. ln -s creates a symlink, similar to how a shortcut works in Windows.
    • sudo nginx -t checks our configuration file for errors.
    • sudo systemctl reload nginx loads the new configuration file without shutting nginx down.

    sudo ln -s /etc/nginx/sites-available/frigate /etc/nginx/sites-enabled/ 
    sudo nginx -t # This checks if config is bad & tells us what we did wrong
    sudo systemctl reload nginx
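    If symlinks are new to you, here is a throwaway demonstration you can run anywhere – the paths below are temporary scratch directories, not your real nginx directories:

```shell
# Recreate the sites-available / sites-enabled pattern in a scratch directory.
DEMO="$(mktemp -d)"
mkdir "$DEMO/sites-available" "$DEMO/sites-enabled"
echo "server {}" > "$DEMO/sites-available/frigate"

# "Enable" the config: the symlink is just a pointer back to the original file.
ln -s "$DEMO/sites-available/frigate" "$DEMO/sites-enabled/frigate"

# Reading through the link shows the original file's contents.
cat "$DEMO/sites-enabled/frigate"
readlink "$DEMO/sites-enabled/frigate"
```

    Edit the file in sites-available and the “enabled” copy sees the change instantly – that is all nginx’s sites-enabled directory is: a folder of pointers.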

    6.5 Test Frigate; does it require user/pass?[edit | edit source]

    • Try to log into http://192.168.5.2:5000 from other computers on your LAN. If it doesn’t work, you did good.
    • Your nephew can no longer replace your cameras with goatse.

    6.6 Make sure Home Assistant still connects to Frigate.[edit | edit source]

    1. Go over to Home Assistant at http://192.168.5.4:8123 or http://homeassistant.home.arpa:8123
    2. Go to Settings → Devices & Services → Frigate
    3. Click “ADD DEVICE” blue button on bottom right
    4. Enter the IP or hostname, along with port, of the machine running Frigate on port 5000 like such: http://192.168.5.2:5000 or http://happycloud.home.arpa:5000
    5. Click SUBMIT.
    6. If Home Assistant’s Frigate plugin can find your cameras by connecting to Frigate on port 5000, but no other computer on your LAN can, YOU DID GOOD.
    7. Go to http://192.168.5.2 – it should auto-redirect you to SSL https://192.168.5.2 & then ask for username & password.
    8. Enter your username & password.
    9. If you are now in Frigate, you done good.

    Replacing Google Drive, Photos, Docs, Sheets, & Keep[edit | edit source]

    Next up, we’ll be setting up a complete app suite so those of you used to iCloud for photos, Google Docs for online office, backup, etc., don’t feel like you’re making big sacrifices. The programs we’ll be installing are as follows:

    1. Immich, to replace Google Photos/iCloud Photos
    2. Onlyoffice, to replace Google Docs & Google Sheets
    3. Syncthing, to replace iCloud & Google Drive
    4. Samba, to allow easy access in any file explorer in any operating system to users connected via VPN
    5. Nextcloud Notes for a Google Keep-like notes system.

    Step 1: Making a new virtual machine[edit | edit source]

    We are going to create a second Ubuntu server virtual machine for our next task – setting up Immich, Onlyoffice, and Syncthing. These instructions are virtually identical to the instructions for installing a virtual machine for Mailcow.

    What makes this virtual machine installation different from Mailcow’s VM installation?[edit | edit source]

    We want more RAM & CPU power for this instance because:

    • Immich is going to transcode videos we upload to video proxies
    • Immich is going to run machine learning tasks on your photos (LOCALLY)
    • Immich is going to create thumbnails of our photos

    Note: What is a video proxy? Video proxies & photo thumbnails are smaller, more compressed versions of the original video or picture that allow you to load them quickly even when your internet connection is slow.
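    Immich handles this internally, but if you want a feel for what a video proxy is, this is roughly the kind of thing it does under the hood (the input filename is hypothetical, and these ffmpeg flags are one common way to make a smaller proxy, not Immich’s exact settings):

    ```shell
    # Transcode a big 4K phone video down to a 720p H.264 proxy that loads fast
    ffmpeg -i original_phone_video.mp4 \
      -vf scale=-2:720 -c:v libx264 -crf 28 -preset veryfast \
      -c:a aac -b:a 128k proxy_720p.mp4
    ```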

    Step 1: Setting up Virtual Machine Manager (virsh)[edit | edit source]

    1. In Virtual Machine Manager, click File > New Virtual Machine from the menu.

    1.1 Choose Installation Media[edit | edit source]

    • Select “Local install media (ISO image or CDROM)” and click “Forward”.
    • Click “Browse” to select your Ubuntu Server ISO.
    • Choose the ISO file you prepared earlier (e.g., /var/lib/libvirt/images/ubuntu-server.iso) and click “Forward”.

    1.2 Choose Operating System Version:[edit | edit source]

    • Virtual Machine Manager may automatically detect the OS. If not, search for ubuntu and choose what is closest to your version. When in total doubt, linux generic 2022 works.
    • Click “Forward”.

    1.3 Configure Memory and CPU:[edit | edit source]

    • Allocate the resources for your VM:
      • Set RAM: I would use at LEAST 75% of your machine’s RAM.
      • Set vCPUs: I would set this to at least 75% of your CPU’s cores.
    • Click “Forward”.

    1.4 Configure Storage:[edit | edit source]

    • Select Create a disk image for the virtual machine.
    • I would make this as large as you imagine your entire smartphone backup to be, plus extra for padding.
    • What is the size of ALL of your photos, videos, and files on your phone? That’s the size to choose here.
    • When I say videos, I do not mean things you want to watch at home/on your TV – we will have another setup for that. I mean your personal photo albums/videos recorded on your phone.
    • Make sure the disk image format is QCOW2. This format supports resizing and other cool features.
    • Click “Forward”.
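    Since we chose QCOW2, you can grow the disk later if you underestimated. A sketch (the image path is an example – use the path Virtual Machine Manager shows for your VM, and do this with the VM powered off):

    ```shell
    # Inspect the image, then add 100 GB to its virtual size
    sudo qemu-img info /var/lib/libvirt/images/androidstuff.qcow2
    sudo qemu-img resize /var/lib/libvirt/images/androidstuff.qcow2 +100G
    # Afterwards you still need to grow the partition, LVM volume & filesystem inside the guest.
    ```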

    1.5 Set Up Networking with the Bridge Interface[edit | edit source]

    • Choose “Specify shared device name” under “Network Selection”.
    • In the Device Name field, type br0 (or whatever name you have given your bridge interface).
    • This will allow the VM to grab a static IP from the same network as your host machine, making sure it acts like an independent hardware device.
    • Click “Forward”.

    1.6 Finish & Customize Before Installing[edit | edit source]

    • Name your virtual machine (e.g., “androidstuff”), something suitable for what this machine will do.
    • Before clicking “Finish”, check the box that says “Customize configuration before install”.
    • Click “Finish”.

    Step 2: Install Ubuntu Server as a Virtual Machine[edit | edit source]

    I will be blazing through this since we did this already once - refer to Installing Ubuntu Server with RAID 1, LVM, and LUKS Encryption above.

    Keep in mind the following:

    We are NOT using LUKS encryption here. There is no need since the image is going to be stored on an encrypted partition.

    We are NOT using RAID – this is a disk image that is being stored on a RAID array, so we are not doing that.

    We are configuring networking the same as we did before, but we will be using a different IP address!

    2.1 Start the installation process in the virtual machine[edit | edit source]

    Choose your language and select “Try or install Ubuntu Server”.

    Follow the installation prompts.

    2.2 Configure Static IP Address[edit | edit source]

    • When you reach the Network configuration screen, select the virtual machine’s network interface.
    • Choose the option “Configure network manually”.
    • Enter the following details:
      • IP Address: 192.168.5.5
      • Subnet: 192.168.5.0/24
      • Gateway: 192.168.5.1
      • Nameserver: 192.168.5.1
    • Make sure you enter all the details correctly so that the virtual machine has the correct static IP configuration.

    2.3 Partition the virtual “drive”[edit | edit source]

    • When you reach the Filesystem setup section, select “Use an entire disk” and then choose the disk you want to install Ubuntu Server on.
    • Choose the option “Set up this disk as an LVM group”.
    • Important: At this stage, edit the partition sizes. Ubuntu’s installer usually allocates 2 GB for boot, which is ridiculous, and even worse, it only uses half the available space for your LVM & root. The Ubuntu auto-partitioner is horrible.
    • Reduce the boot partition to 512 MB.
    • Delete the old LVM & root partition.
    • Create a new LVM taking up the entire disk.
    • Create a logical volume for the root filesystem, using all available space.
    • Do not encrypt the volume (it’s unnecessary since the host drive is already encrypted, and it is not my intention for you to have these VMs running on other people’s servers).

    2.4 Finalize installation & do not install docker[edit | edit source]

    • Set up your username and password.
    • Choose to install OpenSSH server.

    WARNING: DO NOT CHOOSE TO INSTALL DOCKER USING THE PROMPT AFTER THIS!

    • After configuring the partition sizes, proceed with the installation process as usual, following the prompts to set up any additional software you want to install.
    • Once the installation is complete, the system will automatically apply your network & partitioning settings.
    • When prompted, remove the installation media (ISO) disk image from the virtual machine settings.
    • Restart the virtual machine.

    2.5 Remove the CDROM[edit | edit source]

    • Go to View —> Details in Virtual Machine Manager
    • Go to “SATA CDROM” on the left side.
    • Confirm that the “source path” is the Ubuntu ISO we downloaded for installing Ubuntu Server on this virtual machine.
    • Click “remove” in the lower right corner.
    • UNCHECK “delete associated storage files” – we will use this image again later!
    • Click delete.
    • You may have to turn off the VM to do this.

    2.6 Set Up Static IP Mapping in pfSense:[edit | edit source]

    • Log into your pfSense router.
    • Go to Diagnostics > ARP Table.
    • Find the MAC address associated with your server’s IP (in our case, 192.168.5.5) and copy it.
    • Go to Services > DHCP Server.
    • Scroll to the bottom and click “Add Static Mapping”.
    • Enter the MAC address and IP address of your server.
    • Give it a descriptive name (such as “androidstuff static IP”).
    • Set the hostname to androidstuff.
    • Save and apply changes.

    Note: This reserves the IP address for this machine, so that no other device can take it (unless it spoofs MAC addresses, but if someone is doing that, that’s a different story).

    2.7 Set up this virtual machine to start at boot:[edit | edit source]

    Type the following into the terminal at happycloud, which is our main server that we are creating all of these virtual machines on at 192.168.5.2:

    virsh autostart androidstuff
    • Check that this is set up properly by typing virsh dominfo androidstuff and seeing if the autostart line is set to enable.
    • If you don’t do this, you will realize once it is too late & you’ve left your house after you have rebooted your server (for whatever reason) that none of your services are working. This will suck.
    • This command makes it so that the virtual machine starts each time we boot the computer.

    You’ve now successfully set up an Ubuntu Server virtual machine using Virtual Machine Manager, configured with a static IP address and LVM partitioning. We can use this virtual machine to set up our second server: Android backups, plus image search using machine learning & face detection with local models that don’t connect to the internet. EXCITED??? I AM! :D :D :D

    Step 2: Setting up Syncthing for android backups[edit | edit source]

    Step 1: Install syncthing[edit | edit source]

    1.1 Add the Syncthing Repository[edit | edit source]

    First, we need to add the Syncthing repository and its PGP key for package verification.

    1. Create a directory for the keyring:

      sudo mkdir -p /etc/apt/keyrings
    2. Download the Syncthing release PGP key:

      sudo curl -L -o /etc/apt/keyrings/syncthing-archive-keyring.gpg https://syncthing.net/release-key.gpg
    3. Add the Syncthing stable repository to your APT sources:

      echo "deb [signed-by=/etc/apt/keyrings/syncthing-archive-keyring.gpg] https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list

    1.2 Make Sure Syncthing Repository Takes Priority[edit | edit source]

    To make sure the system packages don’t take preference over the ones in the Syncthing repository:

    • Create a preferences file for APT:

      sudo nano /etc/apt/preferences.d/syncthing
    • Add the following content to the file:

      Package: *
      Pin: origin apt.syncthing.net
      Pin-Priority: 990
    • Save & exit the editor (in nano, press Ctrl+X, then Y, then Enter).
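    If you want to check that the pin took effect before installing anything, apt can show you where the syncthing package would come from (this assumes you completed the repository steps above):

    ```shell
    sudo apt-get update
    apt-cache policy syncthing
    # The candidate version should come from apt.syncthing.net, with priority 990
    ```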

    1.3 Install Syncthing[edit | edit source]

    Now that we’ve added the repository and made sure of its priority, let’s install Syncthing:

    • Update the package lists and make sure your system is up to date:

      sudo apt-get update
      sudo apt-get upgrade -y
    • Install Syncthing:

      sudo apt-get install syncthing -y

    Step 2: Setting Up Syncthing as a System Service[edit | edit source]

    To have Syncthing start automatically on system boot, even without user login, we’ll set it up as a systemd service that runs as our user, even if we haven’t logged in yet.

    2.1 Create a Systemd Service File[edit | edit source]

    1. Create a new service file:

      sudo nano /etc/systemd/system/syncthing@$USER.service
    2. Add the following content to the file:

      [Unit]
      Description=Syncthing
      Documentation=man:syncthing
      After=network.target
      
      [Service]
      User=%i
      ExecStart=/usr/bin/syncthing -no-browser -gui-address=0.0.0.0:8384
      Restart=on-failure
      RestartSec=5
      SuccessExitStatus=3 4
      RestartForceExitStatus=3 4
      # Hardening
      ProtectSystem=full
      PrivateTmp=true
      SystemCallArchitectures=native
      MemoryDenyWriteExecute=true
      NoNewPrivileges=true
      
      [Install]
      WantedBy=multi-user.target
    3. Save and exit the editor, hit Ctrl+X then Y to save.

    2.2 Configure the Service[edit | edit source]

    1. Enable the service:

      sudo systemctl enable syncthing@$USER.service
    2. Start the service:

      sudo systemctl start syncthing@$USER.service
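    To confirm the service actually came up (substitute your username if $USER doesn’t expand to what you expect):

    ```shell
    systemctl is-enabled syncthing@$USER.service   # should print "enabled"
    systemctl is-active syncthing@$USER.service    # should print "active"
    # Last few log lines, useful if it is not active:
    journalctl -u syncthing@$USER.service -n 20 --no-pager
    ```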

    Step 3: Securing Syncthing’s Web Interface[edit | edit source]

    By default, Syncthing’s web interface is accessible from any device that can reach your server. This makes it very important to secure the interface with a strong password.

    3.1 Access the Web Interface[edit | edit source]

    1. Open a web browser and navigate to http://192.168.5.5:8384 or http://androidstuff.home.arpa:8384.
    2. You should see the Syncthing web interface.

    3.2 Add a GUI Password[edit | edit source]

    1. In the web interface, click on the “Actions” button (gear icon) in the top right corner.
    2. Select “Settings” from the dropdown menu.
    3. In the Settings page, scroll down to the “GUI” section.
    4. Find the “GUI Authentication User” field and enter a username.
    5. In the “GUI Authentication Password” field, enter a strong password.
    6. Check “Use HTTPS for GUI” so we can visit the server using https://androidstuff.home.arpa:8384 instead. It’s a good habit. :)

    Note: Choose a complex password so that some random person who attaches to your home wifi (because you forgot to set up a guest network with no LAN access) can’t mess with your Syncthing configuration.

    1. Click “Save” at the bottom of the page.
    2. Syncthing will prompt you to confirm the changes. Click “Yes” to apply the new settings.
    3. You’ll be logged out and prompted to log in with your new credentials.
    4. Attempt to access the interface again. You should be prompted for the username and password you set. If not, you messed something up. Do not pass go, do not collect $200, until this asks you for a password to log in.

    Step 4: Configuring Syncthing Discovery Settings[edit | edit source]

    4.1 Understanding Discovery Methods & why we DON’T USE THEM.[edit | edit source]

    Discovery methods are how the syncthing app on your phone will “find” the server you set up as your backup server.

    NOTE: Our server has a static IP: 192.168.5.5. We went through the trouble to make sure it always lives at 192.168.5.5 via static mappings in pfSense and configuring a static IP in the server’s networking settings. Our server will always be present at 192.168.5.5 or androidstuff.home.arpa while we are connected via VPN. All Syncthing “discovery” is doing is trying to find our machine, but why use a find feature when we already know where it is? This adds another point of failure for no good reason! Think of it like making your iPhone invisible & then enabling “find my iPhone.”

    The setup we are installing Syncthing onto has the following:

    1. A static IP configured, so that it is always 192.168.5.5
    2. A static IP mapping configured in our router, so that no other device on our network can ever steal 192.168.5.5 from the computer running syncthing.
    3. A static hostname of androidstuff that does not change.
    4. Dynamic DNS for our main internet connection, so when we are outside our network our pfSense router & FreeDNS will make sure that louishomeserver.chickenkiller.com always points to our home network IP address.

    I will showcase local discovery failing on video. It “works” when I initially connect to my server via QR code & visiting it in the browser, but fails when I try to connect again. This is because my VPN is on network 192.168.6.0/24 and my Syncthing is on 192.168.5.0/24. I was hoping local discovery would be “smart” enough to remember the last IP address my server was on since it had not changed, but it did not.

    NEVER RELY ON SOMETHING ELSE TO BE “SMART” IN SOLVING A PROBLEM THAT DOES NOT HAVE TO EXIST IN THE FIRST PLACE!

    4.2 Local Discovery – DO NOT TRUST![edit | edit source]

    Local discovery allows Syncthing to find other devices on your local network automatically. Key word, local – meaning your subnet of 192.168.5.0/24. What if you connect via your VPN, which is on 192.168.6.0/24?

    When we first add the QR code of our Syncthing instance to our Android phone Syncthing app, Syncthing will connect to our desktop server running Syncthing. HOWEVER: our Android application will NOT find the Syncthing server the NEXT time we connect. THIS IS BAD!!

    This is even worse than it not working at all, as it will give the false impression that it works. This is how people who have set up “backup solutions” end up as customers of Rossmann Repair Group paying $2000 to recover a hard drive that fell off a balcony.

    Connecting Reliably to Syncthing without Discovery Hassles[edit | edit source]

    This situation is actually worse than if Syncthing had no Local Discovery feature at all. If it didn’t work from the start, you’d know you couldn’t rely on it and would just hardcode the IP of your Syncthing server right into your Android app, using the server’s local IP to connect directly.

    What’s dangerous is that Syncthing’s Android app connects the first time by scanning the QR code on the server, making it seem like it’s actually discovering your computer. But it’s not. Next time you try to connect—especially if you’re on a different subnet via VPN—it’ll fail to find the server.

    Syncthing doesn’t even remember the last IP address it used, so it ends up trying to rediscover it, failing again.

    I get it. If it can’t find the server on a different subnet when you’re using a VPN, fine, but it’s dangerous that Syncthing doesn’t try the last known IP to see if it still works.

    TL;DR – to avoid becoming a data recovery customer, don’t trust local or global discovery. Just use the IP address of the server, which in our case is 192.168.5.5, and check that it works three separate times under three separate conditions before ever assuming that it is working, as you should with ANY backup solution!

    4.3 Global Discovery[edit | edit source]

    Global discovery helps Syncthing find your devices over the internet. It works by periodically announcing your device’s presence to global discovery servers.

    • Privacy Implications: Higher risk, as it involves sharing your device’s information with external servers. This could potentially expose:
      • Your IP address
      • The fact that you’re using Syncthing
      • When your device is online

    The bigger issue with this is not privacy, it’s that it is unnecessary and adds another point of failure over entering the hostname manually.

    4.4 Configuring Discovery Settings[edit | edit source]

    1. Access Syncthing Settings
      1. Open the Syncthing web interface (typically https://192.168.5.5:8384 or https://androidstuff.home.arpa:8384).
      2. Click on the “Actions” button (gear icon) in the top right corner.
      3. Select “Settings” from the dropdown menu.
    2. Adjust Discovery Settings
      1. In the Settings page, scroll to the “Connections” section.
      2. Find the following options:
        • Enable Local Discovery: Keep this checked.
        • Enable Global Discovery: Uncheck this box.
      3. Click “Save” at the bottom of the page.
      4. Syncthing will prompt you to confirm the changes. Click “Yes” to apply the new settings.
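    If you want to double-check outside the GUI, the setting lives in Syncthing’s config.xml (the path varies by version – older builds use ~/.config/syncthing/, newer ones ~/.local/state/syncthing/):

    ```shell
    grep -h "AnnounceEnabled" \
      ~/.config/syncthing/config.xml \
      ~/.local/state/syncthing/config.xml 2>/dev/null
    # After saving your changes, <globalAnnounceEnabled> should read false
    ```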

    Step 5: Connecting server syncthing to android syncthing[edit | edit source]

    5.0 – Connect to your VPN.[edit | edit source]

    Your android phone must be connected to your VPN for you to connect to your server if your phone is not on the same wifi network as the virtual machine running the syncthing server.

    5.1 Install syncthing from the f-droid store.[edit | edit source]

    5.2 Avoid becoming a data recovery customer[edit | edit source]

    Delete the Camera Folder: Not from the device, just from the sync list, within syncthing. Tap on the camera folder & hit the trash bin in the upper right.

    There’s a good reason for that. You might think, “Why? I WANT to sync and back up my photos and videos!!” Here’s the thing: sometimes, camera apps switch folders without you knowing. I’ve seen cases where photos were saved in a different folder INSIDE the DCIM folder, and the gallery app only showed one specific folder.

    I’m not a predatory technician who bills people $3000 for a bad iPhone screen or charge port. But they are out there, and someone came close to paying $500 to a scam artist data recovery company because their gallery app wasn’t checking a 2nd folder inside of the DCIM folder where another program was saving photos.

    We are not going to back up the camera folder within the DCIM folder. We are going to back up the entire DCIM folder.

    For those who don’t know, on 99% of Android devices, DCIM is a folder in the root directory of the “visible” filesystem within which the subfolders storing your recorded videos & pictures reside.

    Next, I am going to do something different. I wanted to show you what happens when you use local discovery/dynamic rather than inserting your actual server IP address into the server field. This meant including screenshots from a LATER step, after I had already added folders that we are going to sync, to show you how syncthing fails with local discovery. It’s important to me that you understand how this fails with images for yourself, so you don’t create a setup that makes you a data recovery customer.


    Here is what will happen if you set this up with dynamic, disconnect, and then reconnect. Note how it shows up as “idle” for syncing and “disconnected” on the android phone; it is transferring NOTHING, even though the desktop syncthing server GUI shows that we are out of sync.

    5.3 Add a device to syncthing android app[edit | edit source]

    1. On the top, you’ll see Folders and Devices.
    2. Tap Devices.
    3. Tap the plus in the upper right corner to add a device.
    4. Tap the QR code next to Device ID in the upper right.
    5. Go back to the Ubuntu Server Syncthing Web Interface.
    6. Obtain Device ID and QR Code
      • In the web interface, click on the blue gobbledygook of numbers & letters next to “Identification” under “This Device”, or use the “Actions” menu (gear icon) in the top right.
      • Select “Show ID”.
      • You’ll see a QR code and the device ID. SCAN YOURS. DO NOT SCAN MINE. I SHOWED A PICTURE OF MINE SO YOU CAN SEE WHAT IT LOOKS LIKE.
    7. Configure Device Settings on Android
      • Device Name: Enter a recognizable name (e.g., “Ubuntu Server”).
      • Addresses: DO NOT CHOOSE DYNAMIC. USING DYNAMIC WILL CAUSE IT TO NOT SYNC WHEN YOU DISCONNECT & RECONNECT FROM YOUR NETWORK. IT WILL WORK THE FIRST TIME, AND THEN NEVER SYNC AGAIN, AND YOU WILL BE PAYING DATA RECOVERY DOUCHEBAGS TO RECOVER YOUR PHONE.

    How dynamic failed: I used “dynamic” as an example of why it doesn’t make sense to use autodiscovery when you KNOW where your server is. I chose dynamic, and it connected & worked. When I disconnected from my network & reconnected, the Devices tab in the Syncthing Android app showed me to be disconnected and the Folders tab showed the folders to be idle even though the web GUI for Syncthing said that my folder was Out of sync and Remote Devices showed my phone as Disconnected.

    • FILL IN “Address” when adding a device as follows, if you used the setup from this guide:

      tcp://192.168.5.5:22000
    • OR

      tcp://androidstuff.home.arpa:22000
    • The format is tcp://, then your IP address, then :22000 for the port.

    • No need to check “Introduce new devices”.

    • Did you include the tcp:// at the beginning, and the :22000 at the end for the port? You’d better have!

    • Save and continue.

    1. Approve the Connection on Ubuntu Server

      • Return to the Ubuntu Server web interface.

      • You should see a prompt to add a new device.

      • Verify the Device ID matches your Android device.

      • Click “Add Device”.

      • Set a name for the Android device (e.g., “Android Phone”).

      • Click “Save”.

    2. Check the Connection

      • On both devices, check that the other device appears as connected. The connection might take a few moments to establish.

    Note: Make sure that port 22000 (or your configured Syncthing port) is open in your Ubuntu Server’s firewall for incoming connections from your local network. By default, ufw is not running and blocking things when you first boot Ubuntu Server, but that may change at a later date, the same way they snuck in the suggestion of pre-installing a snap version of Docker.
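    A quick way to test reachability of the sync port from another machine on your LAN or over the VPN (192.168.5.5 assumes this guide’s addressing):

    ```shell
    # Uses bash's built-in /dev/tcp, so no extra tools are required
    timeout 5 bash -c 'cat < /dev/null > /dev/tcp/192.168.5.5/22000' \
      && echo "port 22000 reachable" || echo "port 22000 NOT reachable"
    ```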

    Now you’ve added your Ubuntu Server Syncthing instance to your phone; no open ports, will sync whenever you are on wifi with your VPN on, and continuously back up your phone. Beautiful. :)

    REMEMBER – DO NOT SET “ADDRESSES” TO “DYNAMIC” – TAP “DYNAMIC” AND REPLACE IT WITH tcp://youripaddress:22000 REPLACING “youripaddress” WITH THE IP ADDRESS OF THE VIRTUAL MACHINE THAT IS RUNNING SYNCTHING.

    Step 6: Configuring Syncthing for Organized Android Backups[edit | edit source]

    6.1 Configure Android Syncthing App[edit | edit source]

    1. Open Syncthing on your Android device.
    2. For each folder you want to sync:
      • Tap the plus icon in the upper right in the folders part of the app.
      • Tap folder label and label it.
      • Tap the directory and choose your directory you want to sync (it’ll let you choose everything besides the download folder on android).
    3. MAKE SURE TO TOGGLE THE SERVER SWITCH UNDER WHERE YOU TAPPED TO CHOOSE THE DIRECTORY YOU WANTED TO SYNC SO THAT IT ACTUALLY BACKS UP.
      • Choose send & receive if you want two-way folder sync.
      • Choose send if you want it to only send files to your server.
      • Choose receive if you only want it to receive files from your server

    A good rule of thumb: For smaller folders and stuff you transfer to your phone to read on a trip, audiobooks, etc., I choose SEND & RECEIVE so I can transfer both ways. For stuff like videos I record and photos I take (the DCIM folder), I choose SEND ONLY. I have a 256 GB phone, and over 1.3 terabytes of videos I have recorded… I can’t sync all of that to my phone or it will fill up. But I have less than 1 GB of audiobooks, books, and max 20 GB of movies I am watching at any given time on my phone.

    1. Tap checkbox in upper right corner when done.

    6.2 Syncing on wifi only – yes or no?[edit | edit source]

    Your Android device can connect to Syncthing, and you can configure Syncthing while you’re on the go. But by default, your Android device must be on wifi in order for file transfer and backup to occur. Even if you are connected to your VPN, your Android device is not going to transfer files if you are not on wifi.

    The way you change this is by editing the folder settings in the Syncthing Android app, and disabling the “sync on wifi only” option. I would suggest doing this for folders with SMALL files like documents, audiobooks, and not for folders with LARGE files like the DCIM folder with your recorded videos and camera pictures.

    “Unlimited” plans have data caps; try using 200 GB in 10 days on any “unlimited” wireless data plan in the United States and watch your “unlimited 5G” turn into a 56k modem. The only reason they can market using this wankery is because consumer protection law in the United States is a joke.

    6.3 Accept Folders on Ubuntu Server[edit | edit source]

    1. On the Syncthing web interface of your Ubuntu server, you’ll see notifications for new folders.
    2. For each folder: Click “Add”.
    3. CHANGE THE BASE DIRECTORY FROM ~/(foldernamehere) to ~/androidbackup/(foldernamehere) so you don’t clog up your base directory. This makes it easy to see in one click what everything we’re backing up from the android phone is.
    4. Click Save.

    6.4 Creating New Folders on Ubuntu Server[edit | edit source]

    • It does it for you. What a beautiful program, right? :)

    Step 7: Verify and Test – INSPECT WHAT YOU EXPECT![edit | edit source]

    Don’t become a data recovery customer. Syncthing is used for backing up your phone - arguably the most important part of this entire process.

    1. 99% of the people who show up for data recovery at a data recovery business thought their data was backing up.
    2. It was not.
    3. Use common sense, look through the folders on your server, look at the web interface, make sure things open.

    You now have working Android backups!

    • All folders from your Android device will be organized within the ~/androidbackup directory.
    • Each Android folder will have its own subdirectory for better organization.
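    “Inspect what you expect” can be partly scripted. A sketch of a server-side spot-check (paths assume the ~/androidbackup layout from this guide):

    ```shell
    # How big is each backed-up folder?
    du -sh ~/androidbackup/*

    # Did anything sync in the last 24 hours? (Take a photo, wait, then run this.)
    find ~/androidbackup -type f -mtime -1 | head -n 20
    ```

    Scripted checks are a supplement, not a substitute: still open the actual photos and files to confirm they are intact.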

    Step 3: Installing ONLYOFFICE Workspace and WsgiDAV to replace Google Docs[edit | edit source]

    So we have Syncthing, but how do we edit documents we have on our backup server? SSH in? vi? nano?

    No.

    We are going to use the same virtual machine for this that we used for Syncthing and install something called ONLYOFFICE.

    Nextcloud?[edit | edit source]

    The first thing many people are going to suggest is Nextcloud. Nextcloud is that all-in-one cloud suite that will change your contacts from read-write to read-only so that your contacts get deleted when you update (without telling you, of course), and that can’t tell time. Would it surprise you if I told you that it is miserably slow, and that it gave errors unless you clicked a separate submenu to open a document?

    Moving to OnlyOffice[edit | edit source]

    OnlyOffice is fast, and it is used by people who actually pay them. This means that their software has to work, and it does!

    Step 0: Install docker properly.[edit | edit source]

    Never use Ubuntu’s snap version of docker[edit | edit source]

    Ubuntu’s installer offers to install Docker using the cancerous snap. We do not want to use snap. When the Ubuntu installer asks if you want to install Docker, always say No.

    Doesn’t onlyoffice’s install script install docker for me?[edit | edit source]

    Onlyoffice’s installation script DOES install docker for you. I am still going to have you do it manually.

    • If you choose to not install onlyoffice, and wish to install Immich, I want you to know how to install docker on this virtual machine yourself.
    • I don’t want to rely on onlyoffice’s script. It won’t install docker for us if it detects Docker already, so we’re not going to do a double install. What if onlyoffice’s installation script stops installing docker the same way in a new version, or stops installing docker at all within its script?

    It takes little work to install Docker the right way manually, and it’s good to have it documented so that you can use Docker for Immich even if you elect not to install Onlyoffice.

    0.1 Update and upgrade your system[edit | edit source]

    sudo apt update && sudo apt upgrade -y
    sudo apt install curl git wget -y

    0.2 Check for other Docker installations:[edit | edit source]

    Run docker --version and see what is installed. Nothing should be installed yet since this is a fresh system. If something is installed, remove it.

    # Just in case you accidentally installed the snap version of docker:
    sudo snap remove docker
    
    # For other versions of docker:
    sudo apt remove docker docker-engine docker.io containerd runc

    0.3 Install Docker using official Docker script:[edit | edit source]

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    Note: It’s very important to use the official Docker installation and not the Snap version. The Snap version can cause issues due to its sandboxed nature, making it a mess for the software we are about to run. Docker snap makes me sad, and it’ll make you sad too if you try to make things work with it.

    0.4 Install Docker Compose:[edit | edit source]

    Ubuntu’s docker-compose-plugin is safe to use, it is not snap cancer.

    sudo apt install docker-compose-plugin -y
    sudo systemctl enable --now docker

    0.5 Verify the install[edit | edit source]

    Run docker compose version and make sure the version is 2.0 or higher. Run docker --version and make sure the version is 24.0.0 or higher.
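    The checks above in one go, plus an end-to-end test that the daemon can actually pull and run a container:

    ```shell
    docker --version          # want 24.0.0 or higher
    docker compose version    # want v2.0 or higher
    sudo docker run --rm hello-world   # prints a greeting if the daemon works
    ```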

    0.6 Set proper permissions:[edit | edit source]

    Docker needs to be run as root for some operations, but you can add your user to the docker group to avoid using sudo all the time. To be clear, mailcow’s own documentation and community suggest starting with root or sudo, and you should trust them more than me. To quote mailcow developers, “Controlling the Docker daemon as non-root user does not give you additional security. The unprivileged user will spawn the containers as root likewise. The behaviour of the stack is identical.” Run this command to add your user:

    sudo usermod -aG docker $USER

    Log out and log back in, or run: newgrp docker
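    After logging back in (or running newgrp docker), verify your user can talk to the daemon without sudo:

    ```shell
    id -nG | grep -qw docker && echo "in docker group" || echo "NOT in docker group"
    docker ps   # should print an (empty) container table, not a permission error
    ```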

    Step 1: Install ONLYOFFICE Workspace Community Edition[edit | edit source]

    It is very important that you follow the right steps. OnlyOffice’s website is a minefield of documentation that will lead to broken installations, even if you follow their instructions.


    You’re going to avoid that open-source hellscape by installing like this:

    1. SSH into the androidstuff virtual machine we created at 192.168.5.5

      ssh [email protected]
    2. Download the ONLYOFFICE Workspace installation script:

      wget https://download.onlyoffice.com/install/workspace-install.sh
    3. Make the script executable:

      chmod +x workspace-install.sh

      This changes the file permissions to allow execution.

    4. Run the installation script:

      sudo bash workspace-install.sh -it WORKSPACE -md fakedomainname.com

      Replace fakedomainname.com with the actual domain name you set up in the mailcow section. You can also leave out -md entirely if you don’t want to set up mail here.

    CAUTION: Instructions within documentation on OnlyOffice website will lead to a broken installation. Use the command line above so it actually works.

    5. Once this is done, log in by going to http://192.168.5.5

    6. It will prompt you to make a username and a password. Go for it.

    7. Once logged in, make an HTTPS SSL certificate so we can log in via HTTPS:

    Go to Control Panel, the big icon on the main home screen.

    • Go to HTTPS on the top of the left menu.
    • Click Generate and apply.
    • Be happy.

    Step 2: Local file access[edit | edit source]

    Once you’re in, you’ll set up everything. Enter a password, agree to the terms of the license, and you’re good to go. I suggest entering administration settings and setting up HTTPS - it will make a self-signed certificate for you!

    2.1 Diving into “open sourcey” software[edit | edit source]

    You can open a sample document. But what if I want this workspace server to be able to access files stored on the server?? I want to open a document that’s on this computer; here’s where the fun begins. :)

    2.2 The Rabbit Hole to hell for Local File Access[edit | edit source]

    So, where do I go? There’s “Shared with me,” “Favorites,” “Reasons,” “Private room,” “Common in projects,” and “Invite users to Portal.” Maybe the settings? Let’s try that. Administrator profile settings, control panel… and oh, look, “Storage” - maybe I can add a local directory!!! … no, it’s all a mirage.

    Open Sourcism: You can’t just open a document from your server. It’s not a feature. You need to pass the direct URL to the document using a WebDAV server. Can you believe this? Welcome to the world of open source software!

    2.3 Mounting volumes in Docker failed me[edit | edit source]

    I went down the rabbit hole to figure this out when I tried weaning myself off Nextcloud a few years ago.

    When you choose to install with Docker, there’s a script that gets downloaded. I explored the directory where this is installed—onlyoffice—and found the document server, control panel, community server, MySQL setup, and mail server.

    In the document server, there’s a data directory. So, I thought, “Surely, I can mount it as a volume using Docker.” I searched for :rw to find where they’re specifying all the Docker volumes. It looks like a typical Docker Compose YAML file. I tried adding an argument for my directory, like /home/louis/Documents, and mounted it in almost every possible location.

    Important Note: The problem isn’t that the volume isn’t mounted. The issue is that this feature was never implemented in the software. They never thought a document server would need to access files on the machine it runs on. This is, again, the most open sourcey thing I’ve seen in a long time.

    2.4 Fighting open source & winning[edit | edit source]

    There’s a way to get files into this, but it won’t be immediately obvious. Going back to settings, there’s a menu called “Connected clouds”; we will use this to connect a WebDAV server to serve ourselves files.

    We have to set up a webdav server, on our server, to serve files to the same virtual machine.

    The whole idea of cloud server software is that you should be able to edit your documents in the cloud. No matter what computer you’re on, your files should be right there. But… my cloud server software can’t even read the files from my cloud server computer. Even if I mount those directories within the Docker volume, it still won’t work. The software wasn’t designed to see items in its own document data directory. But wait, it gets better.

    2.5 The “Solution”[edit | edit source]

    There’s a workaround for this. You can connect a new cloud that you create, within your cloud. Schrödinger’s cloud.

    1. Go through the settings and head to the control panel.
    2. You’ll see something called storage. You might think, “Oh, that’s where I can change things, right?” Wrong. There’s nothing there for connecting to local storage.
    3. Go back and find the connect button. It’s on the home screen under documents.
    4. Click “Connect” and we’re going to connect another cloud to our cloud.

    We’re going to create a WebDAV server on our computer to feed files over to OnlyOffice. It’ll look like your directories are available, like it’s reading them off your computer, but we’re actually using WebDAV.

    2.6 Setting Up WebDAV[edit | edit source]

    We’re setting up a separate server to feed files to our server, on our server. There’s this small Python program called wsgidav. It’s a lightweight WebDAV server, far less hassle than setting up Apache or Nginx.

    2.7 The Directory Problem[edit | edit source]

    Let’s say I want two directories: a documents directory and an Android backup directory. I can’t map both to WebDAV like you can in a Docker container. You can only log into one at a time.

    Imagine having five different directories in one Docker volume but only being able to use one at a time. You’d have to log in differently each time.

    You might think, “Louis, just create a new directory and symlink all the directories you want into it. What’s the problem?” Well, here’s where the open source rabbit hole goes deeper… the documentation for the software has an option called follow_symlinks. You can set it to true, but it doesn’t work; not unless you install the GitHub version, because the release you get from pip doesn’t implement it properly.

    Warning: This will gaslight you to tears. You’ll pull your hair out wondering if you set up your symlinks right. It’s like a mirage—everything looks like it should work, but it doesn’t. I’m here to remind you that you are not insane.

    As Ralph Kramden would say, it doesn’t mean to be mean; it was just born that way.

    I promise, this is all worth it to never have to use Nextcloud again. This is still better than Nextcloud, which tells you how bad Nextcloud is.

    Step 3: Setting Up a WebDAV Server on GNU/Linux[edit | edit source]

    3.1 Install and Configure WsgiDAV[edit | edit source]

    WsgiDAV is a WebDAV server implementation written in Python.

    1. Install WsgiDAV and its dependencies:

      sudo apt install python3-pip python3-dev libssl-dev libpam0g-dev -y
      sudo pip3 install cheroot six python-pam
      sudo pip3 install git+https://github.com/mar10/wsgidav.git

      CAUTION: Do not install the PyPI release of WsgiDAV, as the follow_symlinks option does not work in it; install from GitHub as shown above. These commands install pip, Python and SSL development files, the PAM bindings used for authentication, Cheroot (a WSGI server), and WsgiDAV itself from GitHub.

    2. Create WsgiDAV configuration directory:

    sudo mkdir -p /etc/wsgidav
    3. Generate an SSL certificate for WsgiDAV:

     sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/wsgidav.key -out /etc/ssl/certs/wsgidav.crt

    This creates a self-signed SSL certificate. In a production environment, use a certificate from a trusted Certificate Authority. When having localhost connect to localhost in your closet… this will do.
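    If you’d rather rehearse that command before touching /etc/ssl, here is a non-interactive variant run against a temp directory. The -subj value (wsgidav.local) is a placeholder of my choosing; the plain command above will instead prompt you for those identity fields interactively:

```shell
# Dry-run variant of the certificate command: -subj answers the identity
# prompts non-interactively, and a temp directory is used so nothing under
# /etc/ssl is touched until you're happy with the result.
dir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=wsgidav.local" \
    -keyout "$dir/wsgidav.key" -out "$dir/wsgidav.crt" 2>/dev/null

# Inspect the certificate you just generated:
openssl x509 -in "$dir/wsgidav.crt" -noout -subject -dates
```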

    4. Create and edit the WsgiDAV configuration file:

      sudo nano /etc/wsgidav/wsgidav.yaml

    5. Add the following content to the configuration file, replacing /home/louis/webdavroot with the directory you will use for documents:
      host: 0.0.0.0
      port: 8080
      ssl_certificate: /etc/ssl/certs/wsgidav.crt
      ssl_private_key: /etc/ssl/private/wsgidav.key
      enable_https: true
      
      
      fs_dav_provider:
          follow_symlinks: true
      
      provider_mapping:
          '/webdav': '/home/louis/webdavroot'
      
      
      http_authenticator:
        domain_controller: wsgidav.dc.pam_dc.PAMDomainController
        accept_basic: true
        accept_digest: false
        default_to_digest: false
      
      pam_dc:
        service: "login"
        allow_users: "all"
      
      verbose: 3
      
      property_manager: true
      lock_storage: true
      
      middleware_stack:
        - wsgidav.error_printer.ErrorPrinter
        - wsgidav.http_authenticator.HTTPAuthenticator
        - wsgidav.dir_browser.WsgiDavDirBrowser
        - wsgidav.request_resolver.RequestResolver
      
      dir_browser:
        enable: true
        icon: true
        response_trailer: true

    This configuration sets up SSL, defines shared directories, and configures authentication.

    6. Create a systemd service file for WsgiDAV:

      sudo nano /etc/systemd/system/wsgidav.service

    7. Add the following content to the service file:

      [Unit]
      Description=WsgiDAV WebDAV Server
      After=network.target
      
      [Service]
      ExecStart=/usr/local/bin/wsgidav --config=/etc/wsgidav/wsgidav.yaml
      Restart=always
      
      [Install]
      WantedBy=multi-user.target

      This creates a systemd service for automatically starting WsgiDAV.

    8. Set correct permissions for the configuration file:

      sudo chown root:root /etc/wsgidav/wsgidav.yaml
      sudo chmod 644 /etc/wsgidav/wsgidav.yaml

      This makes sure only root can modify the configuration file.

    9. Enable and start the WsgiDAV service:

      sudo systemctl enable wsgidav.service
      sudo systemctl start wsgidav.service

      This enables the service to start on boot and starts it immediately.

    Now, it’s time to go back to the onlyoffice window we were at before to enter the WebDAV server information. See how mine is /webdav? That’s because of the provider_mapping line in the WsgiDAV configuration file, explained in the next section.

    3.2 Understanding file locations[edit | edit source]

    These lines in the WsgiDAV configuration file are responsible for setting the directory that onlyoffice will see on our system. Obviously, if your name is not louis, yours will be different. Edit it accordingly.

    provider_mapping:
        '/webdav': '/home/louis/webdavroot'

    Remember, WsgiDAV will only let me have one directory that I can get into when I start it up. The way I got around this was as follows, so that my Documents directory and my androidstuff directory would both be visible to onlyoffice:

    ln -s /home/louis/Documents /home/louis/webdavroot
    ln -s /home/louis/androidstuff /home/louis/webdavroot

    Now, my Documents folder in my home directory as well as my androidstuff syncthing backup directory with all of my phone’s files will be viewable by onlyoffice!
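    If you want to see exactly what this layout looks like before touching your home directory, here is the same pattern run against a throwaway directory (the paths are stand-ins for your real ones):

```shell
# The symlink layout, demonstrated against a throwaway directory so you can
# see what WsgiDAV ends up serving. Substitute your real paths.
base=$(mktemp -d)
mkdir -p "$base/Documents" "$base/androidstuff" "$base/webdavroot"

# -s: symbolic link, -f: replace a stale link, -n: treat an existing
# link-to-directory as a file rather than descending into it
ln -sfn "$base/Documents"    "$base/webdavroot/Documents"
ln -sfn "$base/androidstuff" "$base/webdavroot/androidstuff"

ls -l "$base/webdavroot"
```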

    3.3 Configure Firewall (UFW)[edit | edit source]

    UFW (Uncomplicated Firewall) provides a user-friendly interface for managing iptables. There is no need for anything besides onlyoffice to ever contact our WebDAV server, so we are going to make sure only localhost can contact our WebDAV server.

    If you think this is ridiculous, it is. Onlyoffice needs to let me access files on my local server that are already there.

    1. Allow all outgoing traffic:

      sudo ufw default allow outgoing
    2. Allow incoming traffic on port 8080 from specific sources:

      sudo ufw allow from 192.168.5.5 to any port 8080 proto tcp
      sudo ufw allow from 127.0.0.1 to any port 8080 proto tcp
      sudo ufw allow from 172.17.0.0/16 to any port 8080 proto tcp
      sudo ufw allow from 172.18.0.0/16 to any port 8080 proto tcp

      This allows HTTPS traffic to WsgiDAV only from specific IP ranges.

    3. Enable the firewall:

      sudo ufw enable

      This activates the firewall with the configured rules.
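    If you’re wondering what those two 172.x /16 rules actually match: a /16 that lands on a dotted-quad boundary just means “the first two octets are equal.” Sketched as a hypothetical shell helper (not part of ufw, purely an illustration):

```shell
# What the two 172.x /16 ufw rules amount to: a plain prefix match on the
# first two octets. Hypothetical helper for illustration only.
in_docker_nets() {
    case "$1" in
        172.17.*|172.18.*) return 0 ;;
        *)                 return 1 ;;
    esac
}

in_docker_nets 172.17.0.5  && echo "172.17.0.5 would be allowed"
in_docker_nets 203.0.113.9 || echo "203.0.113.9 would be refused"
```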

    Step 4: Make sure this works[edit | edit source]

    • Open onlyoffice and try to open files from the WebDAV cloud you just connected.

    Step 5 (optional): set up email in onlyoffice[edit | edit source]

    Viewing email right in the web browser[edit | edit source]

    If you set up onlyoffice as an email client for your mailcow server, you can view your email within onlyoffice. This means you can open documents directly within onlyoffice within the browser tab where you have your email loaded. Very nice!

    FreePBX and UniTel SIP Trunking Setup[edit | edit source]

    Introduction[edit | edit source]

    Just like with self-managed mail, this will be high maintenance, low reward, and a very bad idea; like anything worth doing. This guide provides detailed instructions on setting up a FreePBX system with UniTel SIP trunking.

    Why Customize Your Phone System?[edit | edit source]

    CallerID hacks to make calls go faster[edit | edit source]

    One of the fun things you can do with this setup is integrate it with your customer relationship management software like I did with repairshopr. So, instead of the usual caller ID, you can have the status of a customer’s ticket show up. Back when I was the only one doing repairs at my store, this was a lifesaver. Most calls were simple status checks, and I could handle them while soldering, thanks to a Bluetooth headset.

    • Caller ID Customization: Instead of just a name, I saw ticket status in the caller ID too!
    • Efficiency: I could handle calls without stopping my work!
    • Customer Satisfaction: Instant info made customers feel like you know them better than they know themselves.

    Automatically send mean customers to an extension where Allison Smith tells them to go fuck themselves :D[edit | edit source]

    Rossmann Repair has never made use of this feature.

    Make telemarketers miserable by installing a program that messes with them: Lenny[edit | edit source]

    The customization possibilities are endless, and that’s what makes this so much fun. Now, let’s get into how to build your own system.

    Step 1: Set up a FreePBX virtual machine[edit | edit source]

    1.1 Download Debian 12 ISO[edit | edit source]

    You used to download FreePBX as its own distro, which was based on CentOS. They switched to Debian after some recent CentOS/Red Hat controversy.

    1. Open a terminal window or use a web browser within your happycloud server that is running Virtual Machine Manager to host all of your virtual machines. In our case, that’s 192.168.5.2.

    2. Download the Debian 12 netinstall ISO:

      wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.0.0-amd64-netinst.iso -P ~/Downloads
    3. Make sure the download completes successfully. Note that the current/ directory only carries the latest point release, so if 12.0.0 has been replaced, browse the directory listing for the current filename and use it throughout these steps.
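    It’s also worth verifying the ISO against Debian’s published checksums; the SHA256SUMS file sits in the same directory as the ISO on cdimage.debian.org. A sketch of the mechanics, using a throwaway file in place of the real ISO so it can be tried safely:

```shell
# Sketch of checksum verification. In real use you download the SHA256SUMS
# file that Debian publishes next to the ISO; here we generate one ourselves
# against a stand-in file just to show the commands.
dir=$(mktemp -d)
cd "$dir"
echo "pretend this is an iso" > debian-netinst.iso

sha256sum debian-netinst.iso > SHA256SUMS   # upstream publishes this file for you
sha256sum -c SHA256SUMS                     # prints "debian-netinst.iso: OK"
```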

    1.2 Move the Debian ISO to the Correct Directory[edit | edit source]

    1. Move the downloaded ISO to /var/lib/libvirt/images:

      sudo mv ~/Downloads/debian-12.0.0-amd64-netinst.iso /var/lib/libvirt/images/
    2. Set the correct permissions and ownership for the ISO:

      sudo chmod 644 /var/lib/libvirt/images/debian-12.0.0-amd64-netinst.iso
      sudo chown libvirt-qemu:kvm /var/lib/libvirt/images/debian-12.0.0-amd64-netinst.iso

    1.3: Launch Virtual Machine Manager[edit | edit source]

    Open Virtual Machine Manager from the Openbox menu by right-clicking the desktop, going to system, and then running virtual machine manager. Or run:

    virt-manager

    1.4 Create a New Virtual Machine[edit | edit source]

    1. Click Create a new virtual machine.
    2. Select Local install media (ISO image or CDROM) and click Forward.
    3. Click Browse… and navigate to /var/lib/libvirt/images/ to select debian-12.0.0-amd64-netinst.iso.
    4. Choose Detect automatically for the OS type or manually set it as Debian 12.
    5. Click Forward.

    1.5: Configure VM Resources[edit | edit source]

    1. Memory & CPU:
      • Assign 4096 MB of RAM (or more, but the idea of giving more than 4 gigs to a phone system hurts me).
      • Assign 2 CPUs (adjust based on available resources).
    2. Storage:
      • Select Create a disk image for the virtual machine.
    3. Allocate at least 20 GB for storage. Choose more if you expect larger usage.
    4. Click Forward.

    1.6 Set Up Networking[edit | edit source]

    • Make sure the network selection is set to Bridge and matches your LAN bridge (e.g., br0). This lets the VM get an IP on your existing network, just like a physical machine plugged into your LAN.

    1.7 Set up FreePBX to start on boot[edit | edit source]

    On the host running your virtual machines, run:

    virsh autostart freepbx
    • Check that this worked by typing virsh dominfo freepbx and making sure the Autostart line says enable.
    • If you skip this, you will discover that your phone system is dead only after you’ve rebooted your server (for whatever reason) and left the house. Don’t do that.

    1.8 Debian 12 Installation Setup[edit | edit source]

    1. Follow the Debian installer prompts:
      • Language: Choose your preferred language.
      • Location: Set your country.
      • Keyboard: Select your preferred layout.
      • Hostname: Set the hostname as freepbx.
    2. Domain Name:
      • You can leave this blank.
    3. Set the Root Password:
      • Choose a secure password and confirm it.
    4. Create a New User:
      • Add a user. I added a user named louis for myself.
    5. Partitioning:
      • Choose “Guided - use entire disk and set up LVM”.
      • DO NOT USE ENCRYPTION - REMEMBER, THE HOST SYSTEM THIS IMAGE IS ON IS ALREADY AN ENCRYPTED DISK!!
      • Select the disk and proceed.
      • Confirm changes to write the partitions.
      • The disk device will most likely be something like /dev/vda.

    1.9 Post-Installation Configuration Test[edit | edit source]

    After rebooting, log in as root or your user.

    Make sure network connectivity works:

    ping 8.8.8.8
    hostnamectl

    Step 2: Preparing Debian 12 for FreePBX Installation[edit | edit source]

    This guide provides instructions on performing basic maintenance on a fresh Debian 12 installation and then downloading and running the FreePBX installation script. Follow the steps carefully to ensure a smooth setup.

    2.1 Configure Network Settings[edit | edit source]

    1. Log in with your username and password on the virt-manager screen on your host computer (the one hosting all the virtual machines).
    2. Type ip addr show and find which interface shows your IP address.
      • Remember its name for later.
      • It should be something like enp1s0.
    3. Become root:
    su
    4. Make a network configuration file like this:

      Use the name of your network interface in place of enp1s0.

    nano -w /etc/systemd/network/enp1s0.network
    [Match]
    Name=enp1s0 #put name of your network interface in place of enp1s0
    
    [Network]
    Address=192.168.5.6/24
    Gateway=192.168.5.1
    DNS=192.168.5.1
    5. Hit ctrl-x, then y to save, then restart networking:
    systemctl restart systemd-networkd
    6. Make sure your IP address has changed to a static IP by typing ip addr show and checking.

      • Static IP: Set the IP address to 192.168.5.6.

      • Gateway: Use 192.168.5.1.

      • DNS Server: Set to 192.168.5.1.

    2.2 Do Basic Maintenance on Debian 12[edit | edit source]

    • Update Package Lists

      • Refresh the package lists to make sure you get the latest versions

      • Upgrade packages

      • Remove junk, all with the following line:

      sudo apt update ; sudo apt upgrade -y ; sudo apt autoremove -y
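    One thing to know about that one-liner: ; runs the next command even when the previous one failed, while && stops the chain at the first failure (so a failed apt update can’t upgrade against stale package lists; use sudo apt update && sudo apt upgrade -y && sudo apt autoremove -y if you prefer that behavior). A tiny demonstration of the difference, with false standing in for a failing command:

```shell
# `;` keeps going after a failure; `&&` stops at the first failure.
a=$(false ; echo "ran anyway")
b=$(false && echo "never runs") || true   # || true keeps the sketch alive under `set -e`

echo "with ;  -> '$a'"    # prints: with ;  -> 'ran anyway'
echo "with && -> '$b'"    # prints: with && -> ''
```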

    2.3 Download & run FreePBX install script[edit | edit source]

    1. ssh to the Debian system as louis

    2. Check FreePBX page for the latest script since the URL will change over time.

    3. Download the file using wget:

      su
      wget https://github.com/FreePBX/sng_freepbx_debian_install/raw/master/sng_freepbx_debian_install.sh -O /tmp/sng_freepbx_debian_install.sh
    4. Make the script executable (you should still be root from the previous step):

       chmod +x /tmp/sng_freepbx_debian_install.sh
    5. Run the FreePBX installation script:

      bash /tmp/sng_freepbx_debian_install.sh
      • The script will handle the setup of Asterisk, Apache, MySQL, Postfix, and all the other modules FreePBX needs.

    Step 3: Visit FreePBX Web Interface[edit | edit source]

    1. Open a Web Browser
      • Use a browser on a device connected to the same network, or a device that is connected via OpenVPN. Remember, none of this is open to the public!
    2. Navigate to the FreePBX IP
      • Access FreePBX by entering the following URL: http://192.168.5.6/admin
      • If you used a different IP configuration than I did, enter that IP.
    3. Answer prompts for user/password
      • Follow the setup wizard to configure your admin user, language settings, & other preferences. But don’t get started messing around with anything serious just yet.

    Step 3: Configuring UniTel as Phone Service Provider[edit | edit source]

    Now that we have a working FreePBX installation, we’re ready to set things up with a SIP trunk provider, aka “the phone company”. I use UniTel for this, via the UniTel Customer Portal at unitelcustomer.com.

    Setting up SIP trunking[edit | edit source]

    A SIP trunk account is like your phone’s connection to the outside world. Think of it as paying for internet but for your phone calls. Without it, your PBX system is just an intercom for internal calls. You need a SIP trunk to make and receive calls from the outside world.

    Choosing a SIP Trunk Provider[edit | edit source]

    I recommend Unitel for SIP trunking. They’re solid, reliable, and unlike some other providers, they don’t just resell AWS in the cheapest way possible. I used to use VoicePulse, but they were terrible and went out of business a few years ago. They had no redundancy, so if some single AWS instance went down, you were out of luck.

    John Grossbard: Studio Landlord & Seinfeld Character[edit | edit source]

    There was an episode of Seinfeld where he said “I GOT GROSSBAR’D!” Supposedly this was a reference to a minor argument between him and Larry David.

    John Grossbard was the owner of Planet to Planet Studios when I rented a space from him for my screen wholesaling company in the basement of 251 W. 30th St in NYC, back when this was called the “Music Building,” before it was remodeled to appeal to hipster-0%-interest-rate-funded-fad-yuppie-startups.

    I was here because I had no credit and he didn’t ask for a security deposit. When a friend of mine brought up an issue of bedbugs, he looked at us and said “If I made this place any nicer, you couldn’t afford it.”

    He wasn’t wrong. This stuck with me my entire adult life.

    Unitel has two websites. One is their main website, and one is a website that looks like some 1997 Slashdot site. The website with the 1997 Slashdot look is what we want. It’s not too nice – that means we can “afford” it.

    The Benefits of Your Own PBX; Revisited[edit | edit source]

    In all seriousness, one of the benefits of hosting our own PBX is that we DON’T pay by the user. We DON’T pay by the feature. All we pay for is SIP trunking—any features & functionality are added by US, to OUR PBX, that we control.

    We don’t need them to make it any nicer. If they did, you couldn’t afford it.

    HINT: Avoid the fancy, infinite-scrolly websites like UnitelPhone.com. Instead, go for the classic UnitelCustomer.com. If it looks like it’s from 1997, you’re in the right place! It’s straightforward and gets the job done.

    Having your own PBX means no more paying per user. You pay based on trunk usage, not the number of users. Some providers nickel and dime you on the number of extensions you have, visual voicemail, call recording, etc. With SIP trunking, they have no idea what is going on, so they can’t bill you by-the-extension or by-the-feature. It’s like a VPN for your calls—they don’t know how many extensions you have or if you’re recording calls. All they know is the call came in or went out. No extra charges for features like visual voicemail, lenny, call recording, or the voice of Allison Smith telling callers to go fuck themselves; which is the reason I set this up to begin with 14 years ago. :D

    3.1 Register for a Unitel Account[edit | edit source]

    3.2 Set Up an Endpoint (This is Where Inbound Calls Get Sent To)[edit | edit source]

    Endpoints are where your call is sent when a call comes in on a number you have in Unitel. When you create a phone number in Unitel, it will ask where you want to send calls that come into that number. We’re going to set up the endpoint first so when we create a number, we’ll already have an endpoint to send it to.

    Navigate to the “Endpoints” Section

    1. From the main dashboard, go to “Settings”.
    2. Click on “Endpoints”.
    3. Create a New Endpoint
    4. Click on “Add Endpoint”.
      • Fill in the following details:
        • Endpoint Description: Enter a name that describes the endpoint (e.g., closet pbx).
        • Endpoint Destination: Insert the dynamic DNS entry (e.g. louishomeserver.chickenkiller.com) that you set up back in the FreeDNS Dynamic DNS section of this guide. This should resolve to your PBX’s external IP address. When a call comes in on a specific number, it is going to send the call to your PBX at this IP.
    5. Click “Add Endpoint” to complete the setup.

    3.3 Get & Configure Phone Numbers[edit | edit source]

    1. Navigate to the “Numbers” Section
      • From the main dashboard, after clicking on “Numbers”, click on “Add Number”.
      • Buy a number.
    2. Assign the Purchased Number to the Endpoint
      • After purchasing, go to “Manage Numbers”.
      • Find the purchased number and click the dropdown under “Actions” and click “Number Mode”.
      • Select “Forward to Endpoint”: Select the endpoint you created earlier (e.g., closet pbx).
      • Click Update.

    3.4 Add a Trunk in Unitel[edit | edit source]

    Purpose of a SIP trunk:

    A SIP trunk is what attaches you to the outside world, similar to how your cable modem from Spectrum or Verizon connects you to the rest of the internet. A trunk is a connection between your phone system (PBX) and the external phone network; it allows your system to make and receive calls to and from the outside world. Setting up a trunk in Unitel is necessary.

    The purpose of the trunk is to provide a pathway for your PBX to route calls to and from the public telephone network. Without a properly configured trunk, your system won’t be able to communicate with external phone numbers. Which is what I have been doing for three weeks while writing this guide.

    1. Log in to the Unitel Admin Interface
      • Open your web browser and go to the Unitel admin interface.
      • Log in using your credentials.
    2. Add a New Trunk
      • Navigate to Manage SIP Trunks.
      • Click on Add Trunk.
    3. Configure Trunk Details
      • Trunk Description: Enter a descriptive name for your trunk (e.g., Main Trunk).
      • Trunk Type: Select General use/Conversational.
    4. Click Add Trunk to save the new trunk.
    5. Click Apply Config to activate the trunk.
    6. NOTE YOUR CREDENTIALS!
      • During the trunk setup, you will be provided with a username and password. Be sure to note your username & password in a password manager of some sort as it will be needed later when configuring the trunk in your PBX system.

    Step 4: Setting up FreePBX with Unitel phone service[edit | edit source]

    Now that your phone service provider is set up, we can configure freepbx to connect to it & receive & send phone calls. We’ll be using UniTel credentials to sign into our trunk.

    Get into FreePBX interface:

    4.1 Add a New SIP Trunk[edit | edit source]

    1. Navigate to the Trunk Configuration
    2. Go to Connectivity > Trunks.
    3. Click Add Trunk and choose Add SIP (chan_pjsip) Trunk.
    4. Configure the General Settings
      • Trunk Name: Enter a happy name, like UniTel_SIP.
      • Hide CallerID: Set to No.
      • Outbound CallerID: Enter your UniTel DID (your phone number) in e.164 format (e.g., 13475522258 for rossmann repair group).
      • CID Options: Choose Allow Any CID.
      • Dial Number Manipulation Rules
      • Outbound Dial Prefix: Make sure all outgoing calls use the 11-digit e.164 format (e.g., 1NXXNXXXXXX).
      • Add a rule if needed to prepend 1 for local or long-distance numbers:
        • Match Pattern: NXXNXXXXXX
        • Prepend: 1
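    If it helps to see what that rule does, here is the same logic as a hypothetical shell function: strip the formatting, then prepend a 1 to bare 10-digit numbers so everything leaves the trunk as 11 digits. FreePBX does this for you with the Match Pattern and Prepend fields above; the function is just an illustration:

```shell
# Hypothetical sketch of the "prepend 1" dial rule: strip formatting, then
# put a leading 1 on bare 10-digit numbers so everything leaves the trunk
# in 11-digit format.
normalize() {
    n=$(printf '%s' "$1" | tr -cd '0-9')     # keep digits only
    case "$n" in
        1??????????) printf '%s\n' "$n" ;;   # already 1 + 10 digits
        ??????????)  printf '1%s\n' "$n" ;;  # 10 digits: prepend the 1
        *)           printf '%s\n' "$n" ;;   # extensions etc.: untouched
    esac
}

normalize "(347) 552-2258"    # prints 13475522258
normalize "13475522258"       # prints 13475522258
```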

    4.2 PJSIP Settings Configuration in trunk configuration[edit | edit source]

    1. Go to the “PJSIP Settings” Tab
      • Username: Enter the SIP username provided by UniTel.
      • Secret: Enter the SIP password from UniTel.
      • SIP Server (SIP Host): Set to sip.unitelgroup.com; this may change over time, so check the instructions Unitel offers on unitelcustomer.com after you log in. They’re nice people & provide all this for you in plain English.
      • Authentication: Set to Outbound, should be checked by default.
      • Registration: Choose Send, should be checked by default.
    2. Advanced Options
      • From Domain: Enter sip.unitelgroup.com.
      • Context: Use from-pstn-toheader (This allows FreePBX to correctly handle incoming SIP headers from UniTel). This should be set by default.
    3. Go to “Codecs” tab
        • G723
        • G729
        • G711
          • G711 is actually ulaw and alaw in the list.
      • This may change over time, check Unitel’s page for details.
      • Unitel are nice people & want you to be successful in setting up your PBX. They provide you with all of this information.
    4. Submit and Apply
      • Click Submit.
      • Click Apply Config on the top-right to save changes and reload the configuration.

    Step 5: Internal Call Handling and Call Routing Setup[edit | edit source]

    Now we start the process of setting up internal call handling by creating extensions and ring groups, as well as defining call routing to manage inbound and outbound calls using the UniTel SIP trunk in FreePBX 17. Having a trunk is useless if we don’t have any phones set up.

    5.1 Create an Extension[edit | edit source]

    Extensions are individual phones. Alice has an extension for a phone on her desk, Mark has an extension for a phone on his desk, and so on and so forth. Each extension has a number for internal calls. Your desk phone could be 101 - this means people inside FreePBX connected directly to your PBX can call 101. This needs to be done first.

    • Log in to the FreePBX Admin Interface
    • Navigate to Extensions
      • Go to Connectivity > Extensions.
      • Click Add Extension.
      • Choose Add SIP (chan_pjsip) Extension.
    • Configure the Extension
      • User Extension: Enter a unique extension number (e.g., 101).
      • Display Name: Enter the name for this extension (e.g., Office Phone).
      • Secret: Enter a strong password for the extension or let FreePBX generate one automatically. It’s a good idea to add this to your password manager like Bitwarden so you have it later. Don’t put this on a post-it note.
    • Voicemail: Enable if you want voicemail for this extension.
      • Set email address to the email you want voicemail sent to.
      • Set Voicemail password to the password you want to have to dial to access voicemail (we will never use this archaic method, we will get voicemails emailed to us).
    • In Advanced you can set up call recording.
    • Submit and Apply Changes

    5.2 Configure Ring Groups[edit | edit source]

    When someone calls 3475522258 for my business, I don’t want one phone to ring. I want all of them to ring. This is what ring groups are for. We create one number that rings a bunch of different phones.

    1. Go to Applications > Ring Groups.
    2. Click “Add Ring Group”.
      • Ring Group Number: Enter a unique number for the ring group (e.g., 600).
      • Group Description: Enter a name for this ring group (e.g., Office Ring Group).
      • Ring Strategy: Choose how calls should be distributed (e.g., Ringall to ring all devices simultaneously). Ringall is what you want 99% of the time. Use ringall if you are confused.
      • Extension List: Add the extensions you want to include in this ring group (e.g., 101). Everyone here will have their phone ring when this ring group is called. In an office with one phone number, you would want to put every extension here of the people you want to pick up the phone when a customer calls.
    3. Customize settings like Ring Time, Destination if No Answer, and Call Recording. I usually set Destination if No Answer to the voicemail of a particular extension.
    4. Click “Submit”.
    5. Click “Apply Config” to activate the ring group.
    6. REMEMBER, YOU NEED TO SET A DESTINATION IF NO ANSWER SO THAT PEOPLE CAN LEAVE VOICEMAILS.

    Step 6: Define Call Routing[edit | edit source]

    Inbound routes define what we do when someone calls a particular phone number.

    6.1 Set Up Call Flow Control[edit | edit source]

    Call flow control allows you to change where calls go by dialing a number on your phone.

    For instance, let’s say your business hours are 11 AM to 7 PM. You can set it up so that when you close, you dial *2886 on your phone to send the calls directly to voicemail. Then, when you open the next day, you dial *2886 again and your calls switch back to going to all of your business phones rather than go to voicemail.

    I like this more than I like call scheduling because I set it manually. If I come to work early, I may want to answer the phone early. If I stay late, I may want to answer the phone late.

    Rather than set up my phone number to go straight to my ring group, I set it up to go to call flow control. Then, I set up call flow control to go to my ring group, and my ring group to go to my extensions.

    1. Navigate to Call Flow Control
    2. Go to “Applications” > “Call Flow Control”.
    3. Click “+ Add”.
    4. Configure the Call Flow Control
      • Call Flow Toggle Feature Code Index: This is just the number you add after *28 to make the toggle code you dial on your phone. So, if you enter 86, that means dialing *2886 on your phone will toggle where your calls go.
        • On older phones like the Cisco SPA525G, this doesn’t work, since they seem to only support two digits after a * rather than four. :(
      • Description: Describe what the point of this is so you know for later.
      • Current Mode: This sets how calls are going when you initially finish setting this up.
      • Normal Flow: This sets where calls go before you toggle call flow control. Put the default here. For me, that’s ringing all of my office phones at the Ring Group I set up earlier. Enter Ring Groups and then put the Ring Group you created here.
      • Override Flow: This is where calls will go when you dial *2886 and toggle this feature on. Set this to Voicemail and then the voicemail of the extension we created.
    5. Submit and Apply Changes
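To make the feature code concrete, here's a trivial sketch (the index 86 is just the example from the text above):

```shell
# FreePBX builds the toggle code by appending your Feature Code Index to *28.
index=86                      # the index you typed into FreePBX
toggle_code="*28${index}"     # the code you actually dial on your phone
echo "$toggle_code"           # prints *2886
```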

    6.2 Set Up an Inbound Route[edit | edit source]

    1. Navigate to Inbound Routes
    2. Go to “Connectivity” > “Inbound Routes”.
    3. Click “+ Add Inbound Route”.
    4. Configure the Inbound Route
      • DID Number: Enter your UniTel DID in e.164 format (e.g., 13475522258). Put a 1 in front of your number in the US!
      • CallerID Number: Same as the DID number.
      • Description: Provide a description for this route (e.g., rossmann repair business number).
      • Set Destination: Choose “Ring Groups” and select the ring group number you created earlier (e.g., 600 - Office Ring Group).
    5. Submit and Apply Changes
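If you're unsure whether a number is already in the 11-digit format this field expects, here's a small illustration of the normalization (a hypothetical, US-centric helper; the function name is mine, not FreePBX's):

```shell
# Normalize a US phone number to the 11-digit e.164-style DID format
# (digits only, leading country code 1). Illustrative helper only.
to_did() {
  n=$(printf '%s' "$1" | tr -cd '0-9')    # strip spaces, dashes, parentheses
  case "$n" in
    1??????????) printf '%s\n' "$n" ;;    # already 11 digits with leading 1
    ??????????)  printf '1%s\n' "$n" ;;   # 10 digits: prepend the 1
    *) echo "unexpected number length: $n" >&2; return 1 ;;
  esac
}

to_did "(347) 552-2258"   # prints 13475522258
```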

    6.3 Set Up CallerID[edit | edit source]

    1. Navigate to CID Superfecta
      • Go to “Admin” > “CID Superfecta”.
      • Click Yes on the callerID sources you think would be useful.
    2. Navigate to Inbound Routes
      • Go to Connectivity > Inbound Routes.
      • Go to the Other tab.
      • Choose CID Lookup Source as CID Superfecta.
    3. Submit and Apply Changes

    6.4 Configure Outbound Route for Making Calls[edit | edit source]

    1. Navigate to Outbound Routes
      • Go to Connectivity > Outbound Routes.
      • Click + Add Outbound Route.
    2. Set Up the Outbound Route
      • Route Name: Enter UniTel_Outbound.
      • Route CID: Enter your UniTel DID (in e.164 format, e.g., 13475522258).
    3. Assign Trunk to Route
      • Trunk Sequence for Matched Routes: Select UniTel_SIP (the trunk created earlier). We don’t have multiple trunks.
    4. Navigate to Dial Patterns
      • Click Dial patterns wizards.
      • Click the dial plans that make sense for your locale.
      • This is a conversation for you to have with your SIP trunking provider based on your region. Open a ticket with them and make sure you choose the right options here!
    5. Submit and Apply
    • Click Submit.
    • Click Apply Config to save and activate the outbound route.
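As a rough illustration of what those wizard patterns mean: FreePBX dial patterns use X for any digit 0-9, Z for 1-9, and N for 2-9. Below is my own regex translation of the common US pattern 1NXXNXXXXXX, for understanding only; confirm the real patterns with your SIP trunking provider as stated above:

```shell
# Check a dialed number against the NANP pattern 1NXXNXXXXXX,
# translated to a POSIX extended regex (illustrative only).
matches_1nxxnxxxxxx() {
  printf '%s' "$1" | grep -Eq '^1[2-9][0-9]{2}[2-9][0-9]{6}$'
}

matches_1nxxnxxxxxx 13475522258 && echo match
matches_1nxxnxxxxxx 10005522258 || echo "no match (area code can't start with 0)"
```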

    Step 7: Configure IP subnets in FreePBX[edit | edit source]

    FreePBX automatically configures itself to work with your LAN subnet. For instance, if you chose 192.168.5.0/24 for your local network, FreePBX will already be set up to work properly with devices on that subnet.

    However, it doesn’t know you have a VPN. Remember that I suggest you not open ports. If you want this to work on your Android phone or iPhone when you connect to your home network, you have to add your VPN subnet manually. To do so, follow these instructions:

    Add VPN subnet to local networks in FreePBX[edit | edit source]

    1. Navigate to SIP Settings
    2. Go to “Settings” > “Advanced SIP Settings”.
    3. Make sure you are on the “General” tab.
    4. Find the Local Networks section.
    5. Log into the pfSense firewall in a new browser tab.
    6. Go to “VPN” —> “OpenVPN” at the top menu.
    7. Find the Tunnel Network for your VPN, which will be in the list of OpenVPN servers.
    8. Return to the FreePBX browser tab and click Add Local Network Field.
    9. Add the Tunnel Network of your VPN.
    10. Submit and Apply Changes
      • Click “Submit”.
      • Click “Apply Config” to activate the outbound route.
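When you're done, the Local Networks list should contain something like the following (10.8.0.0/24 is a placeholder; use the actual Tunnel Network you found in pfSense):

```
192.168.5.0/24    LAN subnet (added automatically)
10.8.0.0/24       OpenVPN tunnel network (added by you)
```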

    Step 8: Setting up a softphone[edit | edit source]

    A softphone is a software phone. I’ll show you how to set one up; the process is about the same as configuring a hardware phone. It also lets anyone following along confirm their system works, without me having to write generalized instructions that can’t be precise for every single hardware phone and smartphone out there.

    8.1 Download Zoiper[edit | edit source]

    1. Go to zoiper.com.
    2. Download Zoiper for your operating system.
    3. Install Zoiper.
    4. Run Zoiper.

    8.2 Get credentials for your extension.[edit | edit source]

    • Log in to the FreePBX Admin Interface
    • Navigate to Extensions
      • Go to Connectivity > Extensions.
      • Click your extension.
    • Get your extension number, which is your username, and your secret, which is your password.

    8.3 Configure Zoiper

    1. Open Zoiper and select Create New Account.
    2. Enter the following details:
      • Username: Your extension number (e.g., 401).
      • Password: Your secret (password).
      • Domain: Your server’s IP address or hostname (for us, 192.168.5.6:5060).
      • Skip Outbound Proxy.
      • Transport Protocol: select SIP UDP (most setups use UDP by default).
    3. Test Configuration:
      • If you see a green checkmark, you did good.

    8.4 Test Audio Settings[edit | edit source]

    1. Go to options.
    2. Select Input and Output Devices.
    3. Set Input Device to your microphone.
    4. Set Output Device to your speakers or headphones.
    5. Test Audio: Speak into the microphone to check input levels. For output, press play to confirm audio works.

    Step 9: Configuring Voicemail and Email Notifications in FreePBX 17[edit | edit source]

    This guide provides step-by-step instructions to configure voicemail for an extension and ring group in FreePBX 17, making sure that voicemail messages are sent via email with audio file attachments. We will also set up your custom mail server for sending these email notifications; we’re not calling into a voicemail system in 2024.

    9.1 Enable Voicemail for the Extension[edit | edit source]

    1. Log in to the FreePBX Admin Interface
    2. Navigate to Extensions
      • Go to “Connectivity” > “Extensions”.
      • Find and select the extension (e.g., 401) that you want to set up.
    3. Enable Voicemail for the Extension
      • Scroll down to the Voicemail section.
      • Enable Voicemail: Set to Yes.
      • Voicemail Password: Enter a numeric PIN for accessing voicemail by phone, which we will never use.
      • Email Address: Enter the email address where voicemail notifications should be sent (e.g., [email protected]).
      • Attach Voicemail: Set to Yes (this will attach the audio file of the voicemail to the email notification).
      • Delete Voicemail: Set to No (keeps a copy of the voicemail on the system even after sending the email). Until we know if our system works, keep this to no. Once it is emailing us our voicemails as a wav file, then we can change this to yes.
    4. Submit and Apply Changes
      • Click “Submit”.
      • Click “Apply Config” to save the voicemail settings for the extension.

    9.2 Configure Voicemail for the Ring Group[edit | edit source]

    1. Navigate to Ring Groups
      • Go to “Applications” > “Ring Groups”.
      • Select the ring group you configured earlier (e.g., 600 - Office Ring Group).
    2. Set Ring Group to Go to Voicemail
      • Destination if No Answer: Choose “Voicemail”, and select the extension’s voicemail (e.g., 101).
    3. Submit and Apply Changes
      • Click “Submit”.

    Step 9.3: Configure FreePBX to Send Email Notifications via Custom Mail Server[edit | edit source]

    1. Navigate to System Admin Module
      • Go to “Admin” > “System Admin”.
      • Click on “Email Setup”…. GOTCHA!! This is GNU/Linux, nothing is easy. I had you for a moment there, didn’t I? :D
    2. This is actually going to be a fun journey of configuring postfix manually. That is a long way away, at the end.

    Step 10: Setting Up pfSense Firewall Rules for FreePBX with UniTel SIP Services[edit | edit source]

    To make sure your FreePBX system (located at 192.168.5.6) is able to connect to UniTel’s SIP service and receive calls with two-way audio that actually work, we need to create NAT rules & corresponding firewall rules that only allow traffic from UniTel’s approved IPs. Next we’ll walk you through setting up aliases for UniTel’s IPs, creating NAT rules, & making sure SIP and RTP traffic flows correctly.

    Create aliases for UniTel’s IPs

    Step 10.1: Log in to Your pfSense Web Interface[edit | edit source]

    1. Open a web browser and navigate to: https://pfsense.home.arpa or https://192.168.5.1
    2. Enter your pfSense admin credentials.

    Step 10.2: Add an Alias for UniTel’s SIP Signaling IPs[edit | edit source]

    1. Go to Firewall > Aliases.
    2. Click Add (+) to create a new alias.
    3. Configure the alias as follows:
      • Name: Unitel_SIP_IPs
      • Description: SIP Signaling IPs from UniTel
      • Type: Host(s)
      • IP Addresses:

    Add each of the following SIP IP addresses: THIS MAY CHANGE, CHECK UNITEL GETTING STARTED PAGE TO MAKE SURE THESE ARE THE RIGHT ONES!

        • 199.180.220.89
        • 199.180.220.91
        • 208.89.104.3
    4. Click Save, then Apply Changes.

    Step 10.3: Add an Alias for UniTel’s Media IPs[edit | edit source]

    1. In the Aliases section, click Add again to create another alias.
    2. Configure the alias as follows:
    3. Name: Unitel_Media_IPs
    4. Description: Media IPs for UniTel SIP Services
    5. Type: Host(s)
    6. IP Addresses: Add each of the following media IP addresses: THIS MAY CHANGE, CHECK UNITEL GETTING STARTED PAGE TO MAKE SURE THESE ARE THE RIGHT ONES!
      • 199.180.223.109
      • 45.55.33.77
      • 157.230.238.197
      • 45.33.70.196
      • 45.33.71.83
      • 159.65.107.252
      • 45.33.14.21
      • 159.89.122.218
      • 167.71.237.189
      • 172.104.226.108
      • 139.162.250.71
    7. Click Save, then Apply Changes.

    Setting up NAT port forward & firewall rules[edit | edit source]

    10.4 Configure NAT port forwards for FreePBX signaling[edit | edit source]

    1. Navigate to Firewall > NAT.
    2. Under the Port Forward tab, click Add to create a new NAT rule.
    3. Configure the rule as follows:
      • Interface: WAN
      • Protocol: UDP
      • Destination: WAN address
      • Destination Port Range:
        • From: 5060
        • To: 5065 (for SIP signaling)
      • Redirect Target IP: Enter your PBX IP: 192.168.5.6
      • Redirect Target Port:
        • From: 5060
        • To: 5065
      • Source: Select Single host or alias and choose Unitel_SIP_IPs.
      • Description: Forward SIP Traffic from UniTel to FreePBX
    4. Click Save, then Apply Changes.

    10.5 Set Up NAT port forwards for RTP (Media) Traffic[edit | edit source]

    1. In the Port Forward tab, click Add to create another NAT rule.
    2. Configure the rule as follows:
      • Interface: WAN
      • Protocol: UDP
      • Destination: WAN address
      • Destination Port Range:
        • From: 10000
        • To: 20000 (for RTP media traffic)
    3. Redirect Target IP: Enter your PBX IP: 192.168.5.6
    4. Redirect Target Port:
      • From: 10000
      • To: 20000
    5. Source: Select Single host or alias and choose Unitel_Media_IPs.
    6. Description: Forward RTP Traffic from UniTel to FreePBX
    7. Click Save, then Apply Changes.

    10.6 Verify Automatic Firewall Rules[edit | edit source]

    1. After creating the NAT rules, go to Firewall > Rules.
    2. In the WAN tab, confirm that the firewall rules were automatically created for:
      1. SIP Traffic (ports 5060-5065) pointing to 192.168.5.6 and restricted to Unitel_SIP_IPs.
      2. RTP Traffic (ports 10000-20000) pointing to 192.168.5.6 and restricted to Unitel_Media_IPs.

    10.7 Test the Configuration[edit | edit source]

    1. Make sure that your FreePBX system can register with UniTel’s SIP servers.
    2. Make a test call to make sure both SIP signaling and media (audio) traffic are functioning correctly.
    3. Make sure that when you end a phone call, both the caller & recipient notice that it has ended immediately.
    4. Make sure you have two-way audio.
    5. Leave a call on for fifteen minutes and make sure it doesn’t hang up by itself.

    Step 11: Troubleshooting when it doesn’t work. It’s open source, so….[edit | edit source]

    Introduction to Network Rules[edit | edit source]

    We set up two sets of rules:

    1. SIP Trunk rules (Ports 5060-5065)
      • Allows Unitel to talk to our PBX
      • Deals with signaling & connection management
    2. Media Proxy Rules (Ports 10000-20000)
      • Manages the actual audio transmission
      • Handles voice data going back and forth

    What are NAT port forwards vs Firewall Rules?[edit | edit source]

    Network Address Translation (NAT) Port Forwards[edit | edit source]

    NAT is like the restaurant host who brings guests to specific tables. It directs incoming traffic to a specific machine behind your firewall based on the port that traffic was addressed to when it arrived at your cable modem & firewall.

    Firewall Rules[edit | edit source]

    The firewall acts as a bouncer. Even when NAT directs traffic to the right computer, the firewall can still block problematic connections.

    Order:[edit | edit source]

    pfSense will add a firewall rule AUTOMATICALLY each time you create a NAT port forward, as long as you do not change that option at the end of the NAT port forward rule creation page. I circled this to make sure you would get it right.

    1. Set up NAT rules first
    2. Configure firewall rules second

    Our Setup[edit | edit source]

    FreePBX box IP address: 192.168.5.6

    Internet Traffic → NAT (Traffic Direction) → Firewall (Security Check) → FreePBX virtual machine
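Put together, the two NAT port forwards from Step 10 look like this (IPs and ports as configured above):

```
Unitel_SIP_IPs   → WAN udp/5060-5065    → 192.168.5.6 udp/5060-5065    (SIP signaling)
Unitel_Media_IPs → WAN udp/10000-20000  → 192.168.5.6 udp/10000-20000  (RTP audio)
```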

    When Things Don’t Work (Common Scenario)[edit | edit source]

    This is an open source firewall combined with self-managed SIP trunking. If something works on the first go, you should be very concerned – this likely means you are in a coma & dreaming. Try to wake up. If you can’t, something is wrong.

    IMPORTANT: Follow along in the video as this is best explained there as I go. This is one of the few sections where I believe the video is a must-have to understand how troubleshooting an issue here would work in real time.

    When initial setup doesn’t work, follow this troubleshooting sequence:

    1. Clear ARP Tables
      • Navigate to Diagnostics → ARP Table → Clear
    2. Reset States
      • Navigate to Diagnostics → States → Reset States
      • States are current connections
      • Must be reset on both routers
      • Wait 90 seconds after reset (best practice)
    3. Reload Filter Rules
      • Navigate to Status → Filter, then click Reload

    Using Packet Capture for Diagnostics[edit | edit source]

    1. Go to: Diagnostics → Packet Capture
    2. Configure capture:
      • Interface: WAN or LAN depending on test
      • Port: 5060, 5061, 5062, 5063, 5064, 5065 for SIP traffic

    Reading Packet Capture Results[edit | edit source]

    • Example of captured traffic: 199.180.220.89 (Unitel’s IP in my case)
    • You’re looking to see if the port 5060 traffic is actually being directed to your PBX. You’re also looking to see if it is even coming in at all.

    Stuff we use to troubleshoot:[edit | edit source]

    When dealing with miserable issues:

    1. Check Logs

      Status → System Logs → Firewall → Normal View

      • Sort by newest first
      • Enable logging for allowed and blocked traffic
    2. Use diagnosing tools

      • Packet capture shows where things are going

      • Firewall logs show what’s being blocked/allowed

      • Side-by-side comparison of rules vs. actual traffic

    3. Reset Everything

      • Clear ARP tables

      • Reset state tables

      • Reload filter rules

      • None of this will work because it’s open source, SO:

      • Reboot the router

      • Look for hints & clues.

    Important takeaway from this[edit | edit source]

    • In the video, I did all of the above. The router magically started passing traffic after a reboot.
    • Even when everything is configured correctly, it may not work correctly – it’s open source.
    • Consumer routers vs Enterprise/Open Source firewalls:
      • $20 consumer router: “It just works”
      • Enterprise-grade open source firewall: Requires patience and systematic troubleshooting
    • It’s still better to use this than a traditional router so you don’t get hacked & owned via lack of updates.

    Step 12: Install Lenny on FreePBX 17[edit | edit source]

    12.1 Prepare to Access Your FreePBX System[edit | edit source]

    You need to SSH into the FreePBX VM to install Lenny. Open a terminal on your local machine and connect via SSH:

    ssh [email protected]
    su

    12.2 Add Lenny’s Custom Context in Asterisk[edit | edit source]

    1. Open the extensions_custom.conf file for editing:

      nano /etc/asterisk/extensions_custom.conf
    2. Add the following lines to define the Lenny context:

    [Lenny]
    ; count up through the prompts; after Lenny16, loop back to Lenny7 so he never stops
    exten => talk,1,Set(i=${IF($["0${i}"="016"]?7:$[0${i}+1])})
    ; on the first prompt, start recording the call to a wav file
    same => n,ExecIf($[${i}=1]?MixMonitor(${UNIQUEID}.wav))
    ; play the current Lenny prompt
    same => n,Playback(Lenny/Lenny${i})
    ; play background noise and wait for the caller to stop talking before continuing
    same => n,BackgroundDetect(Lenny/backgroundnoise,1500)
    3. Save and exit the editor by pressing Ctrl + X, then Y, and Enter.

    12.3 Download Lenny’s Sound Files[edit | edit source]

    Lenny works by playing recorded audio. You’ll need to download these audio files to the correct directory on your FreePBX system.

    Download Lenny’s sound files using the commands below. Crosstalk Solutions is a hero for continuing to host this. Tell him thank you.

    cd /var/lib/asterisk/sounds/
    wget https://www.crosstalksolutions.com/pub/Lenny.zip
    unzip Lenny.zip
    chown -R asterisk:asterisk /var/lib/asterisk/sounds/Lenny
    chmod -R 755 /var/lib/asterisk/sounds/Lenny

    12.4 Create a Custom Destination in FreePBX[edit | edit source]

    1. Log in to the FreePBX web interface.
    2. Navigate to Admin > Custom Destinations.
    3. Add a new custom destination with the following details:
      • Custom Destination: Lenny,talk,1 (this field may be called Target instead of Custom Destination in newer versions)
      • Description: Lenny
    4. Click Submit and then Apply Config to save the changes.

    12.5 Set Lenny as a Destination[edit | edit source]

    You now have multiple options for how to use Lenny.

    • Manual Transfers to Lenny:
      • Navigate to Connectivity —> Extensions
      • Create a new Virtual Extension
      • Set the extension to whatever you want it to be; this is the number you dial to get Lenny & the number you transfer people to for Lenny
      • Click on the Advanced tab
      • Scroll to the bottom for destinations when nobody answers.
      • Set each of the three to Custom Destinations —> Lenny
      • Enjoy transferring telemarketers to Lenny at his extension. :)

    12.6 Reload things in the terminal.[edit | edit source]

    In your SSH terminal, type the following:

    fwconsole reload

    IMPORTANT: Hitting the red “Apply Config” button in the upper right corner of the FreePBX webpage is not enough here. For this to work, you must run fwconsole reload in the terminal.

    12.7 Sending blocked numbers to Lenny[edit | edit source]

    1. After hanging up on someone you hate, hit *32 quickly which will block their number.
    2. Navigate to Admin —> Blacklist.
    3. Click onto Settings.
    4. Set the Destination for BlackListed Calls to Custom Destination —> Lenny.

    Now every time you get a call from someone you hate, you can dial *32 & they will be routed to Lenny as soon as they call back. But remember, in the words of one of my first recording studio job bosses in 2007 - “Louis, you hate nothing; you intensely dislike it!”

    Step 12: Hiring a virtual receptionist who tells annoying people to “get the fuck outta here!”[edit | edit source]

    This is the primary reason to have a self managed PBX.

    12.1 Download the Sound Files[edit | edit source]

    First, SSH into your FreePBX machine:

    ssh [email protected]

    Download the sound files from the given URL using wget:

    wget http://downloads.asterisk.org/pub/telephony/sounds/asterisk-extra-sounds-en-g722-current.tar.gz

    12.2 Place the Files in the Proper Directory[edit | edit source]

    After downloading the archive, extract it and move the files to the appropriate directory in FreePBX. Asterisk sound files typically reside in /var/lib/asterisk/sounds.

    Create the custom sound directory if it doesn’t exist:

    mkdir -p /var/lib/asterisk/sounds/custom

    The archive extracts its files directly into the current directory rather than into a top-level folder, so extract it straight into the custom directory:

    tar -xvzf asterisk-extra-sounds-en-g722-current.tar.gz -C /var/lib/asterisk/sounds/custom

    12.3 Set Correct Permissions[edit | edit source]

    Make sure that FreePBX and Asterisk can access the sound files by setting the correct ownership and permissions. FreePBX generally runs under the asterisk user:

    chown -R asterisk:asterisk /var/lib/asterisk/sounds/custom
    chmod -R 755 /var/lib/asterisk/sounds/custom

    12.4 Find the Sound Files in the FreePBX GUI

    1. Log in to the FreePBX Admin Interface.
    2. Navigate to Admin > System Recordings.
    3. Under Add Recording, you should now be able to see & use the uploaded sound files from the /var/lib/asterisk/sounds/custom directory.

    12.5 Combine Sound Prompts into a Sequence[edit | edit source]

    To combine multiple sound files into a single prompt sequence, use the System Recordings feature in FreePBX:

    1. Go to Admin > System Recordings and create a new recording.
      • Select the option to Add Sound Recording by combining the existing files.
      • Add the sound files in the order you want them to play.
    2. Choose the following codecs:
      • alaw
      • g722
      • gsm
      • ulaw
      • wav
    3. EXCLUDE the following codecs:
      • g729
      • sln
      • sln16
      • sln48
    4. Save the combined sound as a new recording.

    12.6 Create an Extension That Plays the Sound Prompts[edit | edit source]

    To forward someone to an extension that plays back the sound prompts:

    1. Log in to the FreePBX Admin Interface.
    2. Navigate to Connectivity > Extensions.
    3. Click Add Extension and select Custom Extension.
    4. Set destination of unanswered to play your recording.
    5. Save, Submit and Apply Config.

    Now, you can transfer calls to this extension, and the selected sound prompts will be played back. Allison Smith will tell them for you.

    Step 13: Get emails with voicemails using Postfix with Postmark SMTP Relay[edit | edit source]

    We are not doing the 1990s calling into voicemail system nonsense. That is miserable.

    13.1 Configure the FROM Address in FreePBX[edit | edit source]

    1. Log into your FreePBX web interface.
    2. Navigate to Settings → Voicemail Admin.
    3. Click the Settings tab.
    4. Click on the Email Config tab.
    5. Set the Server Email to an email address of your choice.
      • I suggest this address be within the domain of the email you set up in mailcow.
      • For instance, if you set up an email for yourself called [email protected] in mailcow, make this [email protected].
    6. Click Submit, then Apply Config (red button in the upper right corner).

    13.2 Configure user access to voicemail[edit | edit source]

    1. Navigate to Admin —> User Management.
    2. Click Edit next to the user.
      • Click the User Details tab at the top.
        • Check that the email address is correct.
      • Click the UCP tab at the top.
        • Click the Call History sub-tab.
          • In CDR Access, add the extensions for which you want to allow this user to listen to call recordings. So if your extension is 401, then 401 should be in this list.
          • Set Allow CDR to Yes.
          • Set Allow CDR Downloads to Yes.
          • Set Allow CDR Playback to Yes.
        • Click the Voicemail sub-tab.
          • Make sure every option here is set to Yes.
          • In Allowed Voicemail, make sure that your extension is in the list. So if your extension is 401, then 401 should be in this list.
      • Click Submit, then Apply Config (red button in the upper right corner).

    13.3 Configure extension for voicemail[edit | edit source]

    1. Navigate to Connectivity —> Extensions
      • Choose your extension
    2. Go to Voicemail
      • Set your Voicemail password
      • Set the Email Address to the email address you want it to email.
      • Click Submit, then Apply Config red button in the upper right corner
    3. Click UCP on the top menu to enter the User Control Panel
      • Click the plus sign in the upper left to add a panel.
      • Choose Voicemail.
      • Choose your extension, in this case, 401
      • Go to the little gear on the upper right corner of the panel you just added to open the settings menu
      • Make sure Email Attachment is On
      • Email Address should be the address that you want voicemails to go to.

    13.4 Get Postmark Credentials for SMTP relay[edit | edit source]

    We are using Postmark for SMTP relay so our emails are not immediately rejected by most providers.

    1. Go to postmarkapp.com
    2. Log in and click Servers
    3. Click onto the server you made earlier.
    4. Click Default Transactional Stream
    5. Navigate to the Setup Instructions page after clicking onto your message stream.
      • Under “Pick the library or integration” – pick “SMTP”.
      • This is the same thing we did when we set up mailcow with Postmark for SMTP relay in the mailcow section!
      • Take note of the SMTP credentials shown, as we will be using them with FreePBX

    13.5 Modify Postfix Configuration[edit | edit source]

    1. Edit the main configuration file:

      sudo nano /etc/postfix/main.cf
    2. Find and modify/add these lines. Keep everything else in the main.cf file unchanged. Adjust the sender_canonical_maps = static:[email protected] to the email address you wish to use.

    relayhost = [smtp.postmarkapp.com]:587
    smtp_use_tls = yes
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_sasl_mechanism_filter = plain
    sender_canonical_maps = static:[email protected]

    13.6 Set Up Authentication[edit | edit source]

    1. SSH into the FreePBX virtual machine:

      ssh [email protected]
    2. Create/edit the SASL password file:

      sudo nano /etc/postfix/sasl_passwd
    3. Add this line (replace USERNAME:PASSWORD with your Postmark credentials):

      [smtp.postmarkapp.com]:587 USERNAME:PASSWORD
    4. Create the hash database and set permissions:

      sudo postmap /etc/postfix/sasl_passwd
      sudo chmod 600 /etc/postfix/sasl_passwd*

    13.7 Restart Postfix[edit | edit source]

    sudo systemctl restart postfix

    13.8 Test Configuration[edit | edit source]

    Send a test email:

    cat << EOF | sendmail [email protected]
    From: [email protected]
    To: [email protected]
    Subject: Test Email
    Content-Type: text/plain
    X-PM-Message-Stream: outbound
    
    This is a test email body.
    EOF

    Check mail logs for errors:

    sudo tail -f /var/log/mail.log

    Troubleshooting[edit | edit source]

    If emails aren’t sending:

    1. Check /var/log/mail.log for errors
    2. Check that Postmark credentials are correct (if you typed postmark.com instead of postmarkapp.com for server, etc)
    3. Verify sender domain (stevesavers.com) is properly configured in Postmark
    4. Check the activity tab on the transactional stream in Postmark
    5. The mail log will tell you what you fkd up 99% of the time.

    ![Postmark Activity monitor](old/images/lu67917r1ezu_tmp_f60bd933.png)

    ![Postmark Activity monitor](old/images/lu67917r1ezu_tmp_c39a116d.png)

    Postmark Activity Monitor:[edit | edit source]

    If you want more troubleshooting information, check Postmark.

    1. Log into Postmark.
    2. Click Servers
    3. Click onto the server you made.
    4. Click onto your Default Transactional Stream
    5. Click Activity
    6. Poke around.

    Default /etc/postfix/main.cf config file[edit | edit source]

    Just in case you mess something up, here’s the default one, because the ones in /usr/share/postfix require configuration from scratch. What they mean when they say “more complete” version is “we don’t offer a copy anywhere of the just working version”, because it’s… GNU/Linux.

    # See /usr/share/postfix/main.cf.dist for a commented, more complete version
    
    
    # Debian specific:  Specifying a file name will cause the first
    # line of that file to be used as the name.  The Debian default
    # is /etc/mailname.
    #myorigin = /etc/mailname
    
    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    biff = no
    
    # appending .domain is the MUA's job.
    append_dot_mydomain = no
    
    # Uncomment the next line to generate "delayed mail" warnings
    #delay_warning_time = 4h
    
    readme_directory = no
    
    # See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
    # fresh installs.
    compatibility_level = 3.6
    
    
    
    # TLS parameters
    smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
    smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
    smtpd_tls_security_level=may
    
    smtp_tls_CApath=/etc/ssl/certs
    smtp_tls_security_level=may
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    
    
    smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
    myhostname = debian.home.arpa
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    mydestination = $myhostname, debian, localhost.localdomain, localhost
    relayhost = 
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    mailbox_size_limit = 0
    recipient_delimiter = +
    # WARNING: Changing the inet_interfaces to an IP other than 127.0.0.1 may expose Postfix to external network connections.
    # Only modify this setting if you understand the implications and have specific network requirements.
    inet_interfaces = 127.0.0.1
    inet_protocols = all
    message_size_limit = 102400000

    Self-Hosted Bitwarden Password manager:[edit | edit source]

    This is a bad idea.[edit | edit source]

    We are going to set this up on our mailcow virtual machine at 192.168.5.3.

    This is a bad idea. You shouldn’t do this. Not only are you starting off as a beginner self-managing something that literally is the key to every aspect of your life, but you aren’t even saving money. Simple basics like the haveibeenpwned integration to check for leaked passwords will cost you more to do yourself when self-hosting than it would if you just paid Bitwarden.

    A big reason we’re doing this is freedom; we want freedom from crappy companies. Bitwarden isn’t a bad company. They treat users well, and they give you the freedom to self-host your own instance with software they’ve open-sourced. If anything, these are the types of companies that have done more to earn the public’s trust than the rest.

    Step 1: Configure DNS Resolution in pfSense[edit | edit source]

    Before installing Bitwarden, we should configure DNS resolution since our server (192.168.5.3) already resolves to mailserver.home.arpa.

    Add Additional DNS Entry

    1. Log into your pfSense dashboard.
    2. Navigate to Services > DNS Resolver.
    3. Scroll down to Host Overrides.
    4. Click the plus (+) button to add a new entry.
    5. Fill in the following:
      • Host: bitwarden
      • Domain: home.arpa
      • IP Address: 192.168.5.3
      • Description: Bitwarden Password Manager
    6. For Additional Names for this Host:
      • Host name should be mailserver, since 192.168.5.3 is also our mailserver and already has a static mapping under that name.
      • Domain should be home.arpa (or whatever you set as your domain in System —> General Settings).
      • Description can be anything you want.
    7. Click Save.
    8. Click Apply Changes.

    Note: This server will now respond to both mailserver.home.arpa and bitwarden.home.arpa.

    Step 2 below is only necessary if you did NOT follow these steps while setting up this virtual machine for the mailcow mailserver. Skip ahead to Step 3 if you already did this when setting up mailcow.

    Step 2: Prepare system for Bitwarden installation:[edit | edit source]

    2.0 SSH into the mailserver computer[edit | edit source]

    ssh [email protected]

    OR

    ssh [email protected]

    2.1 Update and upgrade your system[edit | edit source]

    sudo apt update && sudo apt upgrade -y
    sudo apt install curl git wget apt-transport-https ca-certificates software-properties-common -y

    2.2 Verify Docker installation:[edit | edit source]

    IF YOU ELECTED TO INSTALL MAILCOW ALREADY, THIS PART IS ALREADY DONE & YOU CAN SKIP TO STEP 3![edit | edit source]
      • If you installed mailcow & followed the instructions for it, you already installed docker properly on this virtual machine, and have no need to do this again. Skip to step 3 if that is the case.

    Run docker --version and make sure the version is 24.0.0 or later. If not, remove the old version:

    sudo apt remove docker docker-engine docker.io containerd runc
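    That version check is easy to script if you’d rather not eyeball it. A minimal sketch; the version string below is a made-up sample standing in for the output of docker --version:

```shell
# On a real machine you'd capture this with: ver_line=$(docker --version)
ver_line='Docker version 24.0.7, build afdd53b'

# Pull out the major version number and compare it against 24
major=$(printf '%s\n' "$ver_line" | sed -E 's/^Docker version ([0-9]+)\..*/\1/')
if [ "$major" -ge 24 ]; then
    echo "Docker $major.x is new enough"
else
    echo "Docker $major.x is too old - remove it and reinstall"
fi
```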

    2.3 Install Docker using official Docker script:[edit | edit source]

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    Note: It’s very important to use the official Docker installation and not the Snap version. The Snap version’s sandboxing causes issues and makes a mess of mailcow’s requirements. It is bad for our purposes; don’t use it.

    2.4 Install Docker Compose & prerequisites:[edit | edit source]

    sudo apt install docker-compose-plugin -y
    sudo systemctl enable --now docker

    2.5 Make sure it worked[edit | edit source]

    Run docker compose version and make sure the version is 2.0 or higher.

    Step 3: Configure Bitwarden Environment[edit | edit source]

    Bitwarden’s installation instructions are the opposite of Onlyoffice’s. They actually work, and their documentation is amazing. You can find them here.

    3.1 Create Bitwarden user and set permissions[edit | edit source]

    sudo adduser bitwarden
    sudo usermod -aG docker bitwarden

    Use the following command to log in as the new user, bitwarden:

    sudo login

    Enter credentials for the bitwarden user to log in.

    3.2: Create and Configure Bitwarden Directory[edit | edit source]

    sudo mkdir /opt/bitwarden
    sudo chmod -R 700 /opt/bitwarden
    sudo chown -R bitwarden:bitwarden /opt/bitwarden

    3.3: Enable Docker Service[edit | edit source]

    sudo systemctl start docker
    sudo systemctl enable docker

    3.4: Download and Prepare Installation Script[edit | edit source]

    cd /opt/bitwarden
    curl -Lso bitwarden.sh "https://func.bitwarden.com/api/dl/?app=self-host&platform=linux"
    chmod +x bitwarden.sh

    3.5: Run the Installation Script[edit | edit source]

    ./bitwarden.sh install

    3.6 Installation Configuration Notes[edit | edit source]

    During installation, you’ll need to provide:

    • Domain Name: Use bitwarden.home.arpa
    • SSL Certificate: Choose ‘n’ for Let’s Encrypt if using a self-signed certificate
      • Bitwarden auto-generates a self-signed certificate for you. Isn’t Bitwarden nice?
    • Installation Credentials: Get these from bitwarden.com/host

    Important: Your installation ID and key will look similar to:

    462b197d-14f0-410e-a2c6-b21200fd09f2
    Pcf8vNk5udgT3dI9OWJj

    3.7 Port Configuration[edit | edit source]

    If running multiple services (like mailcow), you’ll need to modify the ports in /opt/bitwarden/bwdata/config.yml:

    http_port: 81    # Changed from 80
    https_port: 444  # Changed from 443

    Step 4: Configure Bitwarden Settings[edit | edit source]

    4.1: Set Up Domain and Email Settings[edit | edit source]

    Edit the environment file:

    nano /opt/bitwarden/bwdata/env/global.override.env

    Add the following configurations. Use the credentials from your Postmark SMTP relay account to fill in the username, password, globalSettings__mail__smtp__port, and globalSettings__mail__smtp__host below. Feel free to adjust them based on your email and who you are using for SMTP relay. This assumes that you set up Postmark as an SMTP relay in the mailcow/mailserver section of this guide! If you did not, you will have to find another SMTP relay service; Gmail offers one. This is needed so that your Bitwarden instance can send emails to you without them going straight to spam.

    globalSettings__domain__baseUrl=https://bitwarden.home.arpa
    globalSettings__mail__smtp__host=smtp.postmarkapp.com
    globalSettings__mail__smtp__port=587
    globalSettings__mail__smtp__ssl=false
    globalSettings__mail__smtp__username=<your_email_username>
    globalSettings__mail__smtp__password=<your_email_password>
    [email protected]
    [email protected]

    4.2 Apply changes and start service[edit | edit source]

    ./bitwarden.sh rebuild
    ./bitwarden.sh start

    Step 5: Browser Extension Setup[edit | edit source]

    1. Verify VPN connection: Connect to your home server VPN.
    2. Install extension: Install the Bitwarden extension from your browser’s extension store.

    Critical Step: When logging in, change the server URL from bitwarden.com to your self-hosted instance (e.g., https://bitwarden.home.arpa:444). DON’T FORGET THE ALTERNATIVE PORT AT THE END IF YOU CHOSE AN ALTERNATIVE PORT!

    Optional: Pin Extension[edit | edit source]

    • For Chrome/Brave: Right-click the Bitwarden icon and select “Pin”

    Setting up ZFS for data storage[edit | edit source]

    How we’re storing our data:[edit | edit source]

    We’re not keeping your 40 terabytes of GNU/Linux ISOs on solid state storage. That is a waste of money & resources (unless you’re insanely rich). I set up the system drives on SSDs so that my photos, documents, mail, and android backups would be quickly accessible and these services highly responsive. I don’t need that level of responsiveness for my collection of GNU/Linux ISOs, though. This is where ZFS pools come into play.

    What is ZFS?[edit | edit source]

    ZFS is a complete storage management system that combines:

    • File system functionality
    • Volume management
    • RAID capabilities
    • Data integrity checking
    • Automatic repair features

    It’s like having a RAID controller, Linux LVM, and a file system all in one.

    Why ZFS?[edit | edit source]

    1. Data Integrity Built-In[edit | edit source]

    • ZFS constantly checks for corruption using checksums
    • ZFS automatically repairs corrupted files if you have redundancy
    • ZFS saved me twice from the consequences of my bad decisions when I bought Seagate products.

    2. Snapshots That Actually Work (although I’m not getting into that here)[edit | edit source]

    • Take instant snapshots that don’t eat up space
    • Roll back changes when you inevitably mess something up
    • Keep multiple versions of files without doubling storage needs

    3. Dynamic Stripe Sizes[edit | edit source]

    • Unlike hardware RAID, ZFS can adjust stripe size on the fly

    ZFS Encryption:[edit | edit source]

    Setting Up Encryption[edit | edit source]

    You have two choices:

    1. Pool-wide encryption:
      • Everything in the pool is encrypted, or
    2. Dataset-level encryption:
      • Encrypt only specific datasets
      • Different keys for different datasets
      • More confusing, not necessary IMO here.

    NOTE: If you’re encrypting a pool for home use, pool-wide encryption is usually the way to go. Keep it simple unless you have a specific reason not to.

    What’s a ZFS Pool?[edit | edit source]

    • Traditional setup: Disk → Partition → Filesystem
    • ZFS setup: Disks → Pool → Datasets

    The pool:

    • Manages all your physical drives
    • Handles redundancy (like RAID)
    • Provides a storage “pool” that datasets can use

    It’s like having a fish pond (the pool) that different fish (datasets) can draw from, rather than a different water tank for each koi fishy.

    Understanding ZFS Redundancy[edit | edit source]

    ZFS has built-in redundancy options that are similar to RAID but better implemented. Here are the main types. You choose what works for you:

    Mirror (Similar to RAID 1)[edit | edit source]

    Disk 1 ───┐
             ├── Identical copies
    Disk 2 ───┘
    • Writes data to multiple disks
    • Can lose any disk and still work
    • 50% storage efficiency (2 drives = 1 drive’s worth of storage)

    RAIDZ1 (Similar to RAID 5)[edit | edit source]

    Disk 1 ───┐
    Disk 2 ───┼── Distributed data + parity
    Disk 3 ───┘
    • Can lose one drive
    • ~67-75% storage efficiency
    • Minimum 3 drives needed

    RAIDZ2 (Similar to RAID 6)[edit | edit source]

    Disk 1 ───┐
    Disk 2 ───┤
    Disk 3 ───┼── Distributed data + double parity
    Disk 4 ───┤
    Disk 5 ───┘
    • Can lose ANY two drives
    • ~60-80% storage efficiency
    • Minimum 4 drives needed
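    Those efficiency percentages aren’t magic; they’re just (drives − parity) ÷ drives. A quick sketch of where the ~60-80% range for RAIDZ2 comes from:

```shell
# RAIDZ2 always spends two drives' worth of space on parity,
# so efficiency = (n - 2) / n for an n-drive vdev
awk 'BEGIN {
    for (n = 4; n <= 8; n++)
        printf "RAIDZ2 with %d drives: %.0f%% efficient\n", n, (n - 2) / n * 100
}'
```

    More drives per vdev means better efficiency, at the cost of longer rebuilds when a drive dies.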

    Key Differences from Hardware RAID:[edit | edit source]

    1. No RAID controller needed
    2. Self-healing
      • Detects & fixes corruption automatically
      • Hardware RAID only handles drive failures.
      • ZFS handles drive failures AND data corruption!

    HINT: ZFS IS NOT A BACKUP! ZFS redundancy protects against drive failures, but it’s NOT a backup. If you accidentally delete a file or your server dies in a fire, redundancy won’t help you. This is PART of a proper backup solution, it is not in & of itself THE backup solution! Always have proper backups!

    Step 1: Choose Hard Drives That Won’t Send You to Rossmann Data Recovery, Using Backblaze Data[edit | edit source]

    If you spend nine hours setting this server up only to put your data on a Seagate Rosewood, I will come through your television like Samara from The Ring and pull you down a well. You could either:

    1. Trust Amazon reviews.

    2. Use data from a company that runs over 260,000 hard drives & publishes their failure rates quarterly.

    3. Use a Seagate EXOS or Rosewood.

    In order of bad ideas: 3, 1, then 2. We will be doing 2.

    Find Backblaze’s Drive Stats here[edit | edit source]

    When Backblaze publishes failure rates, they’re telling you what drives cost them money to replace. They don’t care which manufacturer looks good. They are honest about which drives are trash, they run them 24/7 in actual mission-critical server environments.

    Tips for reading their reports:[edit | edit source]

    When you look at their quarterly reports, focus on:

    1. Annualized Failure Rate (AFR)
      • Under 1% = Great
      • 1-2% = Acceptable
      • Over 2% = No.
      • Over 3% = Probably a seagate rosewood or grenada, you might as well be giving your data to a NYS tax collector
    2. Drive Age & Sample Size
      • A 0% failure rate is useless if they only have 10 drives. Look for models with 1,000+ samples.
    • Pay attention to how long they’ve been using the drive you’re looking at.

    Remember: The goal isn’t to spend five hours figuring out which drives are the best; it’s to spend a few minutes learning which are the worst. A 0.32% vs 0.34% failure rate difference doesn’t matter; a 0.32% vs 3.2% difference is what we’re looking to avoid.
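    If you want to sanity-check a number yourself, the AFR formula is simple: failures divided by drive-years (drive-days ÷ 365), times 100. A sketch with made-up figures, not real Backblaze data:

```shell
# AFR = failures / (drive-days / 365) * 100
drive_days=1351475   # total days all drives of one model ran in the period
failures=42          # failures observed over that same period
awk -v dd="$drive_days" -v f="$failures" \
    'BEGIN { printf "AFR: %.2f%%\n", f / (dd / 365) * 100 }'
```

    That example works out to an AFR of 1.13%, which would land in the “acceptable” bucket above.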

    Step 1.5: Label your drive bays as you plug them in.[edit | edit source]

    I like to put the serial number of the drive on my bays, or if not possible to do this without blocking airflow, on the bottom or top of the case in-line with the drive bay. This way if I need to take a drive out I don’t have to guess which is which.

    The Rosewill RSV-L4412U server case is a very nice case for this purpose.

    Step 2: Installing ZFS on Ubuntu Server[edit | edit source]

    We are setting up ZFS on our host system that all of our virtual machines are running on, which is happycloud.home.arpa at 192.168.5.2.

    2.1 Update System Packages[edit | edit source]

    First, make sure your system is up to date:

    sudo apt update && sudo apt upgrade -y

    2.2 Install ZFS & Drive Monitoring Packages[edit | edit source]

    Install the ZFS utilities:

    sudo apt install zfsutils-linux smartmontools -y

    2.3 Load ZFS Kernel Module[edit | edit source]

    ZFS should load automatically, but make sure it’s loaded:

    lsmod | grep zfs

    If you don’t see output, load it manually:

    sudo modprobe zfs

    2.4 Configure System for ZFS[edit | edit source]

    Adjust ARC (Adaptive Replacement Cache) Size:

    Create a new sysctl configuration file:

    sudo nano /etc/sysctl.d/10-zfs.conf

    Add these lines to tune kernel memory management for ZFS (note that these sysctls don’t cap the ARC itself; they keep the kernel from swapping aggressively and reserve extra free-memory headroom):

    # Kernel memory tuning for ZFS
    vm.swappiness=1
    vm.min_free_kbytes=1524288
    vm.watermark_scale_factor=200

    2.5 Apply Sysctl Settings[edit | edit source]

    sudo sysctl -p /etc/sysctl.d/10-zfs.conf
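    If you also want to hard-cap the ARC itself at 50% of RAM, that’s the zfs_arc_max module parameter, which lives in /etc/modprobe.d/ rather than sysctl.d. A minimal sketch that computes the value; writing the printed line into /etc/modprobe.d/zfs.conf and rebooting is the manual step:

```shell
# Half of total RAM, in bytes, read from /proc/meminfo (MemTotal is in kB)
half_ram_bytes=$(awk '/^MemTotal:/ { printf "%.0f", $2 * 1024 / 2 }' /proc/meminfo)

# This line belongs in /etc/modprobe.d/zfs.conf, then reboot:
echo "options zfs zfs_arc_max=$half_ram_bytes"
```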

    2.6. Set Up Automatic Module Loading[edit | edit source]

    Create a new file to make sure ZFS loads at boot:

    sudo nano /etc/modules-load.d/zfs.conf

    Add this line:

    zfs

    2.7 Make Sure Install Worked[edit | edit source]

    Run a quick check of ZFS commands:

    # Check ZFS command availability
    zfs list
    zpool list
    
    # Both commands should work (though they'll show no pools yet)

    Best Practices:[edit | edit source]

    • Set vm.swappiness=1 (use swap only when necessary)
    • Keep around 1 gigabyte of RAM per 1TB storage for basic usage
    • Use separate boot drive from ZFS pool
    • Set up notifications if something dies (we’ll cover this later)
    • Plan regular scrub schedule

    Step 3: Identify Your Hard Drives in Ubuntu Server[edit | edit source]

    Quick Commands to List Drives[edit | edit source]

    3.1 List Basic Drive Info[edit | edit source]

    lsblk

    Example output:

    louis@happycloud:~$ lsblk
    NAME                             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
    sda                                8:0    0 232.9G  0 disk  
    ├─sda1                             8:1    0   512M  0 part  
    ├─sda2                             8:2    0     1G  0 part  
    │ └─md127                          9:127  0  1022M  0 raid1 /boot
    └─sda3                             8:3    0 231.4G  0 part  
      └─md126                          9:126  0 231.3G  0 raid1 
        └─dm_crypt-0                 252:0    0 231.2G  0 crypt 
          └─ubuntuinstall-ubunturoot 252:1    0 231.2G  0 lvm   /
    sdb                                8:16   0   7.3T  0 disk  
    sdc                                8:32   0 232.9G  0 disk  
    ├─sdc1                             8:33   0   512M  0 part  /boot/efi
    ├─sdc2                             8:34   0     1G  0 part  
    │ └─md127                          9:127  0  1022M  0 raid1 /boot
    └─sdc3                             8:35   0 231.4G  0 part  
      └─md126                          9:126  0 231.3G  0 raid1 
        └─dm_crypt-0                 252:0    0 231.2G  0 crypt 
          └─ubuntuinstall-ubunturoot 252:1    0 231.2G  0 lvm   /
    sdd                                8:48   0   7.3T  0 disk  
    sde                                8:64   0   7.3T  0 disk  
    sdf                                8:80   0   7.3T  0 disk  
    sdg                                8:96   0   7.3T  0 disk  
    sdh                                8:112  0   7.3T  0 disk  

    3.2 Show More Detailed Info (including serial numbers)[edit | edit source]

    lsblk -o NAME,SIZE,MODEL,SERIAL

    Example output:

    louis@happycloud:~$ lsblk -o NAME,SIZE,MODEL,SERIAL
    NAME                               SIZE MODEL            SERIAL
    sda                              232.9G Samsung SSD 870  S61VNJ0R413909T
    ├─sda1                             512M                  
    ├─sda2                               1G                  
    │ └─md127                         1022M                  
    └─sda3                           231.4G                  
      └─md126                        231.3G                  
        └─dm_crypt-0                 231.2G                  
          └─ubuntuinstall-ubunturoot 231.2G                  
    sdb                                7.3T ST8000VN004-2M21 WSD5720G
    sdc                              232.9G Samsung SSD 870  S61VNG0NC09403N
    ├─sdc1                             512M                  
    ├─sdc2                               1G                  
    │ └─md127                         1022M                  
    └─sdc3                           231.4G                  
      └─md126                        231.3G                  
        └─dm_crypt-0                 231.2G                  
          └─ubuntuinstall-ubunturoot 231.2G                  
    sdd                                7.3T ST8000VN004-2M21 WSD5725W
    sde                                7.3T WDC WD80EFZX-68U VKJ28YJX
    sdf                                7.3T WDC WD80EFZX-68U VKJ02D0X
    sdg                                7.3T WDC WD80EFZX-68U VKHZVJ7X
    sdh                                7.3T WDC WD80EFZX-68U VKJ1N8KX
    louis@happycloud:~$ 
    

    3.3 Check Drive Health and Additional Info[edit | edit source]

    louis@happycloud:~$ sudo smartctl -i /dev/sdd
    smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.0-47-generic] (local build)
    Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
    
    === START OF INFORMATION SECTION ===
    Model Family:     Seagate IronWolf
    Device Model:     ST8000VN004-2M2101
    Serial Number:    WSD5725W
    LU WWN Device Id: 5 000c50 0e3407989
    Firmware Version: SC60
    User Capacity:    8,001,563,222,016 bytes [8.00 TB]
    Sector Sizes:     512 bytes logical, 4096 bytes physical
    Rotation Rate:    7200 rpm
    Form Factor:      3.5 inches
    Device is:        In smartctl database 7.3/5528
    ATA Version is:   ACS-4 (minor revision not indicated)
    SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is:    Wed Oct 23 21:10:14 2024 UTC
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    
    louis@happycloud:~$ sudo smartctl -a /dev/sdd | grep -E 'Command_Timeout|Error_Rate';     echo ""; 
      1 Raw_Read_Error_Rate     0x000f   074   064   044    Pre-fail  Always       -       26263737
      7 Seek_Error_Rate         0x000f   089   060   045    Pre-fail  Always       -       766811756
    188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0

    HINT: Write down the serial numbers of your drives and which ports they’re connected to. If a drive fails, you’ll want to know exactly which physical drive to replace.

    3.4 Understanding the Output:[edit | edit source]

    • In this case, /dev/sda and /dev/sdc are the two SSDs that comprise the RAID 1 array that Ubuntu Linux Server is installed on.
    • sdb, sdd, sde, sdf, sdg, and sdh are the six hard drives we plugged in.
    • The letters go in order of how they’re connected to the motherboard (sometimes).
    • Numbers after letters (like sda1) represent partitions

    Now you know which drive is which, so let’s set up a ZFS pool.

    Step 4: Creating an Encrypted ZFS Pool with Two-Drive Redundancy[edit | edit source]

    What We’re Setting Up

    • 6 drives in a RAIDZ2 configuration (similar to RAID 6)
    • Full encryption with password
    • Two drives worth of redundancy
    • Ability to survive up to two drive failures

    4.1 Verify Our Drives[edit | edit source]

    First, let’s double-check we’re using the right drives:

    lsblk -o NAME,SIZE,MODEL,SERIAL

    You should see your two operating system drives listed, and the six hard drives we plugged in. Let’s make absolutely sure they’re empty:

    # Check if drives have any existing partitions
    sudo fdisk -l /dev/sd[bdefgh]

    If you see any partitions, you might want to clear them:

    # Only run these if you're SURE these are the right drives
    # THIS WILL ERASE ALL DATA ON THESE DRIVES
    sudo wipefs -a /dev/sdb
    sudo wipefs -a /dev/sdd
    sudo wipefs -a /dev/sde
    sudo wipefs -a /dev/sdf
    sudo wipefs -a /dev/sdg
    sudo wipefs -a /dev/sdh

    4.2 Create the Encrypted Pool[edit | edit source]

    We’ll create a RAIDZ2 pool (similar to RAID6) with encryption:

    sudo zpool create -o ashift=12 \
       -O encryption=aes-256-gcm \
       -O keylocation=prompt \
       -O keyformat=passphrase \
       mediapool raidz2 /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

    What do these options do?

    • -o ashift=12: Optimizes for 4K sector drives
    • -O encryption=aes-256-gcm: Enables strong encryption
    • -O keylocation=prompt: Tells ZFS to ask for password
    • -O keyformat=passphrase: Use a password instead of keyfile
    • raidz2: Two drive redundancy
    • mediapool: Name of your pool (can be whatever you want)

    You’ll be prompted for a password. USE A STRONG PASSWORD AND DON’T FORGET IT!

    4.3 Set Good Pool Properties[edit | edit source]

    After creation, let’s set some good default properties:

    # Enable compression
    sudo zfs set compression=lz4 mediapool
    
    # Disable atime updates (better performance)
    sudo zfs set atime=off mediapool
    
    # Set correct recordsize for general media storage
    sudo zfs set recordsize=1M mediapool

    4.4 Verify Pool Creation[edit | edit source]

    Check that everything is set up correctly:

    # Check pool status
    sudo zpool status mediapool
    
    # Check pool properties
    sudo zpool get all mediapool
    
    # Check encryption is enabled
    sudo zfs get encryption mediapool

    The zpool status output should show something like:

    louis@happycloud:~$ sudo zpool status mediapool
      pool: mediapool
     state: ONLINE
    config:
    
        NAME        STATE     READ WRITE CKSUM
        mediapool   ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
    
    errors: No known data errors

    4.5: Set Permissions on the Pool Mount Point[edit | edit source]

    Set permissions:

    # Set ownership (replace 'louis' with your actual username)
    sudo chown louis:louis /mediapool
    
    # Set permissions (only you can access it)
    sudo chmod 700 /mediapool

    4.6 Test Pool Import/Export[edit | edit source]

    Let’s make sure we can properly mount/unmount the pool:

    # Export (unmount) the pool
    sudo zpool export mediapool
    
    # Import it back
    sudo zpool import mediapool

    Note that a plain zpool import won’t prompt for the encryption password; after importing, the datasets stay locked until you run sudo zfs load-key mediapool (or import with zpool import -l, which loads the key at import time). We’ll load the key in the next step.

    Important Notes[edit | edit source]

    1. BACKUP YOUR POOL PASSWORD!
      • If you lose it, your data is GONE
      • Store it in a password manager (that you don’t self-host)
      • Consider a paper backup in a secure location that is not a post-it-note on your monitor.
    2. Space Available
      • Total raw capacity: 6 × 8TB = 48TB
      • RAIDZ2 uses 2 drives for parity, so you lose 2 drives worth of capacity
      • Usable space is 4 × 8TB = 32TB
    3. What Redundancy Gives You
      • Can survive up to two drive failures
      • Not a backup! Still need proper backups
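    The capacity arithmetic in those notes generalizes to any RAIDZ layout. A quick sketch:

```shell
# Usable capacity for a RAIDZ vdev = (drives - parity) * drive size
drives=6      # number of drives in the vdev
parity=2      # RAIDZ2 = 2 parity drives (RAIDZ1 = 1, RAIDZ3 = 3)
size_tb=8     # capacity of each drive in TB
awk -v n="$drives" -v p="$parity" -v s="$size_tb" \
    'BEGIN { printf "raw: %dTB, usable: %dTB\n", n * s, (n - p) * s }'
```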

    Step 5: Setting Up ZFS Pool Mount Points and Permissions[edit | edit source]

    5.1 Creating the Base Dataset Structure[edit | edit source]

    First, let’s create our main dataset and its subdirectories:

    # Load the encryption key so we can work:
    sudo zfs load-key mediapool
    
    # Create mount points if they don't exist
    
    # Create the virtual machine backup dataset where we'll store VM images
    sudo zfs create -o mountpoint=/mediapool/vmbackups mediapool/vmbackups
    
    # Create the storage backup dataset where we'll store Linux ISOs and cooking recipes
    sudo zfs create -o mountpoint=/mediapool/archive mediapool/archive

    5.2 Setting Permissions for Regular User Access[edit | edit source]

    Set ownership for the main archive directory:

    # Set ownership of the main archive directory to louis
    sudo chown louis:louis /mediapool/archive
    
    # Set base permissions (rwx for owner, rx for group and others)
    sudo chmod 755 /mediapool/archive

    5.3 Securing vmbackups Directory for Root Only[edit | edit source]

    Set restricted permissions on the vmbackups directory:

    # Set vmbackups to be owned by root
    sudo chown root:root /mediapool/vmbackups
    
    # Set permissions to allow only root access (rwx for root, none for others)
    sudo chmod 700 /mediapool/vmbackups
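    These permission bits behave the same anywhere on the filesystem, so you can sanity-check the idea on throwaway directories before trusting it with your pool. A small self-contained sketch:

```shell
# Recreate the two permission modes on temp directories and inspect them
tmp=$(mktemp -d)
mkdir "$tmp/archive" "$tmp/vmbackups"

chmod 755 "$tmp/archive"     # owner rwx; everyone else can read & enter
chmod 700 "$tmp/vmbackups"   # owner rwx; nobody else gets anything

stat -c '%a %n' "$tmp/archive" "$tmp/vmbackups"

rm -rf "$tmp"   # clean up
```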

    5.4 Verify the Settings[edit | edit source]

    Check that everything is set correctly:

    # Check ZFS mountpoints
    zfs get mountpoint mediapool/archive
    zfs get mountpoint mediapool/vmbackups
    
    # Check permissions
    ls -la /mediapool/archive
    ls -la /mediapool/vmbackups
    
    # Verify dataset properties
    zfs get all mediapool/archive
    zfs get all mediapool/vmbackups

    Expected output for the permissions check. Note that user louis cannot list the vmbackups directory without sudo:

    louis@happycloud:~$ zfs get mountpoint mediapool/archive
    NAME               PROPERTY    VALUE               SOURCE
    mediapool/archive  mountpoint  /mediapool/archive  local
    
    louis@happycloud:~$ zfs get mountpoint mediapool/vmbackups
    NAME                 PROPERTY    VALUE                 SOURCE
    mediapool/vmbackups  mountpoint  /mediapool/vmbackups  local
    
    louis@happycloud:~$ ls -la /mediapool/archive
    total 21
    drwxr-xr-x 2 louis louis    2 Oct 23 21:45 .
    drwxr-xr-x 4 root  root  4096 Oct 23 21:45 ..
    
    louis@happycloud:~$ ls -la /mediapool/vmbackups
    ls: cannot open directory '/mediapool/vmbackups': Permission denied
    
    louis@happycloud:~$ sudo ls -la /mediapool/vmbackups
    total 21
    drwx------ 2 root root    2 Oct 23 21:44 .
    drwxr-xr-x 4 root root 4096 Oct 23 21:45 ..
    

    5.5 Test Access[edit | edit source]

    Test the permissions are working:

    1. As user ‘louis’:

      # Should work
      touch /mediapool/archive/testfile
      
      # Should fail
      touch /mediapool/vmbackups/testfile
    2. As root:

      # Should work
      sudo touch /mediapool/vmbackups/testfile

    If any of these tests don’t work as expected, double-check the permissions and ownership settings above.

    5.6 frigate camera footage storage[edit | edit source]

    Earlier in the guide, we set up frigate for recording security camera footage. We left it recording to the frigate installation folder. This is bad. Recording to the main solid state drive is a waste of space & SSD life.

    Archived camera footage belongs on a giant hard drive, not an expensive SSD. If you’d like, you can now go back to the frigate config section and change these two lines:

          - ./storage:/media/frigate
          - ./database:/data/db

    to something like this (in a Docker volume mapping, the part before the colon is the path on the host and the part after is the path inside the container, so the host side is the part we move onto the pool):

          - /mediapool/archive/camerafootage/storage:/media/frigate
          - /mediapool/archive/camerafootage/database:/data/db

    Of course, make the directories first:

    mkdir -p /mediapool/archive/camerafootage/database
    mkdir -p /mediapool/archive/camerafootage/storage

    If you want to keep things separate, you could create a third dataset called camerafootage, mount it at /mediapool/camerafootage, and then edit the docker-compose.yml file to look like this:

          - /mediapool/camerafootage/storage:/media/frigate
          - /mediapool/camerafootage/database:/data/db

    And make sure the directories have been created before running frigate:

    mkdir -p /mediapool/camerafootage/database
    mkdir -p /mediapool/camerafootage/storage

    The full file is provided below, assuming you decided to make a camerafootage dataset mounted at /mediapool/camerafootage.

    version: "3.9"
    services:
      frigate:
        container_name: frigate
        privileged: true # This may not be necessary for all setups
        restart: unless-stopped
        image: ghcr.io/blakeblackshear/frigate:0.13.2 # Last good version
        shm_size: "64mb" # Update for your cameras based on requirements
        devices:
          - /dev/bus/usb:/dev/bus/usb # USB Coral, modify for other hardware
          - /dev/apex_0:/dev/apex_0 # PCIe Coral, modify based on your setup
          - /dev/video11:/dev/video11 # For Raspberry Pi 4B
          - /dev/dri/renderD128:/dev/dri/renderD128 # Intel hwaccel, update for your hardware
        volumes:
          - /etc/localtime:/etc/localtime:ro
          - ./config:/config
          - /mediapool/camerafootage/media/frigate:/media/frigate # Media directory moved to the ZFS pool
          - /mediapool/camerafootage/data/db:/data/db # Database directory moved to the ZFS pool
          - type: tmpfs # Optional: Reduces SSD wear
            target: /tmp/cache
            tmpfs:
              size: 1000000000
        ports:
          - "8971:8971"
          - "5000:5000" # Internal unauthenticated access. Be careful with exposure.
          - "8554:8554" # RTSP feeds
          - "8555:8555/tcp" # WebRTC over TCP
          - "8555:8555/udp" # WebRTC over UDP
        environment:
          FRIGATE_RTSP_PASSWORD: "password"

    Step 6: Setting Up Samba to Share ZFS Pool Directories[edit | edit source]

    6.1 Installing Samba[edit | edit source]

    First, let’s install Samba and its utilities:

    # Update package list
    sudo apt update
    
    # Install Samba packages
    sudo apt install samba samba-common-bin -y

    6.2 Backup Original Samba Config[edit | edit source]

    Always backup before making changes:

    sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.backup

    6.3 Configure Samba Share[edit | edit source]

    Create a new Samba configuration:

    # Clear existing config (but keep our backup)
    sudo bash -c 'echo "" > /etc/samba/smb.conf'
    
    # Edit the config file
    sudo nano /etc/samba/smb.conf

    Add this configuration to smb.conf, changing the realm to the domain you chose in pfSense under System ---> General Setup:

    [global]
        # Network settings
        workgroup = HOME
        realm = home.arpa
        netbios name = happycloud
        server string = ZFS Archive Server
        dns proxy = no
        
        # Security settings
        security = user
        map to guest = bad user
        server signing = auto
        client signing = auto
        
        # Logging
        log level = 1
        log file = /var/log/samba/%m.log
        max log size = 1000
        
        # Performance optimization
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        read raw = yes
        write raw = yes
        use sendfile = yes
        min receivefile size = 16384
        aio read size = 16384
        aio write size = 16384
        
        # Multichannel support
        server multi channel support = yes
        
        # Disable unused services
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        
        # Character/Unix settings
        unix charset = UTF-8
        dos charset = CP932
    
    [archive]
        comment = ZFS Archive Share
        path = /mediapool/archive
        valid users = louis
        invalid users = root
        browseable = yes
        read only = no
        writable = yes
        create mask = 0644
        force create mode = 0644
        directory mask = 0755
        force directory mode = 0755
        force user = louis
        force group = louis
        veto files = /._*/.DS_Store/.Thumbs.db/.Trashes/
        delete veto files = yes
        follow symlinks = yes
        wide links = yes
        ea support = yes
        inherit acls = yes
        hide unreadable = yes

    6.4 Verify Samba Configuration[edit | edit source]

    Check if your config is valid:

    testparm

    6.5 Create Samba User[edit | edit source]

    Add your GNU/Linux user to Samba and set a password:

    # Add Samba password for user 'louis'
    sudo smbpasswd -a louis
    
    # Enable the user
    sudo smbpasswd -e louis

    6.6 Start and Enable Samba[edit | edit source]

    # Restart Samba services
    sudo systemctl restart smbd
    sudo systemctl restart nmbd
    
    # Enable them to start at boot
    sudo systemctl enable smbd
    sudo systemctl enable nmbd

    6.7 Connecting to your Samba Share[edit | edit source]

    What’s the point of this if we can’t access it from other systems?

    Windows Systems[edit | edit source]

    Connect using one of the following in the address bar of Windows Explorer:

    • \\happycloud.home.arpa\archive

    GNU/Linux Systems[edit | edit source]

    Connect in a file manager like Thunar (my personal favorite) by putting this in the address bar:

    • smb://happycloud.home.arpa/archive

    File Manager Navigation:

    1. Press Ctrl+L to open location bar
    2. Enter the SMB URL
    3. Enter credentials when prompted

    macOS Systems[edit | edit source]

    Connect using Finder by selecting Go > Connect to Server and entering the SMB URL.

    Connect using:

    • smb://happycloud.home.arpa/archive

    Finder Navigation:

    1. Press Cmd+K
    2. Enter the SMB URL
    3. Click ‘Connect’
    4. Enter credentials when prompted

    Mounting from Command Line (GNU/Linux)[edit | edit source]

    If you want the share to show up as if it were just another directory on your system, you could do this:

    # Create mount point
    mkdir -p ~/archive
    
    # Mount by entering credentials when prompted
    sudo mount -t cifs //happycloud.home.arpa/archive ~/archive -o username=louis,uid=1000,gid=1000,vers=3.1.1,seal
    
    # Check that the `testfile` we made earlier shows up here. If you see the following, congratulations, you did not mess it up!!
    
    [louis@studiobauer ~]$ ls -la ~/archive
    total 13
    drwxr-xr-x  2 louis louis     0 Oct 23 18:11 .
    drwx------ 48 louis louis 12288 Oct 23 18:14 ..
    -rwxr-xr-x  1 louis louis     0 Oct 23 18:11 testfile

    HINT: If you can’t connect via VPN, try from local network first. If that works, then troubleshoot VPN/remote access issues afterwards.
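If you want the share to come back automatically after a reboot, you can add it to /etc/fstab instead of mounting by hand. A sketch, assuming your user's uid/gid is 1000 and that you keep the password in a root-only credentials file (the /root/.smbcredentials name is my own choice, not something created earlier in this guide):

```shell
# /root/.smbcredentials (chmod 600) contains exactly two lines:
#   username=louis
#   password=your-samba-password
#
# /etc/fstab entry, all on one line:
//happycloud.home.arpa/archive  /home/louis/archive  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000,vers=3.1.1,seal,_netdev  0  0
```

The _netdev option tells the system to wait for the network before trying to mount, and `sudo mount -a` applies the new entry without rebooting.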

    Security Notes[edit | edit source]

    1. The share is only accessible to authenticated users
    2. Files created will be owned by ‘louis’
    3. The /mediapool/vmbackups dataset is not part of the share and remains inaccessible (root only)
    4. Password is stored separately from system password
    5. All traffic is unencrypted - use VPN for remote access!

    Now you should be able to access your ZFS pool’s archive directory from any device on your network, with proper authentication as user ‘louis’.

    Step 7: Backing up virtual machines[edit | edit source]

    Now that we have a giant storage array that will continue working even in the event of multiple drive deaths, we can set up our virtual machines to back up regularly. This way, if we destroy one with idiocy, or if it becomes corrupt, we can restore it instantly to what it was like before the mess happened.

    7.1 Backup script creation[edit | edit source]

    The script below will allow you to have your virtual machines backed up automatically. It does the following:

    • Shuts down the virtual machine
    • Copies its disk image qcow2 file to the /mediapool/vmbackups zfs dataset
    • Copies its configuration so it can be set up again
    • Keeps backups for 56 days (roughly eight weekly backups), deleting anything older.

    This means the following:

    • You can mess things up by deleting files you weren’t supposed to, mess up configurations and programs, and restore everything to where it was last week with one or two kindergarten level GNU/Linux commands.
    • You can migrate this to another computer entirely & start the virtual machine up there.

    Save this as /root/vm_backup.sh:

    # Open the text editor
    sudo nano -w /root/vm_backup.sh

    Paste in the following script:

    #!/bin/bash
    
    # thank you to stack overflow for giving me the courage to wade through 100s of posts and hack together something that looks like it works. 
    
    # config for backups
    BACKUP_DIR="/mediapool/vmbackups"
    LOG_FILE="/var/log/vm_backups.log"
    RETENTION_DAYS=56  # how long to keep backups
    
    # Function to write messages to our log file
    log_message() {
        # Get the current timestamp and message
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
    }
    
    # Function to find the actual disk path for a VM when the default path doesn't exist
    # Uses virsh dumpxml to get the disk source path from the VM's XML configuration
    find_vm_disk_path() {
        local vm_name=$1
        # Get the VM's XML configuration and extract the first disk source path
        # Using grep with -o to only output the matched portion
        # Using sed to extract just the path part from the source attribute
        local disk_path=$(virsh dumpxml "$vm_name" | grep -o "source file='[^']*'" | head -n1 | sed "s/source file='\(.*\)'/\1/")
        
        # Check if we found a path and if it exists
        if [ -n "$disk_path" ] && [ -f "$disk_path" ]; then
            echo "$disk_path"
            return 0
        else
            return 1
        fi
    }
    
    # main backup function 
    backup_vm() {
        local virtual_machine_name=$1  # The name of the virtual machine we're backing up
        local date_stamp=$(date +%Y%m%d)  # Today's date for the backup file name
        local source_file="/var/lib/libvirt/images/${virtual_machine_name}.qcow2"  # Where the virtual machine is
        
        # If the default path doesn't exist, try to find the actual disk path
        if [ ! -f "$source_file" ]; then
            log_message "Default disk path not found for ${virtual_machine_name}, searching XML configuration..."
            local found_path=$(find_vm_disk_path "$virtual_machine_name")
            
            # If we found a valid path, use it instead
            if [ -n "$found_path" ]; then
                log_message "Found alternate disk path: ${found_path}"
                source_file="$found_path"
            fi
        fi
        
        local backup_file="${BACKUP_DIR}/${virtual_machine_name}-${date_stamp}.qcow2"  # Where we're putting the backup of it
        local config_file="${BACKUP_DIR}/${virtual_machine_name}-${date_stamp}.xml"  # Where it saves the virtual machine config
        
        # Check if source file exists before attempting backup
        if [ ! -f "$source_file" ]; then
            log_message "ERROR: Source file $source_file does not exist for ${virtual_machine_name}"
            return 1
        fi
        
        # Announce backup is starting
        log_message "Starting backup process for ${virtual_machine_name}"
        
        # Save virtual machine's config
        virsh dumpxml "$virtual_machine_name" > "$config_file"
        
        # Set ownership and permissions for config file
        chown libvirt-qemu:kvm "$config_file"
        chmod 644 "$config_file"
    
        # Try to shut down the virtual machine nicely 
        log_message "Shutting down ${virtual_machine_name}"
        virsh shutdown "$virtual_machine_name"
        
        # Wait patiently for the virtual machine to shut down 
        local count=0
        while [ "$(virsh domstate $virtual_machine_name)" != "shut off" ] && [ $count -lt 30 ]; do
            sleep 10
            count=$((count + 1))
        done
        
        # If it doesn't turn off, make it turn off (like holding the power button)
        if [ "$(virsh domstate $virtual_machine_name)" != "shut off" ]; then
            log_message "WARNING: Force shutting down ${virtual_machine_name}"
            virsh destroy "$virtual_machine_name"
            sleep 10
        fi
        
        # Make sure it's actually off - trust but verify
        if [ "$(virsh domstate $virtual_machine_name)" != "shut off" ]; then
            log_message "ERROR: Failed to shut down ${virtual_machine_name}"
            return 1
        fi
        
        # Create the backup - doesn't use compression since qemu-img convert compression is single threaded and insanely slow
        log_message "Creating backup of ${virtual_machine_name}"
        if ! qemu-img convert -p -f qcow2 -O qcow2 "$source_file" "$backup_file"; then
            log_message "ERROR: Backup failed for ${virtual_machine_name}"
            virsh start "$virtual_machine_name"
            return 1
        fi
        
        # Set ownership and permissions for backup file
        chown libvirt-qemu:kvm "$backup_file"
        chmod 644 "$backup_file"
    
        # Make sure the backup isn't insanely small since that means this didn't work
        local backup_size=$(stat -c%s "$backup_file")
        if [ "$backup_size" -lt 1048576 ]; then  # Less than 1MB is suspicious - like a $5 "genuine" Rolex
            log_message "ERROR: Backup file suspiciously small for ${virtual_machine_name}"
            rm -f "$backup_file" "$config_file"
            virsh start "$virtual_machine_name"
            return 1
        fi
        
        # Turn virtual machine back on when backup is done. 
        log_message "Starting ${virtual_machine_name}"
        virsh start "$virtual_machine_name"
        
        # Wait for it to come back online 
        count=0
        while [ "$(virsh domstate $virtual_machine_name)" != "running" ] && [ $count -lt 12 ]; do
            sleep 5
            count=$((count + 1))
        done
        
        # Make sure it actually started (inspect what you expect)
        if [ "$(virsh domstate $virtual_machine_name)" != "running" ]; then
            log_message "ERROR: Failed to start ${virtual_machine_name}"
            return 1
        fi
        
        # announce that it worked
        log_message "Backup of ${virtual_machine_name} completed!"
        
        # Clean up old backups - because nobody likes a full hard drive
        log_message "Cleaning up old backups for ${virtual_machine_name}"
        find "$BACKUP_DIR" -name "${virtual_machine_name}-*.qcow2" -mtime +${RETENTION_DAYS} -exec rm -f {} \;  # Delete old qcow2 files
        find "$BACKUP_DIR" -name "${virtual_machine_name}-*.xml" -mtime +${RETENTION_DAYS} -exec rm -f {} \;   # Delete old xml files
    }
    
    # Start of the main backup process
    log_message "Starting backup process"
    
    # Make sure we're running as root 
    if [ "$EUID" -ne 0 ]; then
        log_message "ERROR: Must run as root"
        exit 1
    fi
    
    # Check if the backup directory exists 
    if [ ! -d "$BACKUP_DIR" ]; then
        log_message "ERROR: Backup directory $BACKUP_DIR does not exist"
        exit 1
    fi
    
    # Get list of ALL virtual machines, not just running ones
    # Changed to list all VMs instead of just running ones
    VMS=($(virsh list --all --name))
    
    # Check if we have enough disk space to back up
    available_space=$(df -B1 "$BACKUP_DIR" | awk 'NR==2 {print $4}')
    required_space=0
    
    # Calculate how much space we need
    for virtual_machine in "${VMS[@]}"; do
        if [ -n "$virtual_machine" ]; then
            # Try the default path first
            local_path="/var/lib/libvirt/images/${virtual_machine}.qcow2"
            
            # If default path doesn't exist, try to find actual path
            if [ ! -f "$local_path" ]; then
                local_path=$(find_vm_disk_path "$virtual_machine") || continue
            fi
            
            if [ -f "$local_path" ]; then
                virtual_machine_size=$(du -b "$local_path" 2>/dev/null | cut -f1)
                required_space=$((required_space + virtual_machine_size))
            fi
        fi
    done
    
    # Make sure we have enough space 
    if [ "$available_space" -lt "$required_space" ]; then
        log_message "ERROR: Insufficient space in backup directory"
        exit 1
    fi
    
    # loop for backing up every virtual machine
    for virtual_machine in "${VMS[@]}"; do
        if [ -n "$virtual_machine" ]; then
            backup_vm "$virtual_machine"
        fi
    done
    
    # announce it's all done
    log_message "Backup process completed!"
    

    Nerd note: This script would be laughed out of the room for use in production environments for major web companies & datacenters. This script turns off the virtual machine to back it up.

    This means that at 1 AM, the service goes down. This would be unacceptable in a production environment where people expect the service to be available 24/7.

    There are ways to do live backups where you flush MySQL tables and lock them, make Redis background save, pause call processing in Asterisk, pause I/O, create atomic snapshots, and coordinate with the databases of all the different programs... but the audience of this guide is a person running a home server in his closet. Do you really want to subject a beginner to docker volumes that may not be in a consistent state, email delivery/receipt being interrupted, database transactions that get messed with in the middle of a write, corrupt call recordings, and partially written large files, all so someone can get live backups of a server in their closet, you monster?

    If you need that level of uptime, you’re not a newbie reading this guide. Or you are, and you need to hire a consultant to set you up with something like Veeam.

    To subject a newbie to the risk of error/corruption/screwups that comes with doing live backups for these things when they’re at the level of this guide being helpful to them is cruel.
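The retention cleanup at the end of the script is just a find -mtime sweep. You can sanity-check the rule in isolation with throwaway files before trusting it with real backups (a self-contained sketch assuming GNU touch/find; the temp directory and file names are made up for the demo):

```shell
# Create a scratch directory with one "old" and one "fresh" fake backup
tmp=$(mktemp -d)
touch -d '100 days ago' "$tmp/vm-old.qcow2"   # older than RETENTION_DAYS=56
touch "$tmp/vm-new.qcow2"                     # made just now

# Same pattern the script uses: delete qcow2 backups older than 56 days
find "$tmp" -name '*.qcow2' -mtime +56 -exec rm -f {} \;

ls "$tmp"   # only vm-new.qcow2 should remain
```

Run it on a throwaway directory like this before pointing anything at /mediapool/vmbackups; if the fresh file survives and the old one is gone, the retention logic is behaving.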

    7.2 Set permissions so script works[edit | edit source]

    This won’t work if we don’t give it permissions to be executable.

    # Make script executable
    sudo chmod +x /root/vm_backup.sh
    
    # Test script permissions
    sudo -u root /root/vm_backup.sh
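The find_vm_disk_path function in the script extracts the disk path from `virsh dumpxml` output with a grep/sed pipeline. Here is the same pipeline run against a made-up libvirt-style XML fragment, so you can see what it actually pulls out (the path is invented for the demo):

```shell
# A sample <disk> element like the ones virsh dumpxml prints (path is made up)
xml="<disk type='file'><source file='/var/lib/libvirt/images/example.qcow2'/></disk>"

# Keep only the source file='...' attribute, then strip everything but the path
disk_path=$(echo "$xml" | grep -o "source file='[^']*'" | head -n1 | sed "s/source file='\(.*\)'/\1/")

echo "$disk_path"   # /var/lib/libvirt/images/example.qcow2
```

If a VM's backup fails with "Source file does not exist", running this pipeline by hand on that VM's `virsh dumpxml` output is a quick way to see what path the script is finding.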

    7.3 Tell computer to run script every week, at 1 AM on Sunday[edit | edit source]

    Cron is a scheduler. You can tell cron to run a command, a script, etc. once a week, once a month, twice a day, every 10 minutes. We’re going to set this to back up at 1 AM every Sunday.

    1. Open root’s crontab:

      sudo crontab -e
    2. Add this line:

      0 1 * * 0 /root/vm_backup.sh >> /var/log/vm_backup.log 2>&1

      This will:

      • Run at 1:00 AM every Sunday

      • Log output to /var/log/vm_backup.log

      • Include both standard output and errors in the log

      • The virtual machine will be down while the transfer occurs

      If anyone is calling Rossmann Repair Group at 1 AM on a Sunday morning, they deserve to get a busy signal. Actually, they deserve Allison Smith telling them to get the fk out of here, but a busy signal will suffice.
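For reference, the five time fields in that crontab line read, left to right: minute, hour, day of month, month, day of week:

```shell
# minute  hour  day-of-month  month  day-of-week (0 = Sunday)
#   0      1         *          *         0
# i.e. "at 1:00 AM every Sunday":
0 1 * * 0 /root/vm_backup.sh >> /var/log/vm_backup.log 2>&1
```

Changing the last field to `*` would run it every day at 1 AM; changing the first two to `30 3` would move it to 3:30 AM.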

    7.4 Make sure cron is running[edit | edit source]

    sudo systemctl status cron

    View scheduled cron jobs:

    sudo crontab -l

    Step 8: Restoring a virtual machine from a backup[edit | edit source]

    So you messed up and deleted everything inside your virtual machine. You want to go back to where you were before.

    Remember:

    A BACKUP PLAN IS ONLY AS GOOD AS HOW EASY IT IS TO RESTORE FROM A BACKUP!

    Basic Restore[edit | edit source]

    By “basic restore” I mean what to do when you messed up a program configuration or deleted files inside a virtual machine or corrupted something accidentally. You want to go back to the image of the virtual machine you had before, on the same happycloud host computer.

    8.1 Before You Start[edit | edit source]

    I’m assuming the following is true:

    • Your virtual machine is already defined in Virtual Machine Manager (you see it when you run the Virtual Machine Manager GUI)
    • Your backups are in /mediapool/vmbackups
    • The backups were created using the qemu-img backup script I provided above
    • You just need to restore the virtual machine’s disk because you messed up some files or programs

    8.2 Find Your Backup[edit | edit source]

    1. List available backups for your virtual machine:
    ls -l /mediapool/vmbackups/name-of-your-virtual-machine-*.qcow2

    You’ll see files named like this:

    • name-of-your-virtual-machine-20240101.qcow2
    • name-of-your-virtual-machine-20240108.qcow2

    These are the disk image files that have all of the data/programs/databases/operating system.

    Each backup will have an XML file to go with it:

    • name-of-your-virtual-machine-20240101.xml
    • name-of-your-virtual-machine-20240108.xml

    These are the files that tell Virtual Machine Manager all of the details about your virtual machine (RAM/CPU, hardware setup, etc.).

    Pick the most recent backup before you screwed something up.
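If you’re not sure which file is newest, `ls -t` sorts by modification time, newest first. A self-contained demo using fake backup files in a temp directory (the names are made up; point the same command at /mediapool/vmbackups for the real thing):

```shell
tmp=$(mktemp -d)

# Two fake weekly backups; give the newer one a later modification time
touch -d '2024-01-01' "$tmp/myvm-20240101.qcow2"
touch -d '2024-01-08' "$tmp/myvm-20240108.qcow2"

# Newest first, take the first line
newest=$(ls -t "$tmp"/myvm-*.qcow2 | head -n1)
echo "$newest"   # ends in myvm-20240108.qcow2
```

Against the real backup directory that becomes `ls -t /mediapool/vmbackups/name-of-your-virtual-machine-*.qcow2 | head -n1`.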

    Fast Restore:[edit | edit source]

    1. Turn off the virtual machine
    # Shut down the virtual machine gracefully
    virsh shutdown name-of-your-virtual-machine
    
    # Wait until it's actually off. Check status with:
    virsh list --all
    2. Backup Current Disk (just in case)
    # Move the current (messed up/broken) disk with date
    mv /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2 /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2.broken-$(date +%Y%m%d)
    3. Restore Backup
    # a cool command to put the virtual machine back where it was
    qemu-img convert -p -f qcow2 -O qcow2 /mediapool/vmbackups/name-of-your-virtual-machine-20240101.qcow2 /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2
    
    # set permissions so that our virtual machine management stuff can use it.
    chown libvirt-qemu:kvm /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2
    chmod 644 /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2
    4. Start the Virtual Machine
    virsh start name-of-your-virtual-machine

    Check the Restore[edit | edit source]

    1. Watch the virtual machine console in Virtual Machine Manager to make sure it boots
    2. Try logging in when it’s up
    3. Check that services(mailcow, immich, syncthing) actually work

    Complicated Restore[edit | edit source]

    Let’s say you destroyed more. You also messed up the virtual machine’s configuration in virsh. You edited the xml file for the virtual machine or messed with its settings in the Virtual Machine Manager GUI, and now nothing works.

    For a complete restore of both disk & configuration:

    1. Remove the current virtual machine:
    virsh destroy name-of-your-virtual-machine
    virsh undefine name-of-your-virtual-machine
    2. Restore the Disk:
    # Copy the backup back into the images directory
    qemu-img convert -p -f qcow2 -O qcow2 /mediapool/vmbackups/name-of-your-virtual-machine-20240101.qcow2 /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2
    
    # Fix permissions
    chown libvirt-qemu:kvm /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2
    chmod 644 /var/lib/libvirt/images/name-of-your-virtual-machine.qcow2
    3. Restore the virtual machine config:
    # The backup includes the XML configuration file
    virsh define /mediapool/vmbackups/name-of-your-virtual-machine-20240101.xml
    4. Start the VM:
    virsh start name-of-your-virtual-machine

    Common screwups[edit | edit source]

    1. “Failed to convert image”:
      • Make sure you have enough disk space
      • Check that the backup file isn’t corrupted
    2. “Failed to start VM”:
      • Usually permissions. In the excitement of realizing you actually HAVE a backup, nobody remembers to set permissions on the restored file.
      • Check that the XML file matches the system config. Use Virtual Machine Manager for this to see if anything sticks out in the GUI as a stupid mistake.
      • Verify all paths exist
    3. “Could not access storage file”: Check paths in both:
      • /var/lib/libvirt/images/
      • The virtual machine XML config
      • Make sure permissions are right (644 for files)

    Verifying Success[edit | edit source]

    After restoration, verify:

    1. VM boots properly
    2. Network connectivity works
    3. All services start correctly
    4. Data and configurations are as expected
    5. Check logs for any errors

    If something isn’t right, you can always try an older backup - they’re kept for 56 days.

    Accessing Your Samba Share from Any Device[edit | edit source]

    Let’s say you want to watch a GNU/Linux ISO while you’re on the go. You connect to your VPN, and you can browse your files right there. OwlFiles can play music & video files right inside the application and stream them without you having to download them, for a wide variety of codecs, and it does so exceptionally well. It even gives options for hardware vs. software decoding of the video file in case one works better than the other for the format you’re using.

    It’s not open source, but it’s the best Samba client I have ever used for Android.

    Android Access with OwlFiles[edit | edit source]

    1. Install OwlFiles[edit | edit source]

    1. Open Google Play Store
    2. Search for “OwlFiles”
    3. Install the app (it’s free!)

    2. Configure OwlFiles for Samba Access[edit | edit source]

    1. Open OwlFiles
    2. Tap the “+” button in the bottom right
    3. Select “Network Storage (SMB)”
    4. Fill in the connection details:
      • Server: Your server’s IP (e.g., 192.168.5.2)
      • Share: archive
      • Username: louis
      • Password: Your Samba password
      • Name: Whatever you want to call it (e.g., “Home Server”)
    5. Tap “Test Connection” to verify
    6. Tap “Save” if test is successful

    3. Using OwlFiles[edit | edit source]

    1. Browse Files:
      • Tap your newly created connection
      • Navigate through folders
      • Files will stream rather than download first
    2. Stream Media:
      • Tap a video/audio file to stream
      • No need to download completely first
    3. File Operations:
      • Long-press files for options
      • Copy, move, delete as needed
      • Upload from phone to server

    HINT: Enable “Show hidden files” in settings if you need to see dot files.

    File Operation Best Practices:[edit | edit source]

    1. Large Files:
      • Use copy instead of move for safety
      • Don’t interrupt transfers
      • Check free space first
    2. Media Streaming:
      • Test a small file first
      • Check your connection speed
      • Consider pre-downloading for trips

    Have your server email you when a hard drive is dying.[edit | edit source]

    There is one caveat that makes ZFS & RAID functionally useless for many of their users.

    99% of the population doesn’t know their drive is failing until things start crashing and running horribly slowly. By then, it’s usually too late. You’re heading to Rossmann Repair for data recovery.

    Then they think, “if I use RAID, I’m good! One drive can fail and it’ll still work!!!”

    No.

    You could have RAID 1 with 20 disks and it still wouldn’t matter, because NOBODY WHO HAS A LIFE CHECKS THE HEALTH OF THEIR DISK DRIVE EVERY DAY.

    If you only check your drive health when it fails, then RAID 1 with 5 disks is useless. You’re still only going to check it when the fifth one starts failing.

    Step 1: Setting Up Postfix Email System on Ubuntu Server 24.04[edit | edit source]

    1.1 Install Required Packages[edit | edit source]

    sudo apt update
    sudo apt install postfix libsasl2-modules mailutils -y

    When prompted during install:

    • Choose “Internet Site” for configuration type
    • Enter your system’s fully qualified domain name when asked where we are sending emails from; in our case it is home.arpa
    • The recipient for root & postmaster mail is the email you want to receive those messages at; I set it to the same address as my ZFS alerts, which is [email protected] for me
    • Set “Force synchronous updates on mail queue?” to no

    1.2 Configure Main Postfix Configuration - this is similar to what we did for FreePBX voicemail alerts in the previous section[edit | edit source]

    1. Backup existing configuration:

      sudo cp /etc/postfix/main.cf /etc/postfix/main.cf.backup
    2. Create new main.cf:

      sudo nano /etc/postfix/main.cf
    3. Copy and paste the provided configuration template if you need, and edit the [email protected] email in the configuration file with the email you wish to have Postfix use to send you an email.

      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      
      
      # Debian specific:  Specifying a file name will cause the first
      # line of that file to be used as the name.  The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname
      
      smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
      biff = no
      
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      
      readme_directory = no
      
      # See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
      # fresh installs.
      compatibility_level = 3.6
      
      
      
      # TLS parameters
      smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
      smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
      smtpd_tls_security_level=may
      
      smtp_tls_CApath=/etc/ssl/certs
      smtp_tls_security_level=may
      smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
      
      
      smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
      myhostname = debian.home.arpa
      alias_maps = hash:/etc/aliases
      alias_database = hash:/etc/aliases
      mydestination = $myhostname, debian, localhost.localdomain, localhost
      relayhost = [smtp.postmarkapp.com]:587
      smtp_use_tls = yes
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_sasl_mechanism_filter = plain
      sender_canonical_maps = static:[email protected]
      mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
      mailbox_size_limit = 0
      recipient_delimiter = +
      # WARNING: Changing the inet_interfaces to an IP other than 127.0.0.1 may expose Postfix to external network connections.
      # Only modify this setting if you understand the implications and have specific network requirements.
      inet_interfaces = 127.0.0.1
      inet_protocols = all
      message_size_limit = 102400000

    1.3 Set Up SMTP Authentication, and use your usernames/passwords/emails to replace mine[edit | edit source]

    1. Create the SASL password file:

      sudo nano /etc/postfix/sasl_passwd
    2. Add this line to the file, replacing the username & password with your credentials from postmark:

    [smtp.postmarkapp.com]:587 1788dd83-9917-46e1-b90a-3b9a89c10bd7:1788dd83-9917-46e1-b90a-3b9a89c10bd7
    3. Set proper permissions for security:

      sudo chmod 600 /etc/postfix/sasl_passwd
    4. Create the hash database file:

      sudo postmap /etc/postfix/sasl_passwd

    1.4 Restart and Test[edit | edit source]

    1. Restart Postfix:

      sudo systemctl restart postfix
    2. Verify Postfix is running:

      sudo systemctl status postfix
    3. Test the email setup:

      echo "Test email from $(hostname)" | mail -s "Test Email" [email protected]

    Verification Steps:

    1. Check mail logs for errors:

      sudo tail -f /var/log/mail.log
    2. Verify permissions:

      ls -l /etc/postfix/sasl_passwd*

      Should show:

      • -rw------- 1 root root for sasl_passwd
      • -rw------- 1 root root for sasl_passwd.db

    Troubleshooting:[edit | edit source]

    If emails aren’t being sent:

    1. Check Postfix status:

      sudo systemctl status postfix
    2. View detailed mail logs:

      sudo journalctl -u postfix

    3. Check mail logs for errors:

      sudo tail -f /var/log/mail.log
    4. Check that your Postmark credentials are correct (e.g., that you didn’t type postmark.com instead of postmarkapp.com for the server).
    5. Verify that the sender domain (stevesavers.com) is properly configured in Postmark.
    6. Check the Activity tab on the transactional stream in Postmark.

    The mail log will tell you what you fkd up 99% of the time.
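    Two more stock Postfix commands worth knowing while you troubleshoot: `postqueue` can show you mail that got stuck and ask Postfix to retry it. The guard keeps this sketch harmless on a machine without Postfix installed:

    ```shell
    QUEUE_TOOL="postqueue"   # ships with Postfix
    if command -v "$QUEUE_TOOL" >/dev/null 2>&1; then
        postqueue -p                                       # list queued (stuck) messages, with the deferral reason for each
        postqueue -f || echo "flush needs authorization"   # flush: attempt delivery of everything queued right now
    else
        echo "postqueue not installed on this machine"
    fi
    ```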

    With Postfix now relaying mail through Postmark, the system is ready for the next step: the ZFS monitoring setup.

    Step 2: Creating Complete ZFS Monitoring Script with Logging[edit | edit source]

    2.1 Create Log Directory[edit | edit source]

    sudo mkdir -p /var/log/zfs-monitor
    sudo chown root:root /var/log/zfs-monitor
    sudo chmod 755 /var/log/zfs-monitor

    2.2 Make the Monitoring Script[edit | edit source]

    sudo -u root nano /root/zfs_health_check.sh

    Copy and paste this complete script:

    #!/bin/bash
    
    # Configuration
    EMAIL="[email protected]"
    HOSTNAME=$(hostname)
    LOG_FILE="/var/log/zfs-monitor/health_check.log"
    LOG_MAX_SIZE=$((10 * 1024 * 1024))  # 10MB in bytes
    
    # Email configuration
    FROM_EMAIL="[email protected]"
    FROM_NAME="Steve"
    REPLY_TO="Steve <[email protected]>"  # Use a more consistent Reply-To address
    RETURN_PATH="[email protected]"  # A safe Return-Path address to handle bounces properly
    
    # Create required directories
    mkdir -p "$(dirname "$LOG_FILE")"
    
    # Initialize error log
    errors=""
    
    # Logging functions
    rotate_log() {
        if [ -f "$LOG_FILE" ] && [ $(stat -f%z "$LOG_FILE" 2>/dev/null || stat -c%s "$LOG_FILE") -gt "$LOG_MAX_SIZE" ]; then
            mv "$LOG_FILE" "$LOG_FILE.old"
        fi
    }
    
    log_message() {
        echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
    }
    
    log_error() {
        local message="$1"
        errors="${errors}\n$message"
        log_message "ERROR: $message"
    }
    
    # Check overall pool status
    check_pool_status() {
        while IFS= read -r pool; do
            status=$(zpool status "$pool")
            
            # Check for common failure keywords
            if echo "$status" | grep -E "DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED|FAIL|DESTROYED|SUSPENDED" > /dev/null; then
                log_error "ALERT: Pool $pool is not healthy:\n$status"
            fi
            
            # Check for errors
            if echo "$status" | grep -v "No known data errors" | grep -i "errors:" > /dev/null; then
                log_error "ALERT: Pool $pool has errors:\n$status"
            fi
            
            # Check scrub status
            if echo "$status" | grep "scan" | grep -E "scrub canceled|scrub failed" > /dev/null; then
                log_error "ALERT: Pool $pool has unusual scrub status:\n$(echo "$status" | grep "scan")"
            fi
        done < <(zpool list -H -o name)
    }
    
    # Check individual device status
    check_devices() {
        while IFS= read -r pool; do
            devices=$(zpool status "$pool" | awk '/ONLINE|DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED/ {print $1,$2}')
            
            # Use a here-string, not a pipe: a piped while loop runs in a subshell,
            # so log_error's changes to $errors would be silently lost
            while read -r device state; do
                if [ "$state" != "ONLINE" ] && [ "$device" != "pool" ] && [ "$device" != "mirror" ] && [ "$device" != "raidz1" ] && [ "$device" != "raidz2" ]; then
                    log_error "ALERT: Device $device in pool $pool is $state"
                fi
            done <<< "$devices"
        done < <(zpool list -H -o name)
    }
    
    # Check capacity threshold (80% by default)
    check_capacity() {
        while IFS= read -r pool; do
            capacity=$(zpool list -H -p -o capacity "$pool")
            if [ "$capacity" -ge 80 ]; then
                log_error "WARNING: Pool $pool is ${capacity}% full"
            fi
        done < <(zpool list -H -o name)
    }
    
    # Check dataset properties
    check_dataset_properties() {
        while IFS= read -r dataset; do
            # Skip base pools
            if ! echo "$dataset" | grep "/" > /dev/null; then
                continue
            fi
            
            # Check if compression is enabled
            compression=$(zfs get -H compression "$dataset" | awk '{print $3}')
            if [ "$compression" = "off" ]; then
                log_error "WARNING: Compression is disabled on dataset $dataset"
            fi
            
            # Check if dataset is mounted
            mounted=$(zfs get -H mounted "$dataset" | awk '{print $3}')
            if [ "$mounted" = "no" ]; then
                log_error "WARNING: Dataset $dataset is not mounted"
            fi
            
            # Check available space
            available=$(zfs get -H available "$dataset" | awk '{print $3}')
            if [ "$available" = "0" ] || [ "$available" = "0B" ]; then
                log_error "CRITICAL: Dataset $dataset has no available space"
            fi
        done < <(zfs list -H -o name)
    }
    
    # Function to send email
    send_email() {
        local subject="$1"
        local content="$2"
        
        {
            echo "Subject: $subject"
            echo "To: ${EMAIL}"
            echo "From: ${FROM_NAME} <${FROM_EMAIL}>"
            echo "Reply-To: ${REPLY_TO}"
            echo "Return-Path: ${RETURN_PATH}"
            echo "Content-Type: text/plain; charset=UTF-8"
            echo
            echo "$content"
        } | sendmail -t
    }
    
    # Main execution
    rotate_log
    log_message "Starting ZFS health check"
    
    # Run all checks
    check_pool_status
    check_devices
    check_capacity
    check_dataset_properties
    
    # Send notification if there are errors
    if [ -n "$errors" ]; then
        log_message "Issues detected - sending email alert"
        subject="Storage Alert: Issues Detected on ${HOSTNAME}"  # Simplified subject line
        content=$(echo -e "ZFS Health Monitor Report from ${HOSTNAME}\n\nThe following issues were detected:${errors}")
        send_email "$subject" "$content"
    else
        log_message "All ZFS checks passed successfully"
    fi

    2.3 Set Proper Permissions[edit | edit source]

    sudo -u root chmod +x /root/zfs_health_check.sh

    2.4 Test the Script[edit | edit source]

    sudo /root/zfs_health_check.sh

    2.5 Make sure logging works[edit | edit source]

    tail -f /var/log/zfs-monitor/health_check.log

    2.6 Features of this Script:[edit | edit source]

    • Monitoring:
      • It tells you when your pool has issues BEFORE all your drives die
      • Device status checks
      • Capacity warnings
    • Email Alerts:
      • Sends when issues are detected
      • Includes error information

    The script is now ready for cron job configuration and regular use. Cron jobs are tasks we tell the machine to perform at regular intervals, similar to setting a utility bill to autopay.
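    A crontab line is five schedule fields (minute, hour, day of month, month, day of week) followed by the command to run. Annotated, the entry we are about to add reads:

    ```
    # minute  hour  day-of-month  month  day-of-week  command
    # */15 = every 15th minute; * = every possible value
    */15      *     *             *      *            /root/zfs_health_check.sh
    ```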

    Step 3: Create Cron Job[edit | edit source]

    1. Open root’s crontab:

      sudo crontab -e
    2. Add these lines:

      # ZFS Health Check - Run every 15 minutes
      */15 * * * * /root/zfs_health_check.sh >/dev/null 2>&1
      
      # Log rotation - Run daily at midnight
      0 0 * * * find /var/log/zfs-monitor -name "*.old" -mtime +7 -delete

    Step 4: Verify it works again, just because[edit | edit source]

    Run the script manually to ensure it works:

    sudo /root/zfs_health_check.sh

    Check Logs[edit | edit source]

    Monitor the log file for any issues:

    tail -f /var/log/zfs-monitor/health_check.log

    Make sure Cron Job is listed[edit | edit source]

    Verify that the cron job is correctly listed:

    sudo crontab -l

    Test Email Notifications[edit | edit source]

    1. Unplug a drive.
    2. Wait.
    3. Does an email come through?
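    If you’d rather not pull a cable yet, you can fake the failure in software first: `zpool offline` marks one member of the pool as offline (the pool goes DEGRADED, which the script flags on its next run), and `zpool online` brings it back and resilvers. The pool and device names below are placeholders; take yours from `zpool status`:

    ```shell
    POOL="mypool"   # placeholder: your pool name, per 'zpool status'
    DISK="sda"      # placeholder: one member device of that pool
    # Only attempt this where zfs exists and the named pool is real
    if command -v zpool >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
        sudo zpool offline "$POOL" "$DISK"   # pool goes DEGRADED; wait ~15 min for the cron run & email
        sudo zpool status "$POOL"            # confirm the simulated failure
        sudo zpool online "$POOL" "$DISK"    # undo it; ZFS resilvers the device automatically
    else
        echo "illustrative only: needs zfs and a pool named ${POOL}"
    fi
    ```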

    The monitoring system is now fully configured and will:

    • Check ZFS status every 15 minutes
    • Log all checks to /var/log/zfs-monitor/health_check.log
    • Automatically rotate logs when they reach 10MB
    • Send email alerts only when issues are detected
    • Clean up old log files after 7 days

    How to tell if you won:[edit | edit source]

    • ✓ Test email received
    • ✓ Script detects simulated issues
    • ✓ Cron job executes on schedule
    • ✓ Logs show proper entries
    • ✓ Alerts generated for pool degradation
    • ✓ System returns to normal after tests

    If you got an email, congrats, it works!

    Step 5: Set up OS RAID Array to email you when there’s a problem as well[edit | edit source]

    What we set up above is for your ARCHIVE storage. What about your operating system? We will do the same thing, and also go over a barbaric backup routine that works for me.

    5.1 Creating the alert script[edit | edit source]

    I’m not a programmer, so bear with me. This script is for my personal use, but I’m sharing it because it works. Here’s what you need to do:

    1. Edit Email Addresses: You’ll need to change the email addresses in the script. This includes:
      • The recipient email
      • The sender email
      • The reply-to address
      • The return path for bounced emails
    2. Script Location: Save the script at /root/mdadm_alert.sh
    sudo -u root nano -w /root/mdadm_alert.sh

    Enter the following:

    #!/bin/bash
    
    # thank you to stack overflow for giving me the courage to wade through 100s of posts and hack together something that looks like it works. 
    
    # stricter error handling
    set -euo pipefail  # ‘set -e’ exits on errors, ‘u’ throws errors on unset variables, & ‘pipefail’ exits if any part of a pipeline fails
    IFS=$'\n\t'  # Set IFS (Internal Field Separator) to newline & tab to avoid issues with spaces and other weird characters in filenames
    
    # Configuration variables (where settings are stored)
    EMAIL="[email protected]"  # Email to send alerts to - EDIT THIS 
    HOSTNAME=$(hostname)  # Pull the system's hostname dynamically and save it here
    LOG_DIR="/var/log/mdadm-monitor"  # Directory path for where logs go
    LOG_FILE="${LOG_DIR}/raid_health_check.log"  # Full path to the specific log file for RAID checks
    LOG_MAX_SIZE=$((10 * 1024 * 1024))  # Maximum log file size in bytes (10 MB here)
    
    # Email configuration for the alert message
    FROM_EMAIL="[email protected]"  # The email address that will appear as the sender - EDIT THIS
    FROM_NAME="Steve"  # name of the sender, EDIT THIS
    REPLY_TO="Steve <[email protected]>"  # Reply-to email address, EDIT THIS
    RETURN_PATH="[email protected]"  # Return path for bounced emails when email fails EDIT THIS
    
    # make empty variables & associated arrays 
    errors=""  # Empty variable to collect error messages
    drive_health_report=""  # Another empty variable to store drive health details
    declare -A RAID_ARRAYS  # array to keep track of RAID arrays we find, indexed by a name like "boot"
    declare -A SMART_SCORES  # array to store SMART scores for drives, indexed by drive path
    
    # Set up log directory and ensure permissions are correct
    setup_logging() {
        # Make the log directory if it doesn’t already exist
        mkdir -p "$LOG_DIR" || { echo "ERROR: Cannot create log directory $LOG_DIR"; exit 1; }  # Exit with error if I can’t make the directory
        chmod 750 "$LOG_DIR"  # Set directory permissions to allow owner & group access but not others
    
        # Check if the log file exists and exceeds the max size limit
        if [ -f "$LOG_FILE" ] && [ "$(stat -c%s "$LOG_FILE")" -gt "$LOG_MAX_SIZE" ]; then  # ‘stat -c%s’ gives the size in bytes
            mv "$LOG_FILE" "$LOG_FILE.old"  # Archive the old log file by renaming it
        fi
        touch "$LOG_FILE"  # Create an empty log file if it doesn’t exist
        chmod 640 "$LOG_FILE"  # Set permissions on the log file (read/write for owner, read for group)
    }
    
    # Function for logging messages w/ timestamps
    log_message() {
        local timestamp  # Make local variable for this
        timestamp=$(date '+%Y-%m-%d %H:%M:%S')  # Generate a timestamp in this specific format
        echo "[$timestamp] $1" | tee -a "$LOG_FILE"  # Output the message with the timestamp to both console & log file
    }
    
    # Function for logging errors (adds them to the error string and logs them as "ERROR")
    log_error() {
        local message="$1"  # Message passed to this function
        errors="${errors}\n$message"  # Append this message to the errors variable
        log_message "ERROR: $message"  # Log the error with a timestamp
    }
    
    # Check that required (commands) are installed on the system
    check_dependencies() {
        log_message "Checking required dependencies..."  # Announce the check in the log
        local missing_deps=()  # Initialize an empty array for any missing commands
    
        # Loop through each command we need, checking if it’s available
        for dep in mdadm smartctl lsblk findmnt awk grep dmsetup; do
            if ! command -v "$dep" &>/dev/null; then  # If the command is missing, add it to the array
                missing_deps+=("$dep")
            fi
        done
    
        # If the array of missing dependencies isn’t empty, log an error and exit
        if [ ${#missing_deps[@]} -ne 0 ]; then
            log_error "Missing required dependencies: ${missing_deps[*]}"  # Log missing commands
            log_error "Install them with: sudo apt-get install mdadm smartmontools util-linux findutils gawk grep dmsetup"
            exit 1  # Exit with error because we’re missing something we need (install whatever is listed if you’re seeing this)
        fi
    }
    
    # Find & detect RAID arrays on this system
    detect_raid_arrays() {
        log_message "Detecting RAID arrays..."  # Log that we’re looking for RAID arrays
    
        # Find all block devices with names like /dev/md0, /dev/md1 (these are RAID arrays like the one you made for the OS & boot)
        local md_devices
        md_devices=$(find /dev -name 'md[0-9]*' -type b)  # Save this list to the md_devices variable
    
        # Loop through each RAID array found and log its details
        for md_dev in $md_devices; do
            local array_detail  # Temporary variable for array details
            array_detail=$(mdadm --detail "$md_dev" 2>/dev/null) || continue  # Get RAID details; skip if it fails
    
            # Extract the RAID array name from the details
            local array_name
            array_name=$(echo "$array_detail" | grep "Name" | awk '{print $NF}')  # Last word on the "Name" line is the array name
    
            # Use the name to decide if this array is for boot or root, then add it to RAID_ARRAYS
            if [[ "$array_name" == *"bootraid"* ]]; then  # Array name contains "bootraid"
                RAID_ARRAYS["boot"]="$md_dev"  # Save the device path with the key "boot"
                log_message "Found boot array: $md_dev ($array_name)"  # Log the found boot array
            elif [[ "$array_name" == *"osdriveraid"* ]]; then  # Array name contains "osdriveraid"
                RAID_ARRAYS["root"]="$md_dev"  # Save the device path with the key "root"
                log_message "Found root array: $md_dev ($array_name)"  # Log the found root array
            fi
        done
    
        # Check if we actually found both root and boot arrays, and log an error if any are missing
        if [ -z "${RAID_ARRAYS["boot"]:-}" ] || [ -z "${RAID_ARRAYS["root"]:-}" ]; then  # If either key is empty
            log_error "Failed to detect both boot and root RAID arrays"  # Log a general error
            [ -z "${RAID_ARRAYS["boot"]:-}" ] && log_error "Boot array not found"  # Specific message if boot is missing
            [ -z "${RAID_ARRAYS["root"]:-}" ] && log_error "Root array not found"  # Specific message if root is missing
            return 1  # Return an error code
        fi
    
        # Print out a summary of all arrays found
        log_message "Detected arrays:"
        for purpose in "${!RAID_ARRAYS[@]}"; do
            log_message "  $purpose: ${RAID_ARRAYS[$purpose]}"
        done
    }
    
    # Check the health of a specific RAID array
    check_array_status() {
        local array="$1"  # The path of the array device
        local purpose="$2"  # Either "boot" or "root" to clarify which array this is
    
        # Verify that the array actually exists as a block device
        if [ ! -b "$array" ]; then
            log_error "$purpose array device $array does not exist"  # Log the missing device
            return 1  # Return error because we can’t check a nonexistent device
        fi
    
        # Get details about the RAID array and store it in the detail variable
        local detail
        detail=$(mdadm --detail "$array" 2>&1) || {  # ‘2>&1’ captures error output in case of issues
            log_error "Failed to get details for $purpose array ($array)"
            return 1  # Exit with an error code if it failed
        }
    
        # Extract the state of the array (like "clean" or "active") and log it
        local state
        state=$(echo "$detail" | grep "State :" | awk '{print $3,$4}')  # Get the words after "State :" from the details
        log_message "$purpose array status: $state"
    
        # If the array is in an undesirable state, log a warning
        if [[ "$state" =~ degraded|DEGRADED|failed|FAILED|inactive|INACTIVE ]]; then
            log_error "$purpose array ($array) is in concerning state: $state"
        fi
    
        # Detect failed devices within the array
        local failed_devices
        failed_devices=$(echo "$detail" | grep "Failed Devices" | awk '{print $4}')  # Pull the failed devices count
        if [ "${failed_devices:-0}" -gt 0 ]; then  # Default to 0 if the line was missing, so 'set -u' doesn't kill us
            while read -r line; do
                if [[ "$line" =~ "faulty" ]]; then  # If the line mentions "faulty"
                    local failed_dev
                    failed_dev=$(echo "$line" | awk '{print $7}')  # Get the 7th word (the device name)
                    log_error "$purpose array ($array) has failed device: $failed_dev"  # Log which device failed
                fi
            done < <(echo "$detail" | grep -A20 "Number" | grep "faulty")  # Look up to 20 lines after "Number" to find "faulty"
        fi
    
        # Check if any devices are rebuilding, and log it if they are
        if echo "$detail" | grep -q "rebuilding"; then
            while read -r line; do
                if [[ "$line" =~ "rebuilding" ]]; then  # Check for "rebuilding" in the line
                    local rebuilding_dev
                    rebuilding_dev=$(echo "$line" | awk '{print $7}')  # Get the device name being rebuilt
                    log_error "$purpose array ($array) is rebuilding device: $rebuilding_dev"  # Log the rebuilding device
                fi
            done < <(echo "$detail" | grep -A20 "Number" | grep "rebuilding")  # Again, look ahead 20 lines for any "rebuilding" mention
        fi
    }
    
    # Function to check the health of each drive within a RAID array
    check_drive_health() {
        local drive="$1"  # The drive device to check (e.g., /dev/sda)
        local health_score=100  # Initialize health score to 100 (a perfect score)
        local issues=""
    
        # Skip the check if it’s not a valid block device
        if [ ! -b "$drive" ]; then
            log_error "Device $drive is not a block device"  # Log the invalid device
            return 1  # Exit with an error code
        fi
    
        log_message "Checking health of drive $drive..."  # Announce which drive we’re checking
    
        # Run SMART health check and reduce health score if it fails
        if ! smartctl -H "$drive" | grep -q "PASSED"; then  # If it does NOT say "PASSED"
            health_score=$((health_score - 50))  # Drop score by 50 points if it fails
            issues+="\n- Overall health check failed"  # Log this specific issue
        fi
    
        # Collect SMART attributes for further checks
        local smart_attrs
        smart_attrs=$(smartctl -A "$drive" 2>/dev/null) || true  # Redirect error to /dev/null
    
        # Check for reallocated sectors (sign of drive wear and tear)
        local reallocated
        reallocated=$(echo "$smart_attrs" | awk '/^  5/ {print $10}')  # Look for attribute ID 5 in SMART data
        if [ -n "$reallocated" ] && [ "$reallocated" -gt 0 ]; then
            health_score=$((health_score - 10))  # Drop health score by 10 if we have reallocated sectors
            issues+="\n- Reallocated sectors: $reallocated"  # Add to issues list
        fi
    
        # Check for pending sectors (could cause read/write errors)
        local pending
        pending=$(echo "$smart_attrs" | awk '/^197/ {print $10}')  # Look for attribute ID 197 in SMART data
        if [ -n "$pending" ] && [ "$pending" -gt 0 ]; then
            health_score=$((health_score - 10))  # Drop health score by 10 if pending sectors are present
            issues+="\n- Pending sectors: $pending"  # Add to issues list
        fi
    
        SMART_SCORES["$drive"]=$health_score  # Save the final score in SMART_SCORES array
        if [ "$health_score" -lt 100 ]; then
            drive_health_report+="\nDrive: $drive\nHealth Score: $health_score/100\nIssues:$issues"  # Append issues to report if any were found
        fi
    }
    
    # Send email if any errors or health issues were found
    send_email() {
        local subject="RAID Alert: Issues Detected on ${HOSTNAME}"  # Set email subject line
        local content="RAID Health Monitor Report from ${HOSTNAME}\nTime: $(date '+%Y-%m-%d %H:%M:%S')\n"
        [ -n "$errors" ] && content+="\nRAID Issues:${errors}"  # Append RAID issues to the email content if any
        [ -n "$drive_health_report" ] && content+="\nDrive Health Report:${drive_health_report}"  # Append drive health report if any issues were found
    
        # Build the email using sendmail syntax
        {
            echo "Subject: $subject"
            echo "To: ${EMAIL}"
            echo "From: ${FROM_NAME} <${FROM_EMAIL}>"
            echo "Reply-To: ${REPLY_TO}"
            echo "Return-Path: ${RETURN_PATH}"
            echo "Content-Type: text/plain; charset=UTF-8"  # Text format for readability
            echo
            echo -e "$content"  # Use ‘-e’ to allow newline characters
        } | sendmail -t  # Pipe the entire email message to sendmail for delivery
    }
    
    # Main function to execute checks and send email if needed
    main() {
        # Make sure script is run as root for necessary permissions
        [ "$(id -u)" -ne 0 ] && { echo "ERROR: This script must be run as root"; exit 1; }
        setup_logging  # Call function to initialize logging setup
        log_message "Starting RAID health check"  # Announce the start of the health check
        check_dependencies  # Verify dependencies are available
        detect_raid_arrays  # Detect RAID arrays
    
        # Loop through each RAID array and check its status, then check each drive in the array
        for purpose in "${!RAID_ARRAYS[@]}"; do
            array="${RAID_ARRAYS[$purpose]}"
            check_array_status "$array" "$purpose"
    
            # For each device in the RAID array, check health
            while read -r device; do
                if [[ "$device" =~ ^/dev/ ]]; then
                    check_drive_health "$device"
                fi
            done < <(mdadm --detail "$array" | grep "active sync" | awk '{print $NF}')
        done
    
        # Send an email if errors or health issues were found; otherwise, log a success message
        if [ -n "$errors" ] || [ -n "$drive_health_report" ]; then
            send_email
        else
            log_message "All checks passed successfully"
        fi
    }
    
    # Execute the main function to start everything
    main  # Calls the main function, running all the checks

    Set permissions properly so it can run:

    sudo -u root chmod +x /root/mdadm_alert.sh

    5.2 Setting Up the Cron Job[edit | edit source]

    We want this script to run regularly. I am going to set it to run every 15 minutes.

    # Open the crontab editor
    sudo -u root crontab -e

    Add the following line to run the script every minute (for testing purposes):

    * * * * * /root/mdadm_alert.sh

    Note: For regular use, set it to run every fifteen minutes, with a line such as */15 * * * * /root/mdadm_alert.sh

    5.3 Testing the setup - software run first.[edit | edit source]

    Let’s simulate a fault condition on /dev/md126, which is what I set up as the RAID1 array for the operating system installation; this is where we created the logical volume for /.

    1. Check the status of it as it is now:
    sudo mdadm --detail /dev/md126
    2. If it shows up as healthy, run the script to make sure we do not have false positives.
    sudo -u root /root/mdadm_alert.sh
    3. If no false positives, simulate a fault condition:
    sudo mdadm /dev/md126 --fail /dev/sdb3

    /dev/sdb3 was the drive & partition that was used in my RAID array. Yours may differ; refer to the output of mdadm --detail to see how your RAID array is comprised, and then fail one of the two devices.

    4. Run the monitoring script to test again.
    sudo -u root /root/mdadm_alert.sh

    You should receive an email. Check spam.

    5. Undo what you did; un-fail the drive.
    sudo mdadm /dev/md126 --remove /dev/sdb3
    sudo mdadm /dev/md126 --add /dev/sdb3
    6. Watch it re-sync. Don’t mess with anything until it is fully resynced; a healthy mirror shows [UU] in /proc/mdstat, while a degraded or rebuilding one shows [U_] and a progress bar.
    watch cat /proc/mdstat

    5.4 Testing the setup for real - hardware fault.[edit | edit source]

    Now, let’s test this setup. Unplug one of the drives and see if you get a failure alert. Obviously, don’t do this after you start storing anything important on here. We do this in the build phase of our system to make sure it works, BEFORE trusting this system with anything important.

    1. Check the status of it as it is now:
    sudo mdadm --detail /dev/md126
    2. If it shows up as healthy, run the script to make sure we do not have false positives.
    sudo -u root /root/mdadm_alert.sh
    3. If no false positives, unplug the drive from the running system.

    /dev/sdb3 was the drive & partition that was used in my RAID array. Yours may differ; refer to the output of mdadm --detail to see how your RAID array is comprised, and then unplug one of the two devices.

    4. Run the monitoring script to test again.
    sudo -u root /root/mdadm_alert.sh

    You should receive an email. Check spam.

    5. Undo what you did; un-fail the drive after plugging it back in.
    sudo mdadm /dev/md126 --remove /dev/sdb3
    sudo mdadm /dev/md126 --add /dev/sdb3
    6. Watch it re-sync. Don’t mess with anything until it is fully resynced.
    watch cat /proc/mdstat

    Step 6: Backup Strategy[edit | edit source]

    Now, let’s talk about backups. It’s not enough to just have a RAID setup; you need a backup plan for when carelessness strikes.

    6.1 Backup Method[edit | edit source]

    Here’s my approach:

    • Physical Copy: I make a physical copy of my disk. This might seem old-school, but it works for me.

    Another approach:

    • LVM Snapshots: You can take an LVM snapshot and then use rsync to back up your data. This method can be hit or miss. I don’t use this.

    You can take a snapshot of your drive with LVM, rsync your files off of the drive elsewhere, reinstall the operating system, and rsync them back, but… what if some of your files are for older libraries, or programs/configuration files that have different syntax with different versions? It can become a rabbit hole to hell very easily, and I’m not going to begin to torture newbies with this.
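    For the curious, the snapshot-and-rsync route sketches out roughly like this. To be clear, this is illustrative, not something this guide sets up; the volume group (vg0), logical volume (root), and destination path below are all placeholders you would replace with your own:

    ```shell
    VG="vg0"    # placeholder volume group name
    LV="root"   # placeholder logical volume name
    # Only attempt this where LVM exists, we are root, and the volume group is real
    if command -v lvcreate >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ] && vgs "$VG" >/dev/null 2>&1; then
        lvcreate --snapshot --size 5G --name "${LV}snap" "/dev/${VG}/${LV}"   # freeze a point-in-time copy
        mkdir -p /mnt/snap
        mount -o ro "/dev/${VG}/${LV}snap" /mnt/snap
        rsync -aHAX /mnt/snap/ /path/to/backup/   # copy it off, preserving permissions & attributes
        umount /mnt/snap
        lvremove -f "/dev/${VG}/${LV}snap"        # snapshots cost write performance; remove when done
    else
        echo "illustrative only: needs lvm2, root, and a real volume group"
    fi
    ```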

    DDRescue is the tool I use to make a copy of my drive. I connect the drive via a USB 3 to SATA plug and create a backup. It’s best to do this to the same make/model of drive if possible.

    6.2 DDRescue Guide from Ubuntu Server Live Environment[edit | edit source]

    We’re going to boot from the same Ubuntu Server LiveUSB image you created to install Ubuntu Server onto the happycloud host machine.

    • Boot from the USB Drive
    1. Insert the USB drive into your server.
    2. Power on the server and enter the boot menu (usually by pressing F12 or another function key).
    3. Select the UEFI option for your USB drive.
    4. Choose to Try Ubuntu Server & do not install it.
    • Install ddrescue
    1. Update package list & install ddrescue:
    sudo apt update
    sudo add-apt-repository universe
    sudo apt install gddrescue
    2. Check Current Drives (BEFORE Plugging in Source):
    sudo fdisk -l

    Take note of the present drives.

    1. Connect Source Drive (an operating system solid state drive from the happycloud host machine; either of the two mirrored drives will do). Either connect it physically to an existing SATA/NVMe port, or use a USB-SATA or USB-NVMe enclosure if this makes it easier for you.

    2. Wait 5-10 seconds. Be patient.

    3. Check which drive it is. It will be the new drive that shows up. Make sure the model as well as the size & partitions matches what you are expecting.

      sudo fdisk -l
    4. Connect Target Drive (blank identical disk you are making into a backup drive)

    5. Wait 5-10 seconds. Be patient.

    6. Check which drive it is. It will be the new drive that shows up. Make sure the model as well as the size & partitions matches what you are expecting.

      sudo fdisk -l

    TRIPLE CHECK YOUR DEVICES

    # List all drives again
    sudo fdisk -l
    7. Run DDRescue:
    sudo ddrescue -f -d -r3 /dev/source /dev/target logfile.log

    For instance, if the source is /dev/sdc & target is /dev/sdd:

    sudo ddrescue -f -d -r3 /dev/sdc /dev/sdd logfile.log

    Option meanings:

    • -f : Force overwrite target
    • -d : Use direct disk access
    • -r3 : Number of retry attempts on bad sectors
    • logfile.log : Saves progress (can resume if interrupted)

    ⚠️ WARNING: ⚠️

    1. TRIPLE CHECK device names

      • Wrong device = destroyed data
      • Source and target reversed = destroyed source
    2. Target MUST be same size or larger than source

    3. Make sure you’re using whole drives:

      • /dev/sdc (correct, whole drive)
      • /dev/sdc1 (WRONG, just one partition)
    4. If unsure which is which, unplug/replug and watch:

      sudo dmesg | tail

      It will show new devices added to the Linux machine.
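    `lsblk` is also friendlier than `fdisk -l` for matching a device name to a physical disk. The column names below are standard `lsblk` output columns; TRAN shows the transport (usb vs sata), which makes a USB-SATA adapter easy to spot:

    ```shell
    LSBLK_COLS="NAME,MODEL,SIZE,SERIAL,TRAN"
    if command -v lsblk >/dev/null 2>&1; then
        lsblk -o "$LSBLK_COLS" || echo "lsblk could not read block devices here"
    else
        echo "lsblk not available"
    fi
    ```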

    IMPORTANT NOTE: Always have a physical copy of a known-working server solid state drive. If something goes wrong, you can quickly restore your system by plugging in the backup drive and be back up in 90 seconds or less.

    RAID Configuration Recommendations[edit | edit source]

    • For those who are extra cautious, consider running a RAID 1 setup with three drives instead of two. Here’s why:

      • Redundancy: When one drive fails, the others are likely not far behind. Having a third drive adds some padding.
      • Peace of mind: If you’re paranoid about data loss, this setup is a safer bet.

      If you wanted to avoid stressing the SSD, you could create a ZFS dataset on the ZFS pool of hard drives you set up for virtual machines, mount that as /var/lib/libvirt/images/, but I’ve gotten spoiled by the speed of SSDs - I don’t want to go back. I realize that writing to them a lot means killing them, and I’m ok with that. :)
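    If you did want to try that route, it’s roughly a one-liner. The pool name (tank) is a placeholder for whatever you called your pool, and note that the new mount would shadow anything already sitting in /var/lib/libvirt/images, so move existing images out first:

    ```shell
    POOL="tank"   # placeholder: your ZFS pool name
    if command -v zfs >/dev/null 2>&1 && zpool list "$POOL" >/dev/null 2>&1; then
        # Create a dataset that mounts itself where libvirt keeps disk images
        sudo zfs create -o mountpoint=/var/lib/libvirt/images "${POOL}/vmimages"
        sudo systemctl restart libvirtd   # so libvirt sees the new mount
    else
        echo "illustrative only: needs zfs and a pool named ${POOL}"
    fi
    ```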

    OS drive backup conclusion:[edit | edit source]

    Once everything is set up the way you like, shut down your system, remove one of the drives, and make a backup. Use a drive of equal or greater size for the backup. This way, if disaster strikes, you can restore your system in no time.

    We now have a simple & effective way to know when our operating system drive is about to die on us, so we can take action before anything horrible occurs. Best of all, if you set this up properly, you can have zero downtime & not even have to turn off the machine to get back up and running when a drive fails.

    Setting Up Immich: Google Photos/iCloud replacement[edit | edit source]

    What is immich?[edit | edit source]

    Immich is like Google Photos or iCloud, if you hosted it yourself, but better! It has the following features that make it stand out to me:

    Why Immich?[edit | edit source]

    Insanely fast[edit | edit source]

    Immich loads & scrolls through things on a Core i3 NUC with an old SATA drive faster than Nextcloud allowed me to on an i7-14700k with an NVMe SSD. It’s snappy even on slower computers & phones.

    Nextcloud made the experience of browsing through images & photos not on my phone so bad that I stopped doing it; a flagship phone, an i7-14700k, 64 GB of RAM, and a $400 SSD weren’t good enough to make this usable.

    Machine learning for image search[edit | edit source]

    I can type “cat on chair” and have every image of a cat on a chair show up. It actually works; it isn’t half-assed and full of false positives.

    Immich’s machine learning features & included libraries are also used for face detection. Immich can sort your images by people, so you can see every image with your dad, cousin, girlfriend, ex-girlfriend, etc.

    You can choose the model you want to use. The default model works best for me, but I appreciate Immich respecting my right to choose the model I want.

    Immich’s machine learning is done LOCALLY. Immich can be blocked from connecting to the internet and all machine learning & facial recognition will still work.

    When people hear the words “Artificial intelligence,” “cloud,” & “machine learning,” they hear buzzwords for processes which were supposed to be used for our benefit, but instead have become tools of data mining & abusive models. These are not bad things when they are done in a freedom-respecting way. I have no problem with machine learning algorithms going through all of my photos & videos & knowing the names of the people in my photos, because that information will never leave my computer.

    Easy proxies[edit | edit source]

    Immich supports video & image proxy files. Proxies are photos & videos that are further compressed. They are lower in quality, but their smaller size allows you to load them faster when you’re on a poor internet connection. I use a Google Pixel, so this is handy. Google Pixels have horrible cellphone service & reception because Google is too stubborn to use Qualcomm modems. Google decided that its users care more about lame AI features than having working cell service. This is where image proxies & video proxies come in handy.

    Nextcloud allows image proxies (with config file editing; ew). Immich allows both image AND video proxies, so high-bitrate videos can still be loaded & viewed on slow internet.

    Ease of use[edit | edit source]

    This program is so easy to use you’ll almost forget you’re using GNU/Linux. When I set up my Nextcloud instance, I had to edit config files to get thumbnails to work. Further, Nextcloud only allows image thumbnails, not video proxies. Not only is it more work with Nextcloud to get thumbnails & proxies so you have something that loads well on a slow connection - it’s not as functional. Everything here is doable within the web interface after installation, and it’s easy as can be.

    This program has the easiest installation & documentation I’ve found for this type of GNU/Linux software. It is useless for me to provide instructions here because following the Immich team’s instructions (https://immich.app/docs/install/docker-compose/) will work perfectly with no confusion. Immich is as good as Bitwarden with regards to “just working” out of the box & a big part of why I fell in love with their program.

    Prerequisites[edit | edit source]

    Before starting, ensure you have:

    • Docker Compose version 2.x installed (you should’ve done this setting up onlyoffice on this VM earlier)
    • Docker installed from the official Docker repository (you should’ve done this setting up onlyoffice on this VM earlier)
    • Enough storage space for the photos & videos from your phone
    • Did I mention not to install docker using snap from the Ubuntu installer? Don’t do that.

    Step 1: Install docker properly.[edit | edit source]

    NOTE: This step may not be necessary!

      • YOU DO NOT NEED TO PERFORM THIS STEP IF YOU INSTALLED DOCKER WHILE INSTALLING ONLYOFFICE. IF YOU INSTALLED DOCKER PRIOR TO INSTALLING ONLYOFFICE, SKIP THIS STEP! IF YOU DID NOT INSTALL ONLYOFFICE BECAUSE YOU DIDN’T WANT ONLYOFFICE, THAT MEANS YOU SKIPPED INSTALLING DOCKER AS WELL; IN WHICH CASE, YOU WILL NEED TO FOLLOW THESE INSTRUCTIONS.

    Never use Ubuntu’s snap version of docker[edit | edit source]

    Ubuntu’s installer offers to install docker by default using the cancerous snap. We do not want to use snap. When the Ubuntu installer asks if you want to install Docker, always say No.

    Doesn’t onlyoffice’s install script install docker for me?[edit | edit source]

    Onlyoffice’s installation script DOES install docker for you. I am still going to have you do it manually.

    • If you choose to not install onlyoffice, and wish to install Immich, I want you to know how to install docker on this virtual machine yourself.
    • I don’t want to rely on onlyoffice’s script. Its script skips installing docker if it detects Docker is already present, so installing manually first won’t cause a double install. What if onlyoffice’s installation script stops installing docker the same way in a new version, or stops installing docker at all within its script?

    It’s only a little work to install Docker the right way for our purposes manually, and it’s good to have it documented so that you can use docker for Immich even if you elect not to install onlyoffice.

    1.1 Update and upgrade your system[edit | edit source]

    sudo apt update && sudo apt upgrade -y
    sudo apt install curl git wget -y

    1.2 Check for other Docker installations:[edit | edit source]

    Run docker --version and see what is installed. Nothing should be installed yet since this is a fresh system. If something is installed, remove it.

    # Just in case you accidentally installed snap version of docker:
    
    sudo snap remove docker
    
    # For other versions of docker: 
    
    sudo apt remove docker docker-engine docker.io containerd runc

    1.3 Install Docker using official Docker script:[edit | edit source]

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    Note: It’s very important to use the official Docker installation and not the Snap version. The Snap version can cause issues due to its sandboxed nature, making it a mess for the requirements of software like Immich & mailcow. Docker snap makes me sad, and it’ll make you sad too if you try to make things work with it.

    1.4 Install Docker Compose:[edit | edit source]

    Ubuntu’s docker-compose-plugin is safe to use; it is not snap cancer.

    sudo apt install docker-compose-plugin -y
    sudo systemctl enable --now docker

    1.5 Verify the install[edit | edit source]

    Run docker compose version and make sure the version is 2.0 or higher. Run docker --version and make sure the version is 24.0.0 or higher.

    1.6 Set proper permissions:[edit | edit source]

    Docker needs to be run as root for some operations, but you can add your user to the docker group to avoid using sudo all the time. To be clear, mailcow’s own documentation and community suggest starting with root or sudo, and you should trust them more than me. To quote mailcow developers, “Controlling the Docker daemon as non-root user does not give you additional security. The unprivileged user will spawn the containers as root likewise. The behaviour of the stack is identical.” Run this command to add your user:

    sudo usermod -aG docker $USER

    Log out and log back in, or run: newgrp docker
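    To confirm the group change took effect, you can try running a container without sudo. This is a quick sanity check using Docker’s official hello-world test image:

```shell
# If the docker group change worked, this runs without sudo
# and prints a "Hello from Docker!" message.
docker run --rm hello-world
```

    If you still get a permission error about /var/run/docker.sock, you haven’t logged out & back in (or run newgrp docker) yet.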

    Step 2: Understand how this will be set up differently from the stock setup.[edit | edit source]

    2.1 How you’re supposed to use Immich[edit | edit source]

    Out of the box, Immich uploads your images & videos from your phone to the Immich server. You control your library on your phone & on your server in the Immich application.

    2.2 Syncthing conflict with Immich[edit | edit source]

    Didn’t we already set up syncthing to do this? Yes, we did!

    I don’t want to use Immich to sync my phone’s DCIM/Camera folder, and then syncthing for everything else. In my opinion, it doesn’t make sense to use Immich by itself to do this; Immich is for photos & videos, it is not for Music, Documents, & all the other folders on our phone. If we used syncthing for those files & folders, and used Immich for photos/videos, we’d have two applications running at the same time doing the same thing. This means 2 points of failure rather than 1.

    Syncthing was designed with one purpose in mind: transferring files from device to device. I would prefer to use a tool that was designed for the job.

    As a result, I am going to set up the ~/androidbackup/DCIM folder as an external library in Immich.

    2.3 Attaching ZFS pool to Immich[edit | edit source]

    Also, remember the giant ZFS pool we created? On my setup, that’s over 100 terabytes of stuff! Much of that is old images & videos that are not in my phone photo backup directory. I want to see those in Immich.

    We created a Samba share for our ZFS pool so we could access it from elsewhere. I am going to create a read only samba share that is mounted on ~/Pictures, and then set this up with Immich as a second external library.

    TL;DR - Immich will have access to everything stored in your ZFS pool archive as a photo library, as well as your android phone’s photos. This allows me to perform machine learning on everything; my android phone photo backups, current android phone photos, as well as all of my stuff from the past 15 years all within one piece of software.

    After this is done I will be able to use the search feature and find photos I forgot about within seconds, dating back 15 years. Awesome. :)

    QUESTION: Why do we want the ZFS pool share to be read-only?

    The androidstuff virtual machine that houses our syncthing backup of our android phone is going to be backed up regularly to our ZFS pool; we have a copy of that backed up every week. The entire ZFS pool, for me, is over 100 terabytes - so having version-controlled backups of it is much more difficult.

    As a result, I am personally much more protective of the data on my ZFS pool than I am of the data on my android phone backups.

    Step 3: Mount a read only samba share of the ZFS pool for Immich onto the androidstuff virtual machine[edit | edit source]

    We are going to do the following:

    1. On the happycloud host machine, create another samba share of our ZFS pool /mediapool/archive that is read only.
    2. Mount this inside the androidstuff virtual machine on ~/Pictures which is the Pictures subdirectory of my home directory. ~/ is shorthand for your home directory; in my case, ~/ is the same as /home/louis/

    3.1 Modify samba configuration on happycloud host machine[edit | edit source]

    SSH into the happycloud host machine:

    ssh [email protected]

    or

    ssh [email protected]

    Our /etc/samba/smb.conf file currently looks like this:

    [global]
        # Network settings
        workgroup = HOME
        realm = home.arpa
        netbios name = happycloud
        server string = ZFS Archive Server
        dns proxy = no
        
        # Security settings
        security = user
        map to guest = bad user
        server signing = auto
        client signing = auto
        
        # Logging
        log level = 1
        log file = /var/log/samba/%m.log
        max log size = 1000
        
        # Performance optimization
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        read raw = yes
        write raw = yes
        use sendfile = yes
        min receivefile size = 16384
        aio read size = 16384
        aio write size = 16384
        
        # Multichannel support
        server multi channel support = yes
        
        # Disable unused services
        load printers = no
        printing = bsd
        printcap name = /dev/null
        disable spoolss = yes
        
        # Character/Unix settings
        unix charset = UTF-8
        dos charset = CP932
    
    [archive]
        comment = ZFS Archive Share
        path = /mediapool/archive
        valid users = louis
        invalid users = root
        browseable = yes
        read only = no
        writable = yes
        create mask = 0644
        force create mode = 0644
        directory mask = 0755
        force directory mode = 0755
        force user = louis
        force group = louis
        veto files = /._*/.DS_Store/.Thumbs.db/.Trashes/
        delete veto files = yes
        follow symlinks = yes
        wide links = no
        ea support = yes
        inherit acls = yes
        hide unreadable = yes
        guest ok = no

    We are going to add something like this to the bottom of the /etc/samba/smb.conf file. Obviously, feel free to set the path folder to what YOU want Immich to see. This will be read-only, so if something goes wrong on the machine mounting it, you won’t lose everything.

    Use nano to edit the file:

    sudo nano -w /etc/samba/smb.conf

    Enter the following at the end. Hit enter so there’s a pretty little space before the new section. :)

    [immich]
        comment = ZFS Archive Share (Read-Only)
        path = /mediapool/archive
        valid users = louis
        browseable = yes
        read only = yes
        guest ok = no
        create mask = 0644
        directory mask = 0755
        veto files = /._*/.DS_Store/.Thumbs.db/.Trashes/
        delete veto files = yes
        follow symlinks = yes
        wide links = no
        ea support = yes
        inherit acls = yes
        hide unreadable = yes

    3.2 Configure the samba share on the androidstuff virtual machine[edit | edit source]

    We want this to mount each time the androidstuff virtual machine that will run Immich boots. To do this, we will edit /etc/fstab - the file that defines where hard drives, partitions, & network shares are mounted on the filesystem.

    We have to install the packages that allow us to mount samba shares:

    sudo apt install cifs-utils -y

    Edit the file:

    sudo nano -w /etc/fstab

    Add the following line:

    //192.168.5.2/immich /home/louis/Pictures cifs ro,credentials=/etc/samba_credentials,iocharset=utf8,vers=3.0 0 0

    Make sure that the IP address matches the IP address of the machine that you have your ZFS pool on.

    • //192.168.5.2 is the address of the computer that is running the samba server for our samba share.
      • immich is the name of the samba share.
      • In happycloud’s /etc/samba/smb.conf configuration file, the line path = /mediapool/archive is present under the [immich] share settings.
      • Therefore, //192.168.5.2/immich will show us /mediapool/archive on the machine located at 192.168.5.2.
    • cifs is the filesystem type. CIFS stands for Common Internet File System.
    • ro means read-only.
    • /etc/samba_credentials is the file that will house the username & password to access this share.
    • For the love of god, do not forget to set the proper permissions on the /etc/samba_credentials file when I tell you to.

    Once you’re done adding that line to the file, we need to provide it a username/password so it can log into the password protected share.

    # Create the credentials file that will house the username & password:
    sudo nano -w /etc/samba_credentials

    Add the username & password you set for your samba user to the file, in the following format:

    username=louis
    password=passwordman

    If you forgot what the samba password is for your user, refer to step 6.5 in the Setting up ZFS for data storage portion of the guide.

    Make sure that this file is not accessible by anyone besides root!

    sudo chown root /etc/samba_credentials
    sudo chmod 600 /etc/samba_credentials

    3.3 Set the permissions for samba credentials file[edit | edit source]

    Important enough to be worth stating again:

    sudo chown root /etc/samba_credentials
    sudo chmod 600 /etc/samba_credentials
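    You can verify both the owner and the mode in one shot with stat (the %U and %a format codes of GNU stat print the owner and the octal permissions):

```shell
# Should print: root 600
stat -c '%U %a' /etc/samba_credentials
```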

    3.4 Mount the samba share on the androidstuff virtual machine[edit | edit source]

    Run the following to mount everything in the /etc/fstab file, including your samba share.

    sudo mount -a
    sudo systemctl daemon-reload
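    To double-check that the share actually attached before testing it with files, findmnt will report on a given mount point:

```shell
# Shows SOURCE, TARGET, FSTYPE and OPTIONS for the mount point;
# FSTYPE should be cifs and the options should include "ro".
findmnt /home/louis/Pictures
```

    If findmnt prints nothing, the mount failed - recheck the /etc/fstab line and the credentials file.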

    3.5 Make sure it worked.[edit | edit source]

    In /home/louis/Pictures on the androidstuff virtual machine you should see everything that is on /mediapool/archive on the happycloud host server. Try making a file and saving it there. It shouldn’t work.

    Create a file on happycloud. Go to the terminal window for happycloud, or just ssh in if you don’t have one open.

    ssh [email protected]
    
    # Put a file called helloworld.log that says "hi" inside of it into the /mediapool/archive directory
    
    echo "hi" > /mediapool/archive/helloworld.log

    Then, on the androidstuff virtual machine, try to view it. We mounted this samba share on /home/louis/Pictures, so helloworld.log should show up at /home/louis/Pictures/helloworld.log

    louis@androidstuff:~$ cat helloworld.log
    louis@androidstuff:~$ cat ~/Pictures/helloworld.log 
    hi
    louis@androidstuff:~$ rm ~/Pictures/helloworld.log 
    rm: remove write-protected regular file '/home/louis/Pictures/helloworld.log'? y
    rm: cannot remove '/home/louis/Pictures/helloworld.log': Read-only file system
    louis@androidstuff:~$ sudo rm ~/Pictures/helloworld.log 
    [sudo] password for louis: 
    rm: cannot remove '/home/louis/Pictures/helloworld.log': Read-only file system
    

    As you can see, I can see the file, I can read the file, but I can’t delete the file. Perfect.

    Step 4: Make your directories[edit | edit source]

    4.1 Create Directory Structure[edit | edit source]

    I like to put the programs I am downloading/working on in ~/Downloads/programs (for me, /home/louis/Downloads/programs). The ~/ means your home directory: so if your username is chris, ~/Downloads/programs means /home/chris/Downloads/programs

    # Create and enter directory
    mkdir -p ~/Downloads/programs/immich-app
    cd ~/Downloads/programs/immich-app

    4.2 Download Program[edit | edit source]

    This is installed via docker and the installation files/instructions from Immich themselves are completely plug & play.

    # Get docker-compose.yml
    wget -O docker-compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
    
    # Get environment file
    wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env
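    It’s worth confirming both files actually downloaded before moving on - .env starts with a dot, so a plain ls won’t show it:

```shell
# List both files explicitly; -la also shows hidden files like .env
ls -la docker-compose.yml .env
```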

    4.3 Optional Hardware Acceleration Files[edit | edit source]

    I don’t use hardware acceleration since my machine does not have a GPU or any sort of Coral device. This is experimental as well, so it may give you issues. However, if you plan to use hardware acceleration, grab these files to set them up & follow the instructions from the Immich documentation.

    # For transcoding acceleration
    wget -O hwaccel.transcoding.yml https://github.com/immich-app/immich/releases/latest/download/hwaccel.transcoding.yml
    
    # For machine learning acceleration
    wget -O hwaccel.ml.yml https://github.com/immich-app/immich/releases/latest/download/hwaccel.ml.yml

    Step 5: Edit docker-compose.yml & Environment File[edit | edit source]

    5.1 Edit the .env file[edit | edit source]

    nano -w .env

    1. Database Setting

      • Change DB_PASSWORD. You should use characters from A to Z, a to z, and 0 to 9 - don’t use anything funky. I recommend the Bitwarden password generator.
        • You can use bitwarden password generator on their website without installing their program, but I suggest installing their program at some point.
    2. Upload Location

      • Set UPLOAD_LOCATION to where you want items you upload to immich to go.

        • I don’t use this because I use syncthing to upload things to /home/louis/androidbackup rather than uploading straight to Immich.

      • For the purposes of how I use Immich & this guide, I will not be changing this.

    3. Timezone

      • Uncomment and set the TZ= line to your timezone.
      • Find timezone codes here
      • For example, mine would be America/Chicago
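    Once you’ve made those edits, the relevant lines of your .env might look something like this. The values below are illustrative, not real - generate your own password:

```
# Illustrative .env values - substitute your own generated password
DB_PASSWORD=Illustrative0Password0ChangeMe
TZ=America/Chicago
```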

    5.2 Edit the docker-compose.yml file[edit | edit source]

    Open the file for editing:

    nano -w docker-compose.yml

    This file in its entirety is fine as is. Nothing has to be changed. The two lines I add are to allow immich access to the ~/Pictures directory where my ZFS pool’s files are located, and the ~/androidbackup/DCIM directory where the photos & videos I took using the camera app on my android phone are stored.

    The two lines I added to the file below are:

          - /home/louis/androidbackup/DCIM:/files/phonepics:rw
          - /home/louis/Pictures:/files/zfspics:ro

    These lines do the following:

    • Makes /home/louis/androidbackup/DCIM on the host computer Immich is running on show up as /files/phonepics inside the docker container for Immich, with read write permissions.
    • Makes /home/louis/Pictures on the host computer Immich is running on show up as /files/zfspics inside the docker container for Immich, with read only permissions.

    To see where I put these in the context of the full file, look below:

    #
    # WARNING: Make sure to use the docker-compose.yml of the current release:
    #
    # https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
    #
    # The compose file on main may not be compatible with the latest release.
    #
    
    name: immich
    
    services:
      immich-server:
        container_name: immich_server
        image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
        # extends:
        #   file: hwaccel.transcoding.yml
        #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
        volumes:
          # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
          - ${UPLOAD_LOCATION}:/usr/src/app/upload
          - /etc/localtime:/etc/localtime:ro
          - /home/louis/androidbackup/DCIM:/files/phonepics:rw
          - /home/louis/Pictures:/files/zfspics:ro
        env_file:
          - .env
        ports:
          - '2283:2283'
        depends_on:
          - redis
          - database
        restart: always
        healthcheck:
          disable: false
    
      immich-machine-learning:
        container_name: immich_machine_learning
        # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
        # Example tag: ${IMMICH_VERSION:-release}-cuda
        image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
        # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
        #   file: hwaccel.ml.yml
        #   service: cpu # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
        volumes:
          - model-cache:/cache
        env_file: 
          - .env
        restart: always
        healthcheck:
          disable: false
    
      redis:
        container_name: immich_redis
        image: docker.io/redis:6.2-alpine@sha256:2ba50e1ac3a0ea17b736ce9db2b0a9f6f8b85d4c27d5f5accc6a416d8f42c6d5
        healthcheck:
          test: redis-cli ping || exit 1
        restart: always
    
      database:
        container_name: immich_postgres
        image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
        environment:
          POSTGRES_PASSWORD: ${DB_PASSWORD}
          POSTGRES_USER: ${DB_USERNAME}
          POSTGRES_DB: ${DB_DATABASE_NAME}
          POSTGRES_INITDB_ARGS: '--data-checksums'
        volumes:
          # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
          - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
        healthcheck:
          test: pg_isready --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' || exit 1; Chksum="$$(psql --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' --tuples-only --no-align --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')"; echo "checksum failure count is $$Chksum"; [ "$$Chksum" = '0' ] || exit 1
          interval: 5m
          start_interval: 30s
          start_period: 5m
        command:
          [
            'postgres',
            '-c',
            'shared_preload_libraries=vectors.so',
            '-c',
            'search_path="$$user", public, vectors',
            '-c',
            'logging_collector=on',
            '-c',
            'max_wal_size=2GB',
            '-c',
            'shared_buffers=512MB',
            '-c',
            'wal_compression=on',
          ]
        restart: always
    
    volumes:
      model-cache:

    DOCKER CHEAT SHEET: going through the docker-compose.yml file for Immich

    This file sets up a bunch of containers (virtualized, minimalistic computers that run inside your computer) for the Immich photo gallery/library/machine learning & management system.

    1. name: immich This is the name of the overall Docker Compose project.

    2. services: This section lists all the containers (services) that make up the Immich application. Each service is a part of the overall program.

    3. immich-server: This is the primary backend service of Immich. It handles the main functions of the program like uploading, managing, & displaying photos.

    4. container_name: immich_server Custom name for the main Immich container so when you run docker ps -a to see what containers are running, you can see this one and know what it is for immediately. Sometimes while debugging things that are not working you may want to enter the environment of the virtual container (this is like sshing into your server, but into the virtual server that runs Immich), which you can do by typing docker exec -it immich_server bash - but to do that you need to know which container is which! This is where using sensible names comes into play.

    5. image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release} This tells it what Docker image to use for the backend. It pulls the latest stable version unless you’ve overridden IMMICH_VERSION in your .env file. Since Immich does not destroy their software with new releases, I am setting it to grab the latest version.

    6. volumes:

    • ${UPLOAD_LOCATION}:/usr/src/app/upload: Links the photo upload storage location from your system to the container. The path ${UPLOAD_LOCATION} is defined in the .env file. Whatever this is will show up inside the container at /usr/src/app/upload
    • /etc/localtime:/etc/localtime:ro: This makes the container use the same time as your computer’s time. The :ro makes it read-only so your computer can’t do what the characters in Predestination did. The only thing worse than using Google Photos is SPOILER ALERT having your machine send you back in time so you are an orphan who was its own mother like in Predestination. Still a decent time travel movie but it has nothing on Primer.
    • /home/louis/androidbackup/DCIM:/files/phonepics:rw: Maps a directory with phone pictures to /files/phonepics in the container. This is read-write (rw). So whatever is inside my /home/louis/androidbackup/DCIM directory on the androidstuff virtual machine running at 192.168.5.5 will show up inside the immich-server docker container under the directory /files/phonepics.
    • /home/louis/Pictures:/files/zfspics:ro: Maps a directory with other pictures to /files/zfspics in the container. This one is read-only (ro).

    7. env_file: Loads environment variables from the .env file, which centralizes configuration settings.

    8. ports:

    • '2283:2283': Maps port 2283 on your host system to port 2283 in the container. This allows you to access Immich’s server on your browser at http://192.168.5.5:2283 since we are installing this dockerized deployment of Immich to the androidstuff virtual machine located at 192.168.5.5

    9. depends_on: This lists the services this container depends on. redis and database must be running before the server starts. Don’t be scared by the word depends. It is included in “dependency”, but you’re using a docker image deployed by good developers; dependencies are no longer something to be afraid of :) I promise :)

    10. restart: always Automatically restarts the container if it crashes or if the system reboots. When you turn the system on immich will be on without having to go to its directory & run docker compose up -d each time the computer starts.

    11. healthcheck: Monitors the container’s health. The disable: false line means health checks are enabled.

    12. immich-machine-learning: This container handles machine learning tasks, like face or object recognition (searching for “cat on chair”) in your photos.

    13. container_name: immich_machine_learning Custom name for the machine learning container so it is easy to find when you type docker ps -a, and so you can enter it with docker exec -it immich_machine_learning bash while debugging.

    14. image: Pulls the machine learning image from GitHub. You can enable hardware acceleration by adding a specific tag (e.g., -cuda) if supported by your system.

    15. volumes:

    • model-cache:/cache: Links a Docker-managed volume to the container’s /cache directory for storing machine learning model data.

    16. env_file: Loads environment variables from .env for consistent configuration. For instance, instead of editing certain configuration files after or while setting up/compiling the program, you put them in the environment file and when the docker container starts, it uses what is in the environment file.

    17. restart: always The container restarts if it crashes & will start up with the computer.

    18. healthcheck: Keeps the container healthy and ensures it’s running properly.

    19. redis: Redis is a high-speed database used for caching data and managing background tasks.

    20. container_name: immich_redis Custom name for the Redis container so it is easy to find when you type docker ps -a, and so you can enter it with docker exec -it immich_redis bash while debugging.

    21. image: Specifies the exact Redis image to use, including a SHA256 checksum for security.

    22. healthcheck: Runs a simple test (redis-cli ping) to confirm the Redis service is working.

    23. restart: always Automatically restarts Redis if it fails/it starts with the computer.

    24. database: This is the PostgreSQL database, which stores metadata and application data for Immich.

    25. container_name: immich_postgres Custom name for the database container so it is easy to find when you type docker ps -a, and so you can enter it with docker exec -it immich_postgres bash while debugging.

    26. image: Specifies a custom PostgreSQL image with vector support, used by Immich for advanced search features.

    27. environment: - POSTGRES_PASSWORD: Password for the database. - POSTGRES_USER: Username for the database. - POSTGRES_DB: Name of the database. - POSTGRES_INITDB_ARGS: Additional arguments for database initialization.

    28. volumes:

    • ${DB_DATA_LOCATION}:/var/lib/postgresql/data: Maps the database storage location from your system to the container. Edit ${DB_DATA_LOCATION} in the .env file to change where your database files are stored.

    29. healthcheck: Runs periodic checks to ensure the database is healthy. It verifies that the database is running, accessible, and free of checksum errors.

    30. command: Customizes PostgreSQL’s behavior with specific options, like enabling vector indexing (shared_preload_libraries=vectors.so) & improving performance with optimized settings like max_wal_size=2GB.

    31. restart: always Makes database container restart if something goes wrong/it starts with the computer.

    volumes

    32. volumes: - model-cache: A named volume for storing machine learning models. This ensures that cached data persists across container restarts or recreations.
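The container names in this file matter in practice. A quick sketch of how you’d use them while debugging, assuming the Immich stack from this compose file is running on your server:

```shell
# List every container, running or not, with its name and status
docker ps -a

# Open a shell inside the redis container (the name set by container_name above)
docker exec -it immich_redis bash

# Or run a one-off command without an interactive shell,
# e.g. the same ping the healthcheck uses
docker exec immich_redis redis-cli ping
```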

    Step 5: Start the System[edit | edit source]

    While in the directory you downloaded the docker-compose.yml and .env file to, run the following:

    docker compose up

    I like to type docker compose up at first without the -d so I can see what is happening without having to tail a logfile somewhere. If you don’t care to do that, you can start it with the -d, which detaches it so the program keeps running after you close the terminal window you ran the command in.

    docker compose up -d
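If you do start it detached with -d, you can still watch the output whenever you like; these are standard docker compose subcommands, run from the same directory as the docker-compose.yml:

```shell
# Follow the logs of every service in this compose file
# (Ctrl-C stops watching, not the containers)
docker compose logs -f

# Or follow just one service, by the name it has in docker-compose.yml
# (immich-server here is an example service name from Immich's compose file)
docker compose logs -f immich-server
```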

    Visiting Immich web interface: at this point you should be able to visit http://192.168.5.5:2283 or http://androidstuff.home.arpa:2283 and see Immich, in all its glory :)

    If it doesn’t work:[edit | edit source]

    1. Wrong Docker Version If you get unknown shorthand flag: 'd' in -d, you’re likely using the wrong Docker version. Fix by:
      • Removing the distribution’s docker.io package. If you used snap, I will hurt you.
      • Installing Docker from the official repository.
      • If you installed the snap version of docker when you installed ubuntu server, after all the times I told you not to in the past 1000 pages of this guide… you asked for this.
    2. Docker Compose Command
      • Use docker compose (not docker-compose)
      • Installing from Docker official repository is required here. You saw how to do this in the onlyoffice setup section on this virtual machine.

    Step 6: Configure Immich[edit | edit source]

    Once it’s started it’ll ask you to set up a username and a password. Once that’s done, we have a few tasks to complete.

    6.1 Set up your android backup & zfs pool as libraries in Immich[edit | edit source]

    This is necessary so you can see your files.

    1. Click the circle in the upper right corner that has the first letter of your username.

    2. Click Administration

    3. Click External Libraries on the left menu

    4. Click on the plus or on the Create Library button in the upper right to create a library.

    5. Create two libraries and set yourself as the owner of each.

    6. Click on the three dots next to the library.

    7. Click Rename

    8. Name each library - (e.g. zfs pool and android phone.)

    9. Click the three dots again and click Edit Import Paths

    10. Set each external library to have the path we chose above for our zfs pool and our android phone backup. - /home/louis/androidbackup/DCIM:/files/phonepics:rw - /home/louis/Pictures:/files/zfspics:ro

    11. Once done with this, go back to Settings in the left hand menu

    12. Go to Video Transcoding Settings

    13. If you want video proxies created so you are watching lower bitrate files when you load Immich (useful if you use this on a phone with bad internet speeds), change Transcode policy to All videos

      NOTE: Transcoding videos doesn’t delete the original. It creates new videos in a subfolder of the immich-app directory. The original video file is preserved in full quality in its original location.

    14. If you have a fast computer, or lots of patience, set Preset to fast - this will make video files that are smaller for the same quality as ultrafast. For Constant Rate Factor, higher is smaller file/worse quality, lower number is larger file/better quality. If you are making video proxies because your internet service sucks I’d set this to 28.

    15. In Settings, go over to External Library

    16. Under Library Watching enable Watch external libraries for file changes

    17. Under Periodic Scanning, make sure this is turned on. I would make this something frequent; perhaps once an hour. Remember, since we are not using the Immich app to upload the photos to Immich, Immich will not know whether we have added files unless it scans.

    18. On the left hand menu, go over to Jobs.

    19. Next to LIBRARY, click the ALL button.

    20. Wait patiently.

    21. You’re done. :)
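A side note on the transcoding settings from steps 13–14 above: Immich drives ffmpeg under the hood, so you never run it yourself, but as a rough sketch of what preset & CRF mean (hypothetical filenames, assuming the default h264 codec):

```shell
# -preset fast : slower to encode than ultrafast, but smaller files at the same quality
# -crf 28      : higher CRF = smaller file / worse quality; lower CRF = larger / better
ffmpeg -i original.mp4 -c:v libx264 -preset fast -crf 28 proxy.mp4
```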

    Step 7: Enjoy Immich[edit | edit source]

    Once the Jobs tab shows that Immich is done processing everything, head over to the homepage, and try the search box. It’s awesome.

    Step 8: Install Android App[edit | edit source]

    8.1 Install the F-Droid store app[edit | edit source]

    Download F-Droid from the F-Droid website and then open the apk to install it. F-Droid allows you to download all sorts of interesting open source apps.

    8.2 Install Immich[edit | edit source]

    Find & install Immich

    8.3 Start Immich[edit | edit source]

    When you start Immich, in the Server Endpoint URL field, put the same thing you put in your web browser to connect: http://192.168.5.5:2283 or http://androidstuff.home.arpa:2283

    Don’t forget to put the port. Also, this will only work on local wifi or with your VPN on from your smartphone. Make sure you are connected to wifi or are connected to the VPN!

    Notes on upgrades/updates:[edit | edit source]

    • “Breaking changes” are when an old version of Immich will not work properly after you update to a new version of immich.
    • Review the release notes to see if this is the case with your version. This is something that is being worked on so it won’t happen in the future. Alex is great about informing users of these changes.

    Update Process[edit | edit source]

    To upgrade to a new version, go to the directory with Immich, in our case, ~/Downloads/programs/immich-app. Turn Immich off, pull the new version, and then turn it on again. I suggest having a backup of everything before doing this. Doing perfect VM backups will be in the next section.

    cd ~/Downloads/programs/immich-app
    docker compose down
    docker compose pull
    docker compose up -d

    Nextcloud Notes to replace Google Keep[edit | edit source]

    For most intents & purposes, nextcloud is horrible. It does one thing right for me: notes. Plaintext or markdown notes.

    I live my life on a schedule where my day is mapped out in 5 to 15 minute increments, and that schedule is constantly changing. I discussed this in a video 11 years ago. Throughout the day I am constantly opening my notes application & hitting the voice-to-text button so I can talk into my phone before I forget what I wanted to type or do. Sometimes, in the middle of the note, I forget what I wanted to jot down and will speak out something that resembles the idea I hope I remember later.

    I need my notes. I need them to be easily accessible, available as either lists or as post-it-notes in the style of google keep. I need the notes application and the web interface to be easily accessible without having to install extra stuff on my computer if I don’t want to. I need the interface to be as simplistic & uncluttered as possible. More options = more chances for confusion for someone who needs a notes application (or physical notepad) to not forget what he is doing constantly.

    Nextcloud’s interface does that for me. It mimics google keep’s functionality and is the closest spot-on thing to it I’ve found.

    If you want something that is well programmed, forget about this. Go install Joplin. I use nextcloud notes because the interface & ease of use/deployment is worth it for me. I have played around with joplin. It’s obviously better coded software; but the phone application interface isn’t it for me, and I don’t want to go hunting for a client that will at best provide me the same experience I already have.

    I am a single user loading plain text files. As bad as nextcloud is, it can’t mess that up. Well, maybe it can - but it hasn’t for me yet.

    Follow these steps to deploy Nextcloud on your server (IP: 192.168.5.5) with Docker Compose. This setup is restricted to clients within the 192.168.5.0/24 and 192.168.6.0/24 subnets.

    Installing Nextcloud for notes[edit | edit source]

    We install Nextcloud notes via docker. We will install ONLY the notes component when we enter the web interface, so that the least amount of nextcloud is on our system as is necessary.

    Step 1: SSH into the androidstuff virtual machine computer[edit | edit source]

    ssh [email protected]

    OR

    ssh [email protected]

    Step 2: Install docker[edit | edit source]

    2.1 Verify Docker installation:[edit | edit source]

    IF YOU ELECTED TO INSTALL IMMICH OR ONLYOFFICE ON THIS VIRTUAL MACHINE, THIS PART IS ALREADY DONE & YOU CAN SKIP TO STEP 3![edit | edit source]
      • If you installed onlyoffice or immich on the androidstuff virtual machine, & followed the instructions for it, you already installed docker properly on this virtual machine, and have no need to do this again. Skip to step 3 if that is the case.

    Run docker --version and make sure the version is 24.0.0 or later. If not, remove the old version:

    sudo apt remove docker docker-engine docker.io containerd runc

    2.3 Install Docker using official Docker script:[edit | edit source]

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh

    Note: It’s very important to use the official Docker installation and not the Snap version. The Snap version can cause issues due to its sandboxed nature, making it a mess for our requirements. It is bad for our purposes; don’t use it.
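If you’re not sure whether the Snap version snuck onto your system (Ubuntu’s installer likes to offer it), you can check for it and remove it. This assumes snap itself is present, and is harmless if docker isn’t listed:

```shell
# See if docker was installed via snap
snap list | grep -i docker

# If it shows up, remove it before installing from the official repository
sudo snap remove docker
```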

    2.4 Install Docker Compose & prerequisites:[edit | edit source]

    sudo apt install docker-compose-plugin -y
    sudo systemctl enable --now docker

    2.5 Make sure it worked[edit | edit source]

    Run docker compose version and make sure the version is 2.0 or higher.

    Step 3: Install nextcloud using docker[edit | edit source]

    3.1 Create directory to store Docker Compose file & volumes:[edit | edit source]

    mkdir -p ~/nextcloud && cd ~/nextcloud

    3.2 Copy your docker-compose.yml file into this directory or create it:[edit | edit source]

    nano docker-compose.yml

    Paste the content below:

    services:
      db:
        image: mariadb:10.11
        restart: always
        command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
        volumes:
          - db:/var/lib/mysql
        environment:
          - MYSQL_ROOT_PASSWORD=rootpasswd
          - MYSQL_PASSWORD=dbpasswd
          - MYSQL_DATABASE=nextcloud
          - MYSQL_USER=nextcloud
    
      redis:
        image: redis:alpine
        restart: always
    
      app:
        image: nextcloud
        restart: always
        ports:
          - 8089:80
        depends_on:
          - redis
          - db
        volumes:
          - nextcloud:/var/www/html
        environment:
          - MYSQL_PASSWORD=dbpasswd
          - MYSQL_DATABASE=nextcloud
          - MYSQL_USER=nextcloud
          - MYSQL_HOST=db
          - NEXTCLOUD_TRUSTED_DOMAINS=192.168.5.5 192.168.5.0/24 192.168.6.0/24
    
    volumes:
      nextcloud:
      db:

    Save and exit the file.
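Before starting anything, you can ask docker to parse the file you just saved; YAML is picky about indentation, and this catches typos before they turn into confusing container errors. Run it from the ~/nextcloud directory:

```shell
# Prints the fully-resolved configuration if the file is valid,
# or an error pointing at the broken line if it is not
docker compose config
```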


    DOCKER CHEAT SHEET: going over the docker-compose.yml for nextcloud

    This file sets up three services (containers): one for the Nextcloud app, one for the database (MariaDB), & one for caching (Redis). Let’s go through it line by line so you understand what’s going on.

    1. services: This section lists the containers (services) that make up the Nextcloud deployment. Each container plays a specific role in the overall application.

    Database (db)

    2. db: This is the MariaDB database container. MariaDB is a database similar to mysql database. It’s where nextcloud stores info on users, settings, files, etc.

    3. image: mariadb:10.11 This tells Docker to use the MariaDB 10.11 image. It’s a specific version of MariaDB that ensures compatibility with the version of Nextcloud you’re running. This is why docker is awesome; this just pulls the right version of the right program. You don’t have to worry about this. The maintainers of the software provide template docker-compose.yml files that rarely need more than minimal adjustment to work for your needs. No dependency rabbit hole to hell.

    4. restart: always Makes the database container restart automatically if it crashes or when the system reboots, and has it start up when you turn on the virtual machine (or computer, if you are installing directly onto the host machine)

    5. command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW Customizes how MariaDB runs: - --transaction-isolation=READ-COMMITTED: Prevents dirty reads, ensuring reliable database transactions. - --log-bin=binlog: Enables binary logging for replication (useful for backups or scaling). - --binlog-format=ROW: Logs changes at the row level for better replication accuracy.

    6. volumes:

    • db:/var/lib/mysql: Maps the container’s /var/lib/mysql directory (where the database stores its files) to the db volume. This makes data persist even if the container is removed or restarted, as it is stored to a volume (remember, containers are like linux livecds; nothing is saved when you reboot them)

    7. environment: These environment variables configure MariaDB: - MYSQL_ROOT_PASSWORD=rootpasswd: Sets the root password for MariaDB. - MYSQL_PASSWORD=dbpasswd: Password for the nextcloud user, who will access the database. - MYSQL_DATABASE=nextcloud: Creates a database named nextcloud during container setup. - MYSQL_USER=nextcloud: Creates a database user named nextcloud.

    Redis (redis)

    8. redis: This is the Redis container which is a caching system that speeds up Nextcloud by temporarily storing frequently used data. “Speeds up” in the theoretical sense. Nothing speeds up nextcloud.

    9. image: redis:alpine Specifies the Redis image to use. The alpine tag uses a lightweight version of Redis for minimal resource usage.

    10. restart: always Automatically restarts the Redis container if it crashes or when the system reboots.

    Nextcloud Application (app)

    11. app: This is the main container for the Nextcloud application. It provides the web interface and handles user requests.

    12. image: nextcloud Tells Docker to use the official Nextcloud image.

    13. restart: always Ensures the Nextcloud container restarts if it crashes or when the system reboots.

    14. ports:

    • 8089:80: Maps port 80 in the container (Nextcloud’s default web server port) to port 8089 on the host. You’ll access Nextcloud in your browser at http://192.168.5.5:8089 since this is being set up on the androidstuff virtual machine.

    15. depends_on: Ensures that redis and db containers start before the Nextcloud container. Without this, Nextcloud would crash while waiting for its database and caching system.

    16. volumes:

    • nextcloud:/var/www/html: Links the container’s /var/www/html directory (where Nextcloud’s files live) to the nextcloud volume. This ensures Nextcloud’s data persists even if the container is recreated.

    17. environment: Configures the Nextcloud container with the following environment variables: - MYSQL_PASSWORD=dbpasswd: Matches the database user’s password set in the db service. - MYSQL_DATABASE=nextcloud: Specifies the name of the database created in the db service. - MYSQL_USER=nextcloud: Specifies the database user created in the db service. - MYSQL_HOST=db: Tells Nextcloud where to find the database (the db service within this docker-compose.yml). - NEXTCLOUD_TRUSTED_DOMAINS=192.168.5.5 192.168.5.0/24 192.168.6.0/24: Lists IP addresses or subnets that are allowed to access the Nextcloud instance. I want nextcloud to be accessible when I am on my LAN which is the same network as nextcloud, and I also want it to be accessible when I am connecting to my home server using my VPN, so I have put my LAN of 192.168.5.0/24 & my VPN network of 192.168.6.0/24

    Volumes

    18. volumes: Defines persistent storage for Nextcloud and MariaDB: - nextcloud: Stores Nextcloud’s files. - db: Stores MariaDB’s database files.

    FINAL NOTE: This docker-compose.yml file sets up a fully functional Nextcloud deployment with three containers working together: - MariaDB (db): Handles data storage for Nextcloud. - Redis (redis): Speeds up Nextcloud by caching frequently used data. - Nextcloud (app): Provides the web interface and file management. The volumes ensure your data persists, and the environment variables make configuration easy. By using this file, you avoid dependency hell and can back up your Nextcloud setup easily by saving the volumes and docker-compose.yml file.

    3.4 Deploy the Containers[edit | edit source]

    Run Docker Compose to start nextcloud:

    docker compose up -d

    Access Nextcloud for first time[edit | edit source]

    Visit http://192.168.5.5:8089 in your web browser to complete the setup. Don’t enable ANY application when asked besides notes! Click onto the notes tab at the top to experiment with notes.
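If the page doesn’t load, here’s a quick way to see what’s going on, run from the ~/nextcloud directory:

```shell
# Show the state of the three containers defined in this compose file
docker compose ps

# Nextcloud's first start can take a minute or two; watch the app container's log
docker compose logs -f app
```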

    Installing Nextcloud Android App[edit | edit source]

    I use nextcloud notes from my phone all the time. It is one of my favorite ways of getting random things I type/copy & paste/dump onto my desktop into my phone & vice versa. Here’s how to install the Nextcloud app on your phone and connect it to your server.



    Step 1: Install the Nextcloud App[edit | edit source]

    1. Open the Google Play Store (or F-Droid store).
    2. Search for “Nextcloud” and install the official app by “Nextcloud”.
    3. Once installed, open it.
    4. I hope this part is self explanatory by now.



    Step 2: Add Your Server[edit | edit source]

    1. On the app’s welcome screen, tap “Log in”.
    2. Enter your server address:

    http://192.168.5.5:8089

      • (Make sure your phone is connected to the same network as your server.
      • If not, connect to your VPN using the OpenVPN application we set up.)
    3. Tap “Next” & wait for the app to verify the server connection. It might take a while; this is nextcloud, after all.



    Step 3: Log In[edit | edit source]

    1. Enter the username & password you created during the first step of accessing Nextcloud’s web interface from the web browser on your desktop earlier.
    2. Tap “Log in”.
    3. Allow the application the permission it asks for to access your nextcloud account.



    Step 4: Enable Notes Synchronization[edit | edit source]

    1. Once logged in, you’ll see a list of notes.



    You’re done. You can write down your notes on your phone & they’ll sync instantly with your server at home. You can make it look like google keep if you want. It just makes sense right out of the box with a very intuitive user interface and doesn’t try to add a bunch of stuff I don’t need/want. It works. Even though it’s nextcloud; it works. :) If the lack of https/ssl bothers you, feel free to follow the instructions from the frigate part of the guide that goes over setting up nginx as a reverse proxy so you can use ssl. If you are using onlyoffice on port 443, you’ll have to choose a different port for nextcloud, but that’s fine. You’d visit https://192.168.5.5:444 to get to nextcloud instead of https://192.168.5.5 - you’ll live!

    Setting Up trusted & untrusted WiFi with TP-Link EAP610 & pfSense[edit | edit source]

    Step 1: Understanding the problem. Why do this?[edit | edit source]

    Let’s say there’s a device on your network you don’t trust. You want to use it, but you don’t trust it. Exhibit A, a Chinese security camera. Hikvision makes good, cheap cameras; but my government tells me I shouldn’t trust them, and I listen to & believe everything that my government tells me.

    I will want to limit its access to the internet, and other machines. Let’s say it connects via wifi.

    You can block it from connecting to the internet by its IP - but what if it tries to change its IP? You could create a static mapping in pfSense based on its MAC address, but what if it spoofs its MAC address? If this device were truly malicious, it could do the following:

    • Spoof its MAC address to get around a static mapping
    • Try to connect using every single IP address
    • See if it eventually finds an IP address in that subnet that allows it to go online & connect to other networks/devices
    • Upload audio recordings of you saying you had a celebrity crush on Sabrina Carpenter, or that you cry listening to Tori Amos’ Baker Baker. Where’d your reputation be then?

    If you want to be more stringent with this - if you genuinely believe your refrigerator is out to get you by recording your intimate moments & blackmailing you with them (it’s probably not), we can make a separate network for them.

    We’ll create two separate networks:

    • Main Network: 192.168.5.0/24 for trusted devices (we’ve already created this)
    • Guest Network: 192.168.7.0/24 for untrusted devices (needs to be created)

    Note: This is not a normal wifi access point. It is an enterprise level device that allows seamless switching between multiple access points, so that if you have a giant area you never lose your connection or connection strength. The downside is that this isn’t as simple as a standard wifi router; this isn’t your linksys wrt54g from 2005 that you configure by typing 192.168.1.1 and entering admin for the user & password. You need to install controller software to use it; and it’s worth it. Access points like the eap610 can be found used on ebay in liquidation sales for $45, which is cheaper than a lot of wifi routers.

    Our LAN subnet, where our servers & computers connect, is 192.168.5.0/24, meaning that clients connecting here can grab from 192.168.5.2 to 192.168.5.254 - 192.168.5.1 is taken by the router.

    Our OpenVPN subnet that we connect to when we use our VPN is 192.168.6.0/24, meaning that clients that connect here can grab from 192.168.6.2 to 192.168.6.254 - 192.168.6.1 is taken by the VPN gateway.

    Here we’re going to create 192.168.7.0/24 as another subnet.

    If you connect to the trusted wifi, you get the 192.168.5.0/24 network. If you connect to the untrusted wifi, you get the 192.168.7.0/24 untrusted network.

    When we set up OpenVPN, pfSense created a firewall rule automatically that allowed the VPN subnet of 192.168.6.0/24 to connect to everything. We will do the opposite for this network. We can create a rule that blocks all traffic TO and FROM the 192.168.7.0/24 network. Then, we can create specific allow rules for the very specific devices we want it to connect to. If it’s a thermostat, we allow it a connection to & from to 192.168.5.4, our home assistant machine. If it is a camera, we allow it a connection to & from 192.168.5.2, our frigate machine.

    It doesn’t matter if the device spoofs its MAC address to get around a static mapping at this point. It doesn’t matter if it tries to grab every single IP address on the subnet - because NOTHING on 192.168.7.0/24 is allowed to connect to anything anyway. So, it’s stuck.
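To make the /24 talk above concrete: a /24 just means the first three octets name the network & only the last octet identifies the host, so checking membership is simple. This is purely an illustrative sketch of the logic pfSense applies for you; the helper function is hypothetical:

```shell
# Hypothetical helper: true when two IPv4 addresses sit in the same /24,
# i.e. their first three octets match. "${1%.*}" strips the last octet.
in_same_24() {
  [ "${1%.*}" = "${2%.*}" ]
}

in_same_24 192.168.7.15 192.168.7.1 && echo "camera is on maliciouswifi"
in_same_24 192.168.5.2  192.168.7.1 || echo "frigate box is on a different network"
```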

    This is more “secure” if your threat model includes a thermostat with a hidden microphone in it connected to your wifi, that might want to get around being blocked from phoning home.

    This is what VLANs are for.

    Step 2: PfSense Configuration Guide for Trusted & Untrusted Networks[edit | edit source]

    We want to have two separate networks; but we are using one cable to connect the switch to our wifi access point. We do that with VLANs, which are “virtual” LANs. Each packet we send is going to have a tag on it that says which LAN it belongs to. The switch, & in this case the wifi access point, will use this tag to direct the traffic to the correct virtual LAN.

    Each of our wifi clients will be connecting to a LAN. The trusted wifi network will connect to the standard 192.168.5.0/24 LAN, and the untrusted to a 2nd network we create on 192.168.7.0/24

    2.1 Create VLANs[edit | edit source]

    1. Navigate to: Interfaces > Assignments > VLANs
    2. Click “Add” to create first VLAN:
      • Parent Interface: Select your LAN interface (usually igb0 or em0)
      • VLAN Tag: 7
      • Priority: leave blank
      • Description: “maliciouswifi”
      • Click “Save”

    2.2 Create Network Interfaces[edit | edit source]

    1. Go to: Interfaces > Assignments
    2. From the “Available network ports” dropdown:
      • Select the VLAN 7 interface and click “Add”
      • Note the name assigned (typically OPT1)
      • Name this maliciouswifi

    2.3 Set IP range of new interface[edit | edit source]

    1. Go to: Interfaces > MALICIOUSWIFI
    2. In “General Configuration” set the following options:
      • Set “Description” to maliciouswifi
      • Set “IPv4 Configuration Type” to Static IPv4
      • Set “IPv6 Configuration type” to None.
        • If you have a reason to use IPv6, you are probably a network administrator for the world trade tower or a mall or something & aren’t reading this guide anyway.
    3. In “Static IPv4 Configuration” set the following options:
    • “IPv4 Address” to 192.168.7.1
      • Set the slash thingie at the end to /24 - this means we get the entire range from 192.168.7.2 to 192.168.7.254 for wifi clients connecting to this network when we set up the DHCP server.
    • Set “IPv4 Upstream Gateway” to None
    4. Hit “Save”

    2.4 Configure DHCP Server[edit | edit source]

    DHCP is what allows you to connect to a wifi network and get online without having to specify the IP address, gateway, DNS server, etc. This is necessary so clients get an IP address when they connect to the wifi network automatically.

    • Malicious wifi Network DHCP:
    1. Navigate to: Services > DHCP Server > MALICIOUSWIFI
    • The interface maliciouswifi will be at the top after you click on “DHCP Server”
    2. Configure:
      • Enable: ✓ Check “Enable DHCP server on MALICIOUSWIFI interface”
      • “Address Pool Range”:
        • From: 192.168.7.2
        • To: 192.168.7.254
    3. Click Save

    Step 3: Configure Firewall Rules[edit | edit source]

    Now, we’re going to block this from connecting to anything.

    3.1 Block maliciouswifi to everything[edit | edit source]

    1. Navigate to: Firewall > Rules > MALICIOUSWIFI

    2. Add this rule:

      1. Block Inter-VLAN Access:
      • Action: Block
      • Interface: “MALICIOUSWIFI”
      • Protocol: Any
      • Source: Any
      • Destination: Any
      • Description: “Block maliciouswifi access to everything”
      • Click Save

    3.2 Add allow rules for devices you wish to speak to one another.[edit | edit source]

    Right now devices connected to this wifi network can’t connect to anything. Even if it were a malicious device that were going to try every IP on this subnet after spoofing its MAC address and try to get access to the outside world, it’s stuck.

    We would want to add rules ABOVE the “Block maliciouswifi access to everything” rule for the things we do want to be able to talk to each other.

    For instance, let’s say a wireless camera were attached here, at 192.168.7.15. We would want to add a rule to allow traffic from the camera to the frigate machine at 192.168.5.2, and then another rule to allow traffic from the frigate machine to the camera. These rules would be evaluated before the rule that blocks everything.

    You can use this to make sure that the thermostat only communicates with home assistant, that the fish camera only communicates with your VPN, etc. It’s a great way to keep untrusted devices from having rampant access to everything.

    Step 4: TP-Link Omada Controller SDN Installation Guide[edit | edit source]

    4.0 Optional note for the paranoid(skip ahead if not paranoid)[edit | edit source]

    To be clear, if you’re at this level of paranoia, just find a router that has meshing with openwrt and deal with the lower level of performance with switching you’ll get with it. I have yet to find an open source access point + open source firmware that is even close to closed source ones with regards to seamless roaming across multiple access points without dropoffs or slowdowns.

    If you have a problem with running closed source software from a company headquartered in Shenzhen on your computer - I don’t blame you. Rather than install this onto your host system, you can install it onto a virtual machine you do not allow to access the internet, that runs nothing but this software. You would install the virtual machine for omada the same way you would install the virtual machine for mailcow. We have done this many times - simply follow the instructions we’ve already followed, with the following changes:

    • When installing Ubuntu server, choose minimal install in the installer.

    • Set the IP to 192.168.5.7 instead of the 192.168.5.3 we chose for mailcow

    • Set the hostname & name of the computer to wifitool

    • Set the static mapping in pfsense with hostname wifitool

    • Make a pfSense firewall rule blocking all traffic to and from 192.168.5.7 on the LAN interface for any protocol, so it looks like this:

    Lastly, if you want a level of paranoia that matches congress, you can set up temporary pfSense firewall rules that block the computer you use to access the tp-link omada controller in your web browser from connecting as well - and toggle them on each time you run the tp-link omada controller software in your browser, and make a rule blocking the IP address of each individual access point from going online as well.

    4.1 Prepare the System[edit | edit source]

    Before installation, remove any conflicting packages like older MongoDB versions, Java, or remnants of previous Omada installations to avoid conflicts. We never installed these packages onto our server, so they should not be there; the commands below are just in case they are. To be clear, you should not have any use for these packages at this point if you’ve been following this guide.

    sudo apt purge -y mongodb-org* openjdk-11-* openjdk-8-* jsvc
    sudo apt autoremove -y
    sudo apt clean

    4.2 Install Java 8 and MongoDB[edit | edit source]

    Install Java 8, as the Omada Controller requires it, and install MongoDB (v7.0 is recommended here). It wants old Java. Not version 11.

    sudo apt update
    
    # Some of this software you may already have. No big deal, it doesn't hurt to make sure. 
    sudo apt install -y openjdk-8-jre-headless jsvc curl gnupg lsb-release
    
    curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor
    echo "deb [arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] https://repo.mongodb.org/apt/ubuntu $(lsb_release -sc)/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
    
    sudo apt update
    
    sudo apt install -y mongodb-org
    sudo systemctl enable mongod --now
    sudo systemctl status mongod

    IMPORTANT NOTE: mongodb is expecting you to be using an older version of Ubuntu Linux (22.04, codename “jammy”) for this to work. We are using Ubuntu Server (24.04, codename “noble”). There is nothing wrong with this (besides the fact that I subjected you to ubuntu in the first place, but that’s a conversation for another time). 24.04 is the latest stable, long term release. However, mongodb still thinks that jammy is the latest long term/stable release.

    If mongodb does not have a repository for ubuntu 24.04 noble by the time this guide is released, you will have to make the following edit for apt to let you install mongodb from this repository:

    # Open source list file for mongodb for editing
    sudo nano -w /etc/apt/sources.list.d/mongodb-org-7.0.list
    # Find the following line:
    deb [arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/7.0 multiverse
    # Replace the word `noble` with `jammy`
    deb [arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse

    The steps in the grey code boxes above are only necessary if you received an error while trying to install mongodb.
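
If you’d rather not open an editor, the same one-word swap can be done with sed. This is a sketch operating on the sources file we created earlier with tee; it previews the substitution on a sample line first, then shows the real command (commented out) that edits the file with a .bak backup:

```shell
# Preview the substitution on a sample of the noble line before touching anything:
echo "deb [signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] https://repo.mongodb.org/apt/ubuntu noble/mongodb-org/7.0 multiverse" \
  | sed 's|ubuntu noble|ubuntu jammy|'
# The line comes back with `noble` replaced by `jammy`.

# Apply it to the real file (keeps a .bak backup), then refresh apt:
# sudo sed -i.bak 's|ubuntu noble|ubuntu jammy|' /etc/apt/sources.list.d/mongodb-org-7.0.list
# sudo apt update
```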

    4.3 Find Omada SDN Controller Software on tp-link’s website to download[edit | edit source]

    Download the latest .deb package from TP-Link’s Download section. Right click the download button, click copy link in your browser, and paste it into the command below:

    # Make subdirectory for storing programs if it isn't already there in our home directory
    
    mkdir -p ~/Downloads/programs
    cd ~/Downloads/programs
    
    # Check TP-Link's website for the latest version of this software; it should be a .deb file with a filename that looks something like what you see below, just with a newer version
    
    wget https://static.tp-link.com/upload/software/2024/202411/20241101/Omada_SDN_Controller_v5.14.32.3_linux_x64.deb

    4.4 Install the Omada Controller[edit | edit source]

    Install the Omada Controller SDN package. If dependencies are flagged, ignore them to proceed with the installation.

    # Use the filename of the .deb you actually downloaded in the previous step
    sudo dpkg --ignore-depends=jsvc -i Omada_SDN_Controller_v5.14.32.3_linux_x64.deb
    # Just in case anything funny happened while installing an ancient version of java
    sudo apt --fix-broken install

    4.5 Verify it installed & Start the Controller[edit | edit source]

    The Omada Controller should now be running. Access the Omada interface by navigating to https://192.168.5.2:8043.

    NOTE: If it gets stuck on “Starting Omada Controller. Please wait….” and keeps outputting dots, and never starts, and it gives you a bs error about java virtual machine not being available, you followed TP-Link’s documentation instead of mine. Do not pass go, do not collect $200, go directly to jail. That is your punishment for expecting GNU/Linux documentation for a piece of software to work; and you deserve it.


    To enable it on boot, type sudo systemctl enable tpeap - but it should already be set to start on boot.

    Step 4.5: VLAN tags[edit | edit source]

    This can be confusing. There are $250 wifi routers that, when put in wifi bridge mode to be used as a switch, will not pass VLAN tags properly. Then there are $20 Netgear GS308v3 switches that pass VLAN tags perfectly.

    You don’t have to spend a lot of money to get a switch that passes VLAN tags. How do you tell if yours supports them? Good question. Netgear’s datasheet for the GS308 and their instruction manual for the GS308 do not mention the word “VLAN” - not even once. They say it supports 802.1p QoS, but that is not 802.1Q VLAN tagging.

    Most modern switches DO support this; but what if you have an old one? What if you are re-purposing an old wifi router as a switch for this setup? Many wifi routers, even older ones, have settings that allow them to be used as a wireless bridge.

    As I have said earlier on, when people tell you to “RTFM”, what they are actually saying is “eat shit and die” - it’s their way of expressing that they hate you. Manuals are functionally useless for 99% of products sold, and rarely if ever answer actual questions. They answer questions that can be answered intuitively without a manual.

    My best answer is as follows; if you are going to have a very small home network, the Netgear GS308 is a great pick that works with VLAN tags. It’s dirt cheap and a workhorse. If you want something that is more upscale, I’d suggest looking at the TP-Link Omada SG3218XP-M2 & other switches in that series, for the following reasons:

    2.5 GbE speeds

    Most switches have gigabit ports. This means 1 gigabit - which translates to 100-120 megabytes per second in the real world. Around 2009, when these started to become cheaper (sub-$200), this was more than enough, since hard drives of the time were in the 70-120 megabyte per second range. This meant that it made no sense to pay extra for a switch with more bandwidth, since your hardware was not capable of making use of the extra bandwidth. Whether using a $10,000 switch or the $50 1 gigabit switch, your transfer speed would be the same.

    As time has moved on, even cheap desktop hard drives do over 180-250 megabytes per second, and cheapie solid state drives can achieve 200-400 megabyte per second read & write easily. 1 gigabit ports on switches mean you are losing out on transfer speed.

    2.5 GbE switches are capable of 270-290 megabytes per second, approximately, in the real world. This is still under the capability of more expensive NVMe solid state drives, but it is over double what you get with the old gigabit switches.
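
The arithmetic behind those numbers is simple: divide the line rate in bits per second by 8 bits per byte. Real-world SMB/TCP transfers land a bit below the line rate because of protocol overhead. A quick check you can run in a terminal:

```shell
# Line rate in megabytes/second = bits per second / 8 bits per byte / 1e6
awk 'BEGIN { printf "1 GbE:   %.1f MB/s line rate\n", 1e9 / 8 / 1e6 }'
awk 'BEGIN { printf "2.5 GbE: %.1f MB/s line rate\n", 2.5e9 / 8 / 1e6 }'
# 1 GbE:   125.0 MB/s line rate
# 2.5 GbE: 312.5 MB/s line rate
```

Knock off a few percent for ethernet framing and SMB overhead and you get the 100-120 MB/s and 270-290 MB/s figures above.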

    Power over Ethernet(PoE)

    If you do plan on setting up security cameras, PoE means that you can plug the ethernet cable into the camera without having to run a separate line for power. The power for the camera is provided by the switch through the ethernet cable.

    Easy management using Omada controller software

    If you want to have fun with some of this switch’s other features, you can use the same software we’ll be using for EAP-610 wireless access points to control the switch.

    Step 5: Configuring TP-Link EAP610 VLANs in Omada Controller[edit | edit source]

    5.1 Loading controller & adopting your access point[edit | edit source]

    1. Visit Omada Controller in your browser:

      https://192.168.5.2:8043

      NOTE: Take a close look at the IP address & port in the terminal and visit the URL it tells you to upon finishing the installation of TP-Link Omada controller software.

    2. Adopt the access point that matches the IP address you see in pfSense under Diagnostics –> ARP Table:

      • Go to Devices
      • check that EAP610 shows as “Connected”
      • If not adopted, use “Adopt” button

    5.2 Navigate to where we create a new network[edit | edit source]

    1. Click on the zone you just created on the main homepage under Site List once you log in. In our case, that is home_demo
    2. Click on “Settings” in the lower left corner.
      • Make sure you clicked on a zone first - if you click on “Settings” without a zone selected, it will take you to the settings for the controller program rather than for the zone you’re setting up for wifi.
    3. Click “Wireless Networks”

    4. Click “Create New Wireless Network”

    5.3 Configure the easy settings for the network[edit | edit source]

    1. Fill in all the usual settings for normal wifi setup you’ve done before on normal wifi routers
      • SSID: maliciouswifi
        • this is the name of the network that shows up when you search for wifi networks on your laptop or phone
      • Device type: EAP
        • Band: 2.4 GHz, 5 GHz
      • Security Key: whatever password you want for connecting to it
        • This is the wifi password for the network

    5.4 Configure VLAN settings you’re likely not familiar with if you’re reading this[edit | edit source]

    1. Click “Advanced Settings”
    2. Set “VLAN” to “Custom” and “Add VLAN” should show up as a new menu item.
    3. Choose “By VLAN ID” when the “Add VLAN” part shows up after you click “Custom”
    4. Set the number to 7, which we chose when making the VLAN in pfSense.

    Step 6: Make sure blocking rules work[edit | edit source]

    1. Connect your phone to this network. Don’t use a VPN. Turn VPN off.
    2. Try connecting to the web or to home assistant, or anything we set up. It shouldn’t work.
    3. Add a firewall rule to allow traffic to & from the IP address your phone has grabbed, to the home assistant VM which we set up at 192.168.5.4
    4. Try to access the home assistant VM now on your phone.
    5. If it works now, but didn’t before, you did a good job.

    You can now connect untrusted wifi IoT devices to this and be confident that there is a slightly lower chance that your refrigerator is going to report you fapping back to the manufacturer.

    How to Set Up VLC on Android to Play Videos from a Samba Server[edit | edit source]

    What’s the point of hoarding 100 terabytes of recipes and GNU/Linux ISOs if you can’t enjoy them on your home entertainment system or your smartphone; no matter where you are in the world?

    I’ll start with the phone setup and then move on to the home entertainment system. A lot of people think my GNU/Linux setup that connects to my home entertainment system is way more complicated than it actually is. It’s simpler than you think.

    Step 1: Installing VLC on Android[edit | edit source]

    1.1 Download VLC from the F-Droid Store[edit | edit source]

    1. Go to f-droid store app
    2. Search for VLC
    3. Download VLC
    4. Install VLC
    5. be happy

    Step 2: Adding Samba share as a “favorite”[edit | edit source]

    1. Open VLC: Once you have it installed, open it up.
    2. Grant VLC the permissions it asks for, if you want it to find files on your phone & be able to play them.
    3. Add a Server:
      • Go to Browse.
      • Click on the three dots in the upper right corner.
      • Select Add a server favorite.
    4. Choose Protocol:
      • Select SMB as your protocol. It defaults to FTP, but we want SMB.
    5. Enter Server Address:
      • Type in the server address where your ZFS pool is located. This could be something like happycloud.com or simply the IP address: 192.168.5.2.
    6. Server Name:
      • Enter a server name, like ZFS pool, and hit OK.

    Step 3: Find your hidden share[edit | edit source]

    Once you’ve added the server to your favorites, you might notice nothing pops up on the screen. Plus, you might see items in favorites you never added. Don’t worry. Scroll over, and you’ll see the share you added. No mistakes here - it’s just open source being open source.

      • Make sure you’re connected to your VPN!

    Before you can connect to your share, make sure you’re attached to your VPN. Without it, you won’t be able to access your share. Once connected, you can click on your share!

    1. Authenticate:
      • You’ll see Archive, but you need to authenticate with your username and password.
      • Enter your credentials. You can save them in VLC or use a password manager like Bitwarden—whatever floats your boat. I’ll save it in VLC for now.
    2. Access Files:
      • Click on your file, and let’s watch. Ignore any video player tips. And there you have it! That’s how you connect to your share from anywhere in the world to view your files.

    Alternative Programs[edit | edit source]

    There are also non-open source programs that are pretty good at browsing, like Owl Files. You’d set them up similarly by entering your Samba credentials and network share information. Connect to your VPN, and your files are right there and available for you.

    And that’s it! Enjoy your media from your ZFS pool wherever you are.

    Setting Up a GNU/Linux-Based Home Entertainment System[edit | edit source]

    Here we’ll set up a living room stereo & source for your television for fun. Welcome to the portion of our guide where we dive into setting up a GNU/Linux computer and a Samba server as the heart of your living room entertainment center. I’ll also walk you through setting up a hi-fi stereo system to achieve high-fidelity sound affordably, with no audiophile snake oil.

    Hooking everything up:[edit | edit source]

    Step 1: Connect your computer to your television with an HDMI cable.[edit | edit source]

    I hope I don’t have to explain this one. I wouldn’t use your server for this, I use a computer I have in my living room.

    1.1 You don’t need a powerful computer for an entertainment system[edit | edit source]

    Most modern CPUs (Intel/AMD) have integrated graphics that support hardware-accelerated video decoding for formats like H.264, HEVC (H.265), and VP9. This means:

    • Nearly anything made in the past 10 years, even cheap stuff, will be fine for most video playback.
    • You do not need to buy a GPU to watch stuff in your living room.

    If buying a dedicated living room computer, I would suggest buying a machine that has optical audio output so that you can output digital audio to a receiver or DAC (digital to analog converter); rather than doing the audio processing on the computer, this sends the digital signal to another device to do it. If your motherboard’s analog audio output/headphone jack is noisy and you don’t have a digital output, you’re stuck paying for an audio interface to get good sound or digital output.

    It is difficult to recommend cheap pre-built PCs as many have loud fans due to poor cooling that become annoying in an environment where you will listen to music and quiet scenes in youtube videos, movies & television. If you’re reading this guide, you most likely will want to build one yourself.

    1.2 Understanding HDMI Cable Requirements[edit | edit source]

    If your computer is far away from the television, and you want to do 4k, you may want a 50 ft cable. The problem is that most 50 ft cables that advertise they do 4k are a scam. The vendors prey upon people that can’t tell the difference between 4k30 (4k resolution at 30 frames per second) and 4k60 (4k resolution at 60 frames per second).

    Here’s what you need to know:

    • Integrated graphics is fine - you do not need a dedicated GPU to playback 4k 60 hz video content. A decent CPU made sometime in the past 10 years is more than enough.
    • Expensive HDMI cables will never make a difference in picture quality. They either do 4k at 60 hz or they don’t.
    • HDMI 2.0 bandwidth requirements:
      • 4K @ 30Hz requires about 8.16 Gbps
      • 4K @ 60Hz requires about 16.32 Gbps
      • Any cable claiming “18 Gbps” should handle 4K60 if it actually meets specs. Most amazon/walmart no-name junk doesn’t.
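
You can sanity check why 4k60 demands so much more of a cable than 4k30 with quick arithmetic. This sketch only counts the raw pixel payload (width x height x frames x 24 bits for 8-bit RGB); a real HDMI link also carries blanking intervals and encoding overhead on top, which is how you arrive at the higher figures quoted above and at the ~18 Gbps HDMI 2.0 ceiling:

```shell
# Raw pixel data rate: width x height x frames/sec x 24 bits per pixel (8-bit RGB)
awk 'BEGIN { printf "4K30 raw: %.1f Gbps\n", 3840 * 2160 * 30 * 24 / 1e9 }'
awk 'BEGIN { printf "4K60 raw: %.1f Gbps\n", 3840 * 2160 * 60 * 24 / 1e9 }'
# 4K30 raw: 6.0 Gbps
# 4K60 raw: 11.9 Gbps
```

The takeaway is that 4K60 needs double the bandwidth of 4K30 - which is exactly the margin the scam cables fail to deliver.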

    1.3 Video Cable Buying Guide[edit | edit source]

    • A cheap sub-$10 cable from monoprice is fine for shorter runs (25 feet or less) and will do 4k60 with ease.
    • If impatient, you can buy a high quality, General Electric HDMI cable that does 4k60 from walmart for about $20, in store, same day.
    • If running more than 25 ft, fiber optic active HDMI cables from reputable vendors like monoprice or bluejeanscable become necessary because:
      • Traditional copper cables have signal degradation over longer distances
      • This doesn’t mean a worse picture; rather, you can’t use higher resolutions or framerates.
      • Active fiber cables regenerate the signal and pass it with less degradation, allowing 4k at 60 hz (60 fps) over long distances.
      • They’re more expensive but actually work at advertised specs unlike amazon/walmart scams.
    • There are cheaper 50 ft options, but they are scams.
    • bluejeanscable humiliated Monster Cable’s legal department, are honest & upfront about what they sell, produce quality products, and debunk bs on their blog. For this reason alone, they’ve earned my loyalty.

    Step 2: Hook up your computer’s sound output to your stereo.[edit | edit source]

    2.1 Analog out from your desktop motherboard or laptop headphone jack.[edit | edit source]

    A cable like a 1/8” to stereo RCA from monoprice allows you to hook up the headphone jack from your computer to many stereo amplifiers and home audio receivers. In my setup, I do not have a receiver that is capable of video - I have a 30 year old Rotel RB-1090 tank with RCA input, so this is what I would use to hook up my laptop or a desktop to my stereo if I didn’t have a separate audio interface.

    If you don’t want to wait for an order, you can also buy these at your local walmart.

    Why this will suck:

    The analog audio output from your motherboard is often horrible because there is so much else going on in there. Your GPU, CPU, and RAM are all high bandwidth devices, and everything shares a single circuit board. Things have improved vastly in this regard since I was young and dealt with the horrors of trash like AC’97, where there was audible hissing & warbling that changed in pitch & intensity when you dragged windows around the screen, and weird high frequency sounds depending on the sensitivity of your stereo system. However, it is often still there.

    2.2 HDMI output from your computer to your TV.[edit | edit source]

    If you use the speakers built into your television, you are missing out bigtime. However, there are cases where you have no choice. In this case, the audio and video will get to your television over the HDMI cable, and your setup will be simple.

    Why this will suck:

    Television speakers are trash.

    • They will be filled with cabinet resonances from the giant television.
    • You can’t fit speakers that will do a proper job inside of a very thin television.
    • The proper location of speakers in your room will not be the same location as the television in your room.

    2.3 HDMI output from your computer to your receiver.[edit | edit source]

    You may have a setup where you have a receiver that you hook up between your devices & your sound system/TV - in that case, just plug the HDMI out from the computer into that. Then you can use HDMI to carry the sound & the video. This is common in home theater setups where you might have a blu-ray player, a cable box/FIOS TV box, and a game console that plug into a receiver. This receiver usually feeds your television a video feed, and connects directly to your speakers and subwoofer.

    Why this is better: You are not using the analog audio output from your laptop or desktop. This allows you the flexibility to choose an audio device that does not have poor sound quality, rather than being stuck with what comes in your computer. Digital output means even if the motherboard’s audio circuit is total garbage, it doesn’t make a difference, since you aren’t using it. You will be sending the raw 1s & 0s of the audio to another device & letting it do the work of turning it into an audio signal.

    2.4 Optical output from your computer to your receiver.[edit | edit source]

    Optical audio output is available on most desktop motherboards. It is worth checking to see if yours has this; which is green in the photo above. Most sound cards also have this port, if you have a sound card.

    This requires an optical cable, but optical audio cables are considerably cheaper than optical video cables, since the bandwidth requirements are so much lower. For same day purchases, a cheap walmart optical cable will do fine, and high quality 50 ft cables from reputable vendors like monoprice cost less than $15.

    Why this is better: Same as with HDMI into a receiver - you are not using the analog audio output from your laptop or desktop. The raw 1s & 0s of the audio go to another device with a competent DAC to do the conversion, so even if the motherboard’s audio circuit is total garbage, it makes no difference.

    “UHM, AKSHUALLY” NOTICE: Some wiseguy’s going to say that this is unnecessary. They might say that if we’re connecting our computer to the television using an HDMI cable, that the audio is already going to the television through the HDMI cable, and that you can use the optical output from the television to send the audio to the receiver digitally without having to worry about whether your computer has a digital SPDIF output.

    Some televisions do this. Some don’t. Some claim they do and have broken menus. If we are going to be buying gear from scratch, I think it makes sense to keep our options open. In 2024, there is no price premium to pay by asking for an audio jack that came out in 1983.

    2.5 Basic purchase considerations[edit | edit source]

    We will go into this in greater detail later: for now, let’s go over the basics.

    If your computer is far away from the receiver or amplifier, you should really consider using an optical cable to connect the two, to avoid hiss, distortion, hum, and horrible audio. Even high quality analog audio cables suck when run unbalanced over long distances, and no laptop or standard desktop computer uses balanced output. However, even the cheapest of spdif optical cables will be fine at 50 ft with audio signals. Digital audio signals are far lower in bandwidth, so there is no real worry about degradation at any practical household length of cable.
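
To put a number on “far lower in bandwidth”: uncompressed CD-quality stereo PCM is roughly a thousandth of a 4k video signal, which is why even bargain optical audio cables have headroom to spare. A quick check:

```shell
# S/PDIF carrying CD-quality PCM: 44100 samples/sec x 16 bits x 2 channels
awk 'BEGIN { printf "%.2f Mbps\n", 44100 * 16 * 2 / 1e6 }'
# 1.41 Mbps
```

Compare that to the multiple gigabits per second a 4k60 HDMI run has to carry, and it is clear why cheap optical cables are not a gamble the way cheap long HDMI cables are.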

    On the low end, I would suggest a used stereo receiver that has optical audio input from a reputable brand. These can be found on eBay by searching with the following filters:

    • Number of channels:
      • 2.1 if you want to attach a subwoofer later
      • 2 if you don’t care.
    • Type: Stereo Receiver and Integrated Amplifier
      • This will provide you with volume control, ability to utilize multiple sources, and an amplifier for your speakers.
      • I don’t bother with surround sound.
        • Two good speakers will always beat 6 crappy speakers.
        • For any given price point, you get a higher budget when buying two speakers than you do buying six.
    • Audio Inputs: Digital Optical TOSLINK.
      • This means the receiver has a DAC (digital to analog converter). This turns the 1s & 0s that make up the audio files on your computer into an analog signal.
      • The one in your receiver usually does a better job. I’m not talking about audiophile nonsense, just basic competence.
      • On lower quality PC motherboards you can literally hear hiss that changes when you do CPU intensive things on your machine when using sensitive stereo equipment or headphones.
    • Buying Format: Buy it now.
      • Bidding wars on eBay are fun & dopamine releasing. They’re exciting. When people win, they feel like they’ve won until they realize they’ve paid 30% over the used-market-value of what they purchased.
      • If buying from auctions on eBay, consider using an auction sniper like Gixen.com. Auction snipers are programs where you input what you wish to pay for something, and it submits your bid seconds before the auction ends. This way, you don’t get caught up in a bidding war. If you bid too early, it allows others the chance to increase their bid as well, which drives up the price.
      • Using an auction sniper removes the emotional aspect from auctions that drives up prices, and encourages price-discipline in setting the max you are willing to pay for the item early on.

    Step 3: Access Media On Your Samba Share[edit | edit source]

    For a media computer, my setup is actually dirt simple. It goes like this.

    1. Connect computer to TV/stereo.
    2. Find the file I want to play (music or video) in my file explorer (thunar, windows explorer, etc)
    3. Double click to open in VLC
    4. Enjoy

    Here’s how you can access your files:

    1. File Explorer: I use Thunar as my file explorer. If you’re on Windows, you’re likely using Windows Explorer.
    2. Accessing Samba Share:
      • In Windows, you’d type \\ followed by the IP address of your ZFS pool computer. For instance, in our case, it would be \\192.168.5.2
      • On Linux, type smb:// followed by the IP address or hostname of your share in the Thunar file manager, and enter your username and password when prompted. For instance, in our case, it would be smb://192.168.5.2
      • Once you do this, you’ll have access to all your files stored on the ZFS pool.

    Step 4: Play Media with VLC[edit | edit source]

    To enjoy your GNU/Linux ISOs & recipes, simply find your files in the Samba share and double-click to open them in VLC. Boom, you’re set. VLC is an open source media player that is fast, efficient, and supports nearly every audio format, video format, codec etc. on earth. It works on Windows, Mac, GNU/Linux, Android, iPhones, ChromeOS, FreeBSD.. just about everything.

    Putting together affordable home hi-fi[edit | edit source]

    Now, let’s break down the audio components, because you don’t need to spend a fortune for good sound. You just have to avoid snake oil and sound bars.

    Speakers[edit | edit source]

    • Speakers: I use Vandersteen Audio speakers. They have minimal cabinet resonance, phase coherent crossovers & driver positioning, and even frequency response for clear sound.
    • Amplifier: A Rotel RB-1090 powers my speakers.
    • Subwoofers: Two HSU Research ULS-15 from Dr. Hsu, one of the inventors of the original subwoofer.

    Speaker Selection: Why I Use Vandersteens[edit | edit source]

    Minimal cabinet resonances & diffraction off the bezel.[edit | edit source]

    These speakers cost $1100 used when I got them, and a set with minor crossover issues was available for under $900. I like these for a very good reason: exceptional engineering with little/no attention paid to marketing or looks. True function over form.

    They have great frequency response AND phase response. Further, their shape avoids baffle diffraction & cabinet resonances.

    Do this - put your hands by your mouth and cup them. That weird boxy sound you get? That’s what it’s like when you have a speaker that’s a giant box. It’s why your television sounds like garbage.

    When you look at these Vandersteens, you notice that even though it looks like a big speaker, the top part is actually just a pole. There’s nothing in there - it’s almost completely hollow besides the bass cabinet. Minimal baffle, minimal diffraction. When there’s diffraction, you’re listening to the sound from the speaker driver PLUS the reflections off the cabinet, arriving milliseconds apart.

    When you get used to hearing speakers that have minimal cabinet resonances and baffle diffraction, it’s really hard to go back to speakers that do. Everything else sounds like a speaker; this sounds real.

    Used market availability.[edit | edit source]

    Inflation? What inflation?

    In 2009 I bought a set of model 2 for $400 from someone with a leaky apartment, and in 2011 I had the choice of $1100 for high quality used or $900 for a set with minor, repairable crossover issues. Now, a set of 2c are $650, and 3a signatures are $1600 in good condition.

    Low end models feature the same high end engineering

    The lower end models like the 2c give you 99% of what you get in the Vandersteen Model 3a Signature, which is $7000 new and $1000-$2000 used.

    These are always for sale.

    Anytime you go to ebay.com or audiogon.com, someone is selling a set of these.

    Subwoofer? Why?

    These are down about 1 dB at 30 hz. That’s insane. Most likely, the subwoofer you have with your soundbar or home theater produces less bass than these.

    The extension along with a small cabinet does come at a price - you’re not going to hit 120 dB with a lot of low end with these. But, for most music and even movies at reasonable volumes in average sized spaces, you’ll get the full range experience without feeling like you’re missing out.

    Speaker Selection: Why I Use Axiom M3[edit | edit source]

    Quality engineering over marketing wankery

    Axiom is a company that was early to the scene with direct-to-consumer online sales. Their “marketing budget” was a guy named Alan Lofft who answered people’s questions on an early webforum that looked like a usenet newsgroup with their logo at the top.

    Axiom conducted research at the National Research Council in Canada, where double-blind tests were performed in which ordinary people would say what they preferred with regards to audio quality. Taking this scientific approach with input from the public, combined with extensive testing and design in a top quality facility allowed them to draw direct conclusions from how speakers measured to what people wanted.

    Affordability

    Although their prices have gone up for brand new speakers, they can still be found dirt cheap used. Back in the day, their m22 speaker sounded similar to paradigm studio speakers that were nearly triple the price.

    Speakers like the M60 that cost $800 new are no longer competitive deals at their current pricing of $2000/pair - but speakers like the M3 can be found for $160-$220, fit on a desk, and offer exceptional sound for dirt cheap.

    Exceptional frequency response

    Speakers like the M3 have a neutral frequency response and a very natural sound.

    Minimal cabinet resonances

    Take a look at the M3. Notice how the walls are not parallel? This lessens the type of internal standing waves that occur when a speaker is a perfect cube box. It’s a small touch, but little details like this show them actually focusing on engineering rather than making it look pretty, paying for annoying influencer marketing campaigns, and trendy nonsense.

    Same deal with Vandersteens - you can grab the Model 2s used for like $600. These go down to about 30 hertz, very linearly. So you could easily use these without a subwoofer and get better bass than 99% of those computer speaker setups with their tiny subwoofers. These actually have a 10-inch passive radiator in the back and an 8-inch woofer in the front.

    The key is don’t buy this stuff new. Just look through eBay for a few minutes, check AudioGon, and you can find insane deals. You’ll end up with speakers that absolutely destroy setups that cost 5-10 times more.

    Debunking Audiophile Myths[edit | edit source]

    Now, let’s address some audiophile myths. There’s this idea that more expensive always equals better, especially when it comes to cables. You’re going to hear about people justifying $5,000 cables, which is absolute nonsense.

    ABX Double blind testing doesn’t matter[edit | edit source]

    A key sign you’re speaking to someone who has their head as far up their ass as their ego, or a salesman, is when they refuse to acknowledge the benefits of ABX double blind tests. Hydrogenaudio is THE place for top tier codec developers & programmers to congregate and showcase their new developments; they have had ABX testing as part of their forum rules for over 20 years. If you post about sonic differences without sharing ABX test results.. you’re gone.

    That should tell you something.

    The ABX test is a method used to objectively compare audio equipment. It involves three inputs - A, B, and X - where X is randomly selected from either A or B. The listener must identify X without prior knowledge; if they can’t consistently tell which one X is, the claimed difference is considered inaudible.

    You have a program where you know what A is(an uncompressed wav file), you know what B is(a compressed 128 kbps AAC file), but you don’t know what X is. Every time you hit the X button, you are listening to either A or B - but you don’t know which. It is your job to figure out what X is, each time.

    If you can’t get it right 12 out of 16 times, you didn’t hear a difference. It was all in your head.
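    The 12-out-of-16 bar isn’t arbitrary. A coin-flipping guesser clears it less than 4% of the time, which you can check yourself. A quick sketch (the `p_value` function name is mine, not from any ABX tool):

```python
from math import comb

def p_value(correct, trials):
    # Probability that pure guessing (a fair coin flip per trial)
    # scores at least `correct` answers out of `trials`.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# 12 of 16 by luck alone happens under 4% of the time, so a passing
# score is real evidence you heard something.
print(round(p_value(12, 16), 3))  # -> 0.038
```

    In other words: below that bar, guessing explains your score just fine.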

    Our memory for people’s voices is exceptional. Our ability to be honest with ourselves about our auditory memory is complete garbage. The reason for this is that we forget every sonic detail of what something sounded like the moment we stop listening to it. It’s easy for our brain to “think” it heard a difference when it didn’t.

    This is a good thing, right?

    It depends how you see it.

    The upside of ABX testing:[edit | edit source]

    If you can’t hear the difference between a $200 amplifier and a $20,000 amplifier, you just avoided breaking into your 401k for $19,800 worth of audiophile bs.

    The downside of ABX testing:[edit | edit source]

    The crushing of fragile egos.

    Read youtube comments sections. Anytime a company, a manufacturer, a developer, etc. screws a group of people over, there are two groups of people in the comments:

    • People who are supportive
    • People who say that everyone who made a different choice than them (in who they voted for, who they worked for, who they did work for, what software they bought, what hardware they bought, etc.) is an idiot.

    This is done because it makes people feel better about themselves. If I can hear the difference between a $5 cable and a $500 cable, it means I’m a connoisseur, unlike the plebs & unwashed masses who can’t tell the difference. It also makes me feel like I am getting an upgrade when I am actually not. Above all, it gives people an ego boost, and who doesn’t want that?

    • Audio Memory and Bias:
      • Human auditory memory is fleeting. The moment you switch from one system to another, your ability to accurately remember the sound diminishes.
      • This makes it easy to convince yourself that a more expensive component sounds better, even if it doesn’t.

    Warning: Avoid falling for marketing gimmicks that promise crazy improvements whose vendors will hide from ABX testing. Forums like hydrogenaudio can offer a reality check with evidence-based discussions.

    By focusing on what ACTUALLY matters (speaker quality, room acoustics, and well-researched purchases), you can make a hi-fi system that satisfies both your ears and your wallet.

    Expensive equipment is a priority over acoustic treatment[edit | edit source]

    When you look on audiophile webforums, you will see people with Krell amplifiers, Wilson WATT/Puppy speakers, and Lavry digital to analog converters in untreated drywall rooms. No bass traps. No acoustic panels. It’s insane.

    A $400 stereo in a good room will beat a $40,000 stereo in an untreated room.

    Avoid falling into the trap of spending thousands on equipment that doesn’t deliver proportionately better sound. Focus on well-designed, affordable electronics, and you’ll have a setup that works amazing in your living room without emptying your wallet.

    Receivers, amps, electronics[edit | edit source]

    Today, audio electronics that are competently designed (key word: competently) will be indistinguishable in an ABX test from gear that costs $10,000. Paying $10,000 for an amplifier or $8,000 for a DAC isn’t an exercise in audible improvements; they’re just status symbols.

    The Basic Building Blocks[edit | edit source]

    You need three main things to get from digital music to sound:

    1. Something to turn digital into analog (DAC)
    2. Something to control volume and inputs (preamp)
    3. Something to make it loud enough for speakers (power amp)

    Digital to Analog Converter (DAC)[edit | edit source]

    • Takes the digital signal from your computer, the 1s and 0s, and turns it into an electrical audio signal.
    • This could be in your computer motherboard, a soundcard, in a box by itself, in your receiver, or your television.
    • Most modern ones are fine - don’t fall for a $10,000 DAC or similar bs
    • Having this OUTSIDE of your computer usually means fewer chances for computer-y noise in your audio, like hiss or high frequency whines, when your computer is doing something intensive.

    Preamp[edit | edit source]

    • This controls which input you’re listening to, so you can switch between a bluray player, cable box, playstation, etc.
    • This is what has a volume knob, so it’s pretty much a fancy switch & volume control, sometimes has bass/treble controls on it.
    • This isn’t what makes things louder - that’s for the power amplifier.
    • This can be better than taking the output from a digital to analog converter and simply lowering the audio volume in VLC.

    Preamp? Why the hell would I pay for a fancy volume knob when my mouse wheel and VLC let me do that for free?[edit | edit source]

    When you lower the audio signal volume in VLC, you’re not attenuating an analog signal. Attenuating an analog signal takes the same audio you had and just shrinks the waveform.

    When you lower the volume in VLC and then amplify the result, you’re lowering the volume digitally. 16 bit audio has 96 dB of dynamic range; 24 bit audio has 144 dB. If you lower the volume digitally too much, you are actually throwing away bits of audio, and it will start to sound like it. Even digital preamps usually use a digital signal to control analog amplification & attenuation.

    A great example of this would be to hook a digital to analog converter with no volume control/attenuation knob up to your computer, and plug it straight into a power amp. This would be full volume all the time. Then, lower the volume in VLC. This will sound different than lowering the volume on an analog preamp, because you’re not lowering the signal, you’re throwing away digital data.
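    You can put rough numbers on how much resolution digital attenuation costs. The sketch below assumes the usual rule of thumb that each bit of PCM is worth about 6.02 dB of dynamic range; `effective_bits` is a name I made up for illustration:

```python
def effective_bits(source_bits, attenuation_db):
    # Each bit of PCM resolution buys roughly 6.02 dB of dynamic range,
    # so attenuating in software discards attenuation_db / 6.02 bits.
    return source_bits - attenuation_db / 6.02

# Turning 16 bit audio down by 48 dB in VLC leaves roughly 8 bits of
# resolution before your power amp blows it back up to listening volume.
print(round(effective_bits(16, 48)))  # -> 8
```

    An analog volume knob after the DAC shrinks the whole waveform instead, so you keep all 16 bits no matter how quiet you listen.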

    If you think paying extra for a preamp is stupid, just buy a stereo receiver. You get a good enough preamp along with a good enough DAC & amp and it costs around the same or less than just a preamp from “audiophile” brands.

    Power Amp[edit | edit source]

    • Turns a tiny audio signal into a big audio signal.
    • As long as it’s competently designed and can power a 4 ohm load without turning off, you’re fine.

    Now the Combinations:[edit | edit source]

    Similar to how a modern wireless router is actually a router, a switch, and a wireless access point all in one, the devices below are usually combinations of the devices above:

    Integrated Amp[edit | edit source]

    • Preamp + Power amp in one box
    • Usually does not come with a digital to analog converter, so still needs a DAC if you have a source with a digital output.
    • Usually cheaper than separate components when comparing with others from the same company.

    Receiver[edit | edit source]

    • DAC + Preamp + Power amp all in one
    • Often has digital inputs like optical/HDMI
    • Has a radio tuner (that is why it’s called a receiver)
    • Usually the cheapest all-in-one option
    • Good for most people who just want things to work & sound good without being overly complicated.

    Most modern electronics that are competently designed all sound basically the same. Don’t fall for that “magical preamp” or “warm sounding DAC” garbage. Get something with enough power for your speakers, digital inputs you need, and spend the rest of your money on speakers and room treatment.

    The only times you need separate components are when you:

    • Need more power than receivers offer
    • Want to upgrade one piece at a time
    • Have some specific feature need
    • Found a crazy deal on used gear

    For most of the people reading this, a used ten year old receiver with optical input will do everything you need, and cost under $200. Save your money for the stuff that actually matters; good speakers & acoustic treatment. (and your retirement).

    Suggested electronics: 5-10 year old receiver with optical/coax in.[edit | edit source]

    There are two ways in the affordable, consumer realm to transfer digital audio signals.

    • SPDIF using an optical cable (TOSLINK).
    • SPDIF using a coaxial cable.
      • This is like an analog RCA audio cable, same connector, but requires a cable that is manufactured to much stricter specifications.

    Getting a receiver that supports both gives you flexibility in case your motherboard only supports one or the other.

    Laptops rarely have coaxial or optical out. While you can get an audio interface, this is extra money, and often not immediately available. A USB to SPDIF device requires an online order, while a 1/8” to RCA cable is available everywhere.

    This device above allows you to have the flexibility to use whatever works best for you at the time. Best of all, it’s $187. Used devices like these in good condition from Harman Kardon (before they cheaped out; go back to the 2012-ish era), Denon, Onkyo, etc. are being sold sub-$200 on eBay & AudioGon every day.

    Understanding Room Acoustics[edit | edit source]

    What makes rooms sound bad?[edit | edit source]

    Before we get into the technical setup, let’s talk about room acoustics because it really makes a difference. Two rooms can be identical in shape & size and sound completely different if one is treated and the other is not.

    Some bare-walled rooms are what people call “echoey”. Those aren’t echoes; they’re early reflections. An echo is when you yell and then hear it repeat back a second or two later. An early reflection is when you speak and hear yourself alongside yourself a few milliseconds later.

    What that means is that you’re not just hearing you, you’re hearing you alongside something else. It creates a totally different sonic experience & it’s annoying. It’s distortion; it’s noise added to the original signal.

    Why the word “audiophile” is a joke[edit | edit source]

    Self proclaimed “audiophiles” will spend $1000 on cables and $5000 on digital to analog converters that claim they reduce inaudible distortion 0.001%. Not ACTUALLY reduce distortion 0.001% - CLAIM TO reduce distortion 0.001%.

    Yet, they won’t spend a few hundred dollars on room treatment that reduces distortion 5% to 15%.

    It’s ridiculous. Walk into many hi-fi dealers and they won’t even mention room treatment, or try to sell you room treatment. But they will upsell you to a $4000 amplifier, or $500 cable, when it sounds the same as a $200 amp and a $5 cable.

    Buying acoustic panels.[edit | edit source]

    Acoustic panels[edit | edit source]

    24” x 48” x 2” acoustic panels are the most common; something like this ATS Acoustic 24” x 48” x 2” panel. Pretend you’re playing pool and put the panels where the sound is going to bounce around your room as it leaves your speakers. Hang these about 2 to 3 feet above the floor, behind the speakers, and behind the listening position as well.

    Bass traps[edit | edit source]

    Bass traps are just bigger acoustic panels. The more insulating material, the lower the frequencies they absorb. Absorption is most obvious when midrange and high frequencies are reduced in volume, as this is the range we are most sensitive to hearing differences in; these are the frequencies of the human voice.

    You may wonder what the point of a bass trap is. Most people want more bass, not less! Reflections are the enemy of bass.

    If low frequency reflections are only a few milliseconds away from the original sound, they can cause phase issues where they cancel each other out, resulting in giant peaks and nulls in certain areas of the room, at certain frequencies. By absorbing the reflections, you wind up with more, and higher quality net bass.
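    The arithmetic behind those nulls is simple: a reflection cancels the direct sound hardest where its delay equals half the wave’s period. A sketch, assuming sound travels about 343 m/s at room temperature (`first_null_hz` is my own name for this, not standard terminology):

```python
SPEED_OF_SOUND = 343.0  # meters per second, roughly, at room temperature

def first_null_hz(extra_path_m):
    # A reflection traveling extra_path_m further than the direct sound
    # arrives late; it cancels hardest where that delay equals half the
    # period of the wave (a comb filter null).
    delay_s = extra_path_m / SPEED_OF_SOUND
    return 1.0 / (2.0 * delay_s)

# A bounce off a nearby wall that adds ~1.7 meters of path carves a
# null right around 100 Hz, dead center of the bass you paid for.
print(round(first_null_hz(1.715)))  # -> 100
```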

    Bass traps usually start at 4” of thickness.

    Acoustic foam[edit | edit source]

    Acoustic foam is a much cheaper alternative unless you’re getting ripped off buying Auralex. However, it is way less effective. It mostly absorbs high frequencies, and the darker colors are exceptional at making rooms look depressing.

    Still, this is considerably better than having nothing at all.

    Foam panels don’t absorb low frequencies, just the higher ones. Compare that to rigid fiberglass panels like Owens Corning 703, which are much more effective:

    • Cheap Foam: Absorbs only high frequencies; ineffective for bass.
    • Owens Corning 703: Absorbs a broader range of frequencies, including low ones.

    Even with subpar treatment, you avoid some early reflections that can muddy up your sound. But trust me, investing in quality acoustic panels is worth it.

    Make your own acoustic panels[edit | edit source]

    Here’s what you need to make your own acoustic panels:

    • Owens Corning 703 fiberglass for the absorption material
    • 2x4 wood to frame the fiberglass.
    • Burlap to hold the fiberglass in place and keep it from falling out
    • Staplegun to attach the burlap to the wood frame.
    • Brackets and drywall/brick anchors to hang them on your wall

    That’s it. My original acoustic panels were all DIY.

    Use materials like Owens Corning 703 fiberglass and burlap to create broadband acoustic absorbers. Avoid using generic insulation from Home Depot, as it’s not designed for sound absorption. That insulation is too loose; the sound waves move around the fibers but don’t get absorbed.

    To recap:

    • Buy some Owens Corning 703 fiberglass or a similar product for sound absorption.
    • Grab some 2x4s and build a frame that is 2 feet by 4 feet.
    • Purchase burlap and use a staple gun to wrap the fiberglass in burlap.

    Hunting for Deals[edit | edit source]

    To stretch your budget:

    • Browse eBay and AudioGon regularly for deals on high-end speakers and receivers.
    • Create bookmarks for searches that fit your criteria for models and brands you like on these websites. Check them every morning for a few seconds so you get a good deal before someone else buys it.

    With patience, you can put together a hi-fi system that outperforms setups priced at $10,000-$15,000 for just about $1,500. Or a setup for $400 that sounds closer to $4000.

    Sourcing your content; 4k blurays with the right drive[edit | edit source]

    Because the MPAA & RIAA are a bag of dicks, they have managed to get almost every bluray drive manufacturer to not allow you to make a backup of your own property. They won’t rip 4k blurays.

    However, there is a way around this; get a Pioneer BDR-2213 running a nice old firmware.

    With this, you can rip your content in the exact same uncompressed quality you got it in. This will look so much better than the garbage low bitrate streaming quality you get from modern streaming services.

    Modern streaming services give you three options (and there’s a fourth they’d rather you not know about):

    1. Use an HDCP compliant processor, HDCP compliant monitor, & HDCP compliant operating system to watch content you paid for. Jump through more hoops when PAYING than you do when pirating.
    2. Use a smart TV, a device that is honest to your face that it spies on you and sells your personal data.
    3. Be stuck viewing a 1-2 Mbps low resolution stream.
    4. Give a giant middle finger to the MPAA with a Pioneer BDR-2213, running old firmware.

    Option 4 wins every time.

    Elegant Home Theater PC Setup: for people who don’t want a disorganized mess[edit | edit source]

    Why my setup makes no sense[edit | edit source]

    My setup is very strange. It’s disorganized, unwieldy, and not visually appealing.

    My setup:

    You likely want something that looks more like this:

    Sensible setup:

    My weird computer[edit | edit source]

    I don’t have a bedroom computer, home theater computer, office computer, etc. I have one computer that sits in my living room that I use for everything. I lived in an 1100 sq ft studio apartment for twelve years, so I had one PC for my 1 room home. This cube was my work computer, my video editing machine, my personal machine, my home theater PC; all in one.

    What makes a good home theater PC is not what makes a good video editing workstation. For a home theater PC, you should have something like this:

    • Very quiet
    • Very cheap
    • Pre-built, because you have enough on your plate without taking time to build a custom computer that rips blurays & runs a pretty version of VLC
    • Optical audio output, if you don’t want to buy an external audio interface separately
    • Power efficient, so you aren’t burning 150-250 watts to play an mkv file
    • Small pretty form factor that fits in with your living room perfectly

    Above all, you don’t want it to look like a giant mess!

    My stoneage home theater software[edit | edit source]

    I showed you what I use: a computer file explorer to browse to my video & music files, double clicking them to play them in VLC. There are several reasons this is horrible:

    • File & folder browsing.
      • If your folders are a mess, it will be difficult to find your stuff.
      • Immich tags photos by face & description; we want something like this that’ll just make sense of our 160 terabytes of stuff.
    • Squint-inducing user interface.
      • Computer operating systems are designed for use with a monitor right next to you, not a TV that is 3 meters away.
      • You can change your display settings & scaling, but making it work with a TV makes it awkward.
    • Manual lookup of info: finding ratings, credits, other info isn’t immediately accessible & requires leaving the file explorer or VLC to find.

    There’s nothing inherently wrong with this setup. It’s just not everyone’s cup of tea, so we’re going to set up something built for a home theater living room system. This can be done quickly and easily - unlike many other things in GNU/Linux!

    Beautiful software made for a living room Television[edit | edit source]

    Kodi is a program that turns your computer into a polished home theater system for your living room TV.

    • User-friendly interface designed for couch viewing. No need to squint or strain your eyes.
    • Automatic library organization. Kodi scans folders & files and turns the biggest messes into a beautifully organized library of movies, shows, and music.
    • Metadata integration. Kodi grabs information from online databases & shows detailed summaries, artwork, & ratings for movies, tv, and music.
    • Open-source and offline-friendly. You can run Kodi without an internet connection, ensuring your legally ripped, totally un-copyrighted media collection remains private.
    • Built-in song lyric support. Kodi automatically fetches & displays lyrics for your music.
    • Seamless playback with buffering. Kodi caches files so your media doesn’t stutter or skip, even if your server is slow or under heavy load.
    • Effortless 4K playback. From high-bitrate h.264 to h.265, VC-1, or MPEG-2 files, Kodi can play anything you’ll encounter on the high seas or your personal bluray collection.

    Kodi takes minutes to install & configure[edit | edit source]

    Kodi software is made for a home theater PC; you on the couch, television eight feet away, & it can be installed in 2 minutes or less using a GNU/Linux distribution called LibreELEC. This is not a convoluted installation process. It’s so seamless you’ll almost forget you’re using open source software.

    Doesn’t my TV already do this?[edit | edit source]

    You should be able to trust your television to play television and movies. That is what it is there for.

    The year is 2025, and consumer protection in the United States (& many other countries) is a joke. Many modern televisions come pre-configured to sell your personal data, equipped with the ability to tell who you are and what you’re watching. LG is upfront about it. You will hear the argument that this is necessary to keep televisions affordable; this argument is made by simps for television manufacturers, or the television manufacturers themselves.

    Above you’ll see an image of the menu of an LG G3 OLED television. The LG G3 OLED television is configured, by default, to spy on & sell the personal information of its user; even when purchased new, at full $3600 MSRP from an Authorized LG Dealer. You cannot remove these elements of its operating system. You thought you owned the television that you bought, but the television thinks it owns you.

    You can use your television to play back media, but it is often highly restricted. Combine this with the fact that most, if not all, modern televisions come with spyware pre-installed that you cannot remove, and we’re not doing that. My television will go on the internet over my dead body.

    An ASUS Asustor Flashstor mini-pc for a home theater computer[edit | edit source]

    This machine fits all of our above requirements beautifully.

    Quiet Operation[edit | edit source]

    Dealing with noise is important when setting up a home theater PC. Your gaming PC probably sounds like an annoying $20 amazon drone, and many minipcs aren’t much better. This machine makes little to no noise even when playing back high bitrate h.265 files & fits easily inside a TV stand or on a small shelf.

    Impressive Storage Capacity[edit | edit source]

    We are using our server for storage, not the Asustor Flashstor mini-pc. If you wanted to try using this as a small starter server, here’s where the ASUS Asustor MiniPC shines: storage capacity. Unlike most if not all mini-PCs, which offer 1 or 2 slots at best for SATA/NVMe drives, the Asustor has six NVMe slots on the cheapest model. This lets you install up to 24 terabytes of incredibly fast storage on the cheap Asustor, or 48 terabytes on the higher end models.

    Cost: $300-$400 on eBay[edit | edit source]

    These can be found under $350 used which gets you a lag-free, quiet machine with six NVMe slots.

    Audio & Video output options[edit | edit source]

    The Asustor Flashstor does 4k60 out of its HDMI port just fine. Some cheap no-name fly by night minipc companies use old HDMI standards for their ports & get stuck at 4k30.

    For high-quality audio, having an optical SPDIF output is important. As mentioned before, this allows the digital-to-analog conversion to be handled by dedicated audio equipment rather than your multipurpose PC, which sidesteps the noisy nonsense you get when you try encoding video or doing CPU intensive things with headphones plugged in. You may not notice this while your headphones are turned up as you’re engaged in an exciting game, but quiet passages of movies get ruined by this very easily. The Asustor Flashstor includes an optical SPDIF audio output jack, allowing you to connect directly to most modern home theater receivers.

    • Benefit: Avoids the need for additional USB audio interfaces.
    • Setup: Use a simple $5 optical cable from Walmart to connect to your stereo system.

    It has optical audio output. This allows you to plug the machine into a stereo receiver’s optical audio input, a discrete digital to analog converter, or an integrated amp’s optical audio input for clean sound output.

    HDMI carries audio, but if you’re like me & have a separate audio setup from your television, you’d have to get an HDMI audio/video splitter to get HDMI video to your TV and SPDIF digital audio to your stereo receiver that goes to your speakers. Some TVs can pass through the audio digitally to your stereo receiver, some don’t, but even if they do this is an added pain in the ass. Having optical audio out makes this easier.

    Powerful, expandable machine[edit | edit source]

    Even the cheapest Asustor flashstor handles 4K video effortlessly. Higher end models are twice as powerful as the server in this guide and only take a fraction of its power, making them suitable as a starter server. Low end models have 6 NVMe solid state drive slots, but you can buy this with up to 12 NVMe drive slots which would give you 48 terabytes of NVMe storage for a server, with 10 gigabit ethernet for fast network transfers.

    The asustor flashstor can be a starter server.[edit | edit source]

    Using an Asustor as a starter server is a great idea. If you know you want a home theater PC, you’re going to buy something like this anyway; and even the low end model is powerful enough for most tasks. You can always demote it to a home theater PC down the line when/if you decide to put together a giant 200 terabyte monster like what I have pictured above.

    Don’t use your server as an HTPC at the same time; attack surface & why you should care[edit | edit source]

    The attack surface (or threat surface) refers to all the different points where a hacker could potentially gain unauthorized access to your system. This means that the more you install onto your machine, the greater the likelihood you turn into one of the poor schmucks in /r/asustor who got owned by ransomware. The more things a machine does, the larger its attack surface becomes & the more opportunities attackers have to exploit vulnerabilities.

    If you use the same PC for Kodi and services like Mailcow (mail server), FreePBX (phone system), Immich (photos), or Nextcloud (notes), you’re mixing a home theater interface with mission-critical infrastructure. Bad idea.

    Why?[edit | edit source]
    1. Increased Exposure: Running Kodi means more risk of vulnerabilities from media files, plugins, user interaction, etc. If exploited, it could compromise your entire server & everything running on it.
    2. Conflicting Security Needs: A server for mail and photos requires high uptime, strict access control & limited exposure. A home theater PC is inherently less secure because it’s meant to interact with more devices, networks, & potentially risky media.
    3. Damage Scope: If someone hacks your Kodi system, do you really want that person having backdoor access to your email, phone, or photos? Keep the two separate & isolate them for better security.

    Why Not Use It as a Router?[edit | edit source]

    You might wonder: can your MiniPC double as a router, since it has two Ethernet ports? These are 2.5 GbE ports, which are faster than typical 1 GbE ports, offering speeds of 250 to 290 MB/s. However, they use Realtek chipsets (likely the RTL8169) & while you can use Realtek for a firewall, you really shouldn’t. This isn’t a meme like running your own self managed mail server. It’s just a bad idea. Don’t ever mix Realtek chipsets with FreeBSD based firewalls (which pfSense is).

    IMPORTANT NOTE: Avoid using Realtek chipsets for firewall purposes. Stick to using your MiniPC as a home theater PC instead.

    Being silly: adding eight 3.5” enterprise class hard drives to the Asustor Flashstor mini-pc.[edit | edit source]

    Let’s say you chose to use this device as a server down the line. It only has NVMe slots for solid state drives. 24 terabytes of flash storage might be too little for you. If you want to use hard drives with it, you can’t plug desktop drives directly into it; but that doesn’t mean you can’t try. :)

    You can actually add eight 3.5” desktop hard drives to an asustor flashstor if you bought one with a USB-C 4.0 port. If you’re looking to expand beyond NVMe, the higher end models with USB-C ports allow this. If you wanted to go crazy, you could get the following hardware. To be clear, this is ridiculous & not recommended; but there’s something fun about doing ridiculous things. The lengths I have gone through to make use of hardware I already own are great, and I feel compelled to share some of what is possible with you.

    • USB-C to PCI Express Card enclosure: This unit allows you to plug a desktop PCI Express card slot into a computer that has a USB-C port. This is needed since the flashstor has no PCI Express card slots fit for desktop PCI Express cards. You might have to cut a hole in it for the SATA cables to come out of.
    • PCI Express Serial ATA card: This lets you plug in another 8 serial ATA desktop hard drives.
    • Mini SAS to SATA cable, SFF-8643: An SFF-8643 adapter cable goes between your PCI Express SATA card and your eight hard drives. You would need two.
    • Power Splitter: Needed for powering multiple drives, something like the startech SATA power splitter.
    • SATA drive power supply: You’d now need to power those SATA drives.
      • Something like this could power 2 drives at a time.
      • Any PC power supply that can do over 10 amps on the 12 volt rail would suffice for eight 3.5” enterprise class serial ATA hard drives, but you see why this is getting silly.
      • Either you are going to have to do some research to find a sleek looking power supply that does 10 amps at 12 volts to reliably power eight 3.5” enterprise class hard drives, OR:
      • Short the green PS_ON wire on a desktop PC power supply to the black wire with a paperclip to turn it on. Desktop PC power supplies only turn on when they are plugged into a desktop computer, and this would only be plugged into the drives.
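    Where does "over 10 amps" come from? A back-of-the-envelope budget, assuming a ballpark 0.8 A of 12 V draw per spinning 3.5” drive and a 1.5x headroom factor for seeks & spin-up surges. These numbers are my rough assumptions, not from any drive spec; check your drive’s datasheet:

```python
STEADY_AMPS_PER_DRIVE = 0.8  # rough 12 V draw of one spinning 3.5" drive

def rail_amps(drives, headroom=1.5):
    # Steady-state draw times a headroom factor, so the rail doesn't
    # sag during seek activity or spin-up surges.
    return drives * STEADY_AMPS_PER_DRIVE * headroom

print(round(rail_amps(8), 1))  # -> 9.6, which is why you want 10+ amps
```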

    Setting Up Your Home Theater[edit | edit source]

    Introduction once you have your mini-pc[edit | edit source]

    Overview of steps involved:

    LibreELEC is a GNU/Linux distribution that takes less than 90 seconds to install and boots straight into Kodi (our media center software) right out of the box. This is so easy; it just works. It’s so good you’ll forget you’re even using GNU/Linux or open source software. The steps below are as follows:

    1. Make a LibreELEC install disk to install the LibreELEC linux distribution onto our asustor home theater PC.
    2. OPTIONAL: Install NVMe drives into Asustor. My home theater PC does not store any content; that is what my server is for. If you want your Asustor to have local storage as well, you can install NVMe drives into the bottom of it.
      • Use a phillips #0 screwdriver to remove the four screws on the bottom of the Asustor.
      • Be gentle; the clips you have to pull back to fit your NVMe drive in aren’t the most durable. In fact, they remind me of the flimsy MacBook A1181 screen bezel clips that broke if you looked at them the wrong way.
      • Avoid pressing directly on the SSD’s chip when pushing it into the mini-PC. Instead, apply pressure to the pc board of the solid state drive so you don’t put pressure on the solder balls under the SSD’s chip.
    • If at any point you are debating whether to pull back harder on the clips of the asustor that hold the NVMe drive in, or to push harder on the SSD, always elect to pull harder on the NVMe clips on the asustor. The cost of those breaking is nothing; just use a piece of kapton heat resistant tape to hold the SSD in. The cost of breaking the SSD is several hundred dollars, or random reboots if you crack a solder ball, a fault that will take you months to trace back to that stupid SSD.
    3. Plug Asustor into television, keyboard, & mouse
    4. Disable secure boot/security features in the asustor BIOS (UEFI, technically) so we can install Linux on it.
    5. Erase Bloatware: Asus’s garbage software will be removed so it can never be used again, even by accident.
    6. Install Libreelec: This provides a clean, efficient operating system specially made for home theater PCs.
    7. Set Up KODI: We’ll use this to catalog media files making them easy to search & access. As soon as you turn the computer on, in less than 30 seconds it will be booted up into KODI so you can access all of your files.

    Note: This setup will automatically pull information from internet databases, giving you detailed descriptions & reviews of your content.

    Installing LibreELEC operating system with KODI[edit | edit source]

    Step 1: Creating a Bootable LibreELEC USB Drive[edit | edit source]

    1.1 Download LibreELEC[edit | edit source]

    1. Head to the LibreELEC Downloads page and download the generic image for your hardware.
      • Choose the Generic version for x86-based systems.
      • If you’re feeling adventurous, you can download versions for non-x86 architectures, but we’re focusing on an x86-based mini-PC here.
    2. The file will be in .img.gz format. You will need to unzip it.



    Step 2: Unzip the .gz File[edit | edit source]

    Instructions for GNU/Linux, macOS, and Windows:

    • Linux:

      gunzip LibreELEC-Generic.x86_64-12.0.1.img.gz

      This will extract LibreELEC-Generic.x86_64-12.0.1.img in the same directory.

    • macOS:

      1. Open Terminal and navigate to the directory with the downloaded file:

        cd /wherever/you/downloaded/it/to
      2. Use the gunzip command:

        gunzip LibreELEC-Generic.x86_64-12.0.1.img.gz
    • Windows:

      1. Download and install a tool like 7-Zip.
      2. Right-click the .gz file and select 7-Zip → Extract Here to extract the .img file.
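    Optional but wise: verify the download before writing it anywhere. LibreELEC publishes a checksum for each image on its downloads page; comparing it against your local file catches a corrupted download before it costs you a confusing failed boot. A minimal sketch, assuming the filename from the example above:

```shell
# Print the SHA-256 checksum of the downloaded image (macOS: shasum -a 256).
# Compare the result by eye against the checksum shown on the downloads page.
sha256sum LibreELEC-Generic.x86_64-12.0.1.img.gz
```

If the two strings don’t match, delete the file and download it again.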



    Step 3: Create a Bootable USB Drive[edit | edit source]

    ⚠ Warning: This process will erase everything on the USB drive.

    1. Insert a USB flash drive (at least 4GB in size) into your computer.
    2. Use one of the methods below to write the LibreELEC image to the USB drive.

    Windows:[edit | edit source]

    1. Download and install Rufus.
    2. Open Rufus and select your USB drive.
    3. Click the “SELECT” button and choose the .img file you extracted.
    4. Click “Start” and let Rufus create the bootable USB.

    macOS or GNU/Linux:[edit | edit source]

    Figure out which is the right USB Drive:[edit | edit source]

    1. Open the terminal and run:

      sudo fdisk -l
    2. Make a note of the connected drives.

    3. Insert your USB flash drive and run the command again:

      sudo fdisk -l
    4. Identify the new drive that appears. It’s usually something like /dev/sdX or /dev/diskX.

    5. Double-check that you’ve identified the correct drive:

      • Unplug the USB drive.
      • Run sudo fdisk -l again. The drive should disappear.
      • Plug it back in and confirm it reappears.
    6. If you’re sure the drive is correct, proceed.
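    If you’d rather not eyeball the two fdisk listings yourself, diff can do the comparison for you. A sketch of the same before/after check (the temp file names are arbitrary):

```shell
sudo fdisk -l > /tmp/before.txt      # run with the USB drive unplugged
# ...now plug the USB drive in, then:
sudo fdisk -l > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt  # lines starting with ">" belong to the USB drive
```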

    Write the Image to the USB Drive:[edit | edit source]

    1. Replace /dev/sdX with your USB drive’s path and run:

      sudo dd if=LibreELEC-Generic.x86_64-12.0.1.img of=/dev/sdX bs=4M status=progress
    2. Wait for the process to complete. It may take a few minutes.



    Step 4: Set up the Asustor mini-PC[edit | edit source]

    Connect to Your TV and Network:

    • HDMI Cable: Connect it from the mini PC to your television.
    • Ethernet Cable: Connect an ethernet cable so the mini-PC can reach your server’s ZFS pool.
    • Optical Audio Cable: Use this for audio output to your stereo system. Make sure you insert the optical cable correctly; unlike a USB-C cable, it only fits one way, and there are four possible orientations. That gives you a 25% chance of plugging it in without destroying the jack if you are blindly messing around trying to plug it in. Those are bad odds. Pay attention to the plug & the jack!
    • Power Cable: Plug this in last, as the Asustor Flashstor mini-PC powers on automatically when connected.

    Step 5: Boot into LibreELEC installation and install it[edit | edit source]

    1. Insert the bootable USB drive into your mini-PC.
    2. Restart the system and enter the BIOS/UEFI settings by pressing F2 over & over again as fast as possible right after the machine turns on.
    3. Go to the “Boot” menu using the right arrow key and press enter.
    4. Set the USB drive as the primary boot device.
    5. In the BIOS, disable any TPM and secure boot options that interfere with Linux installation. This is similar to what we did on the Intel NUC earlier in the guide when installing pfSense onto it.
    6. Save changes & reboot. LibreELEC will boot from the USB drive. Hitting F10 will exit the BIOS & save your changes.



    If you managed to erase your entire computer by writing the LibreELEC image to your operating system drive EVEN AFTER reading these instructions, congratulations! You’re almost as stupid as me. Almost. Don’t do that.

    Step 6: Install LibreELEC onto the Asustor[edit | edit source]

    We are erasing all of the Asustor software & replacing it. This process will take less than 90 seconds.

    Next, we install LibreELEC, which is just enough OS to run Kodi.

    1. Boot and Install: Follow the prompts to install LibreELEC onto the internal eMMC.
    2. Choose the drive you wish to install it onto, which will be the /dev/mmcblk0 device in the case of the Asustor Flashstor. That is the memory that the ASUS software is installed onto; we are erasing it to install LibreELEC & KODI.
    3. You’re done. That’s it. In & out in less than 90 seconds - amazing. :)

    NOTE: If you have not installed any new NVMe drives into the Asustor Flashstor mini-PC, there should only be one device showing up to install onto, which will be the internal eMMC at /dev/mmcblk0. If you have installed new NVMe SSDs, they will show up with /dev/nvmeXn1 notation, with X being the number of the SSD in the machine.
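    To see at a glance which device is which, lsblk lists whole disks with their names and sizes; the eMMC and any NVMe drives are easy to tell apart. The exact output depends on what’s installed in your unit:

```shell
# -d lists whole disks only (no partitions). Expect mmcblk0 for the internal
# eMMC; nvme0n1, nvme1n1, ... appear once per installed NVMe SSD.
lsblk -d -o NAME,SIZE,MODEL
```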

    Step 7: Boot into the LibreELEC system & set it up[edit | edit source]

    After installing LibreELEC, it will boot into the operating system & start KODI. The rest of the setup is a breeze.

    Networking Configuration:

    • Use the default internet connection settings.
    • There is no need to configure a static IP address for a client. Static IP addresses are for servers.
    • If you are using this to watch stuff stored on your server’s ZFS pool, disabling samba server & disabling ssh is the smart way to go. No need to run unnecessary services if you don’t have to.

    Audio Configuration:

    By default, it will output audio via the HDMI cable.

    • If your HDMI cable connects to an audio/video receiver that is hooked up to your speakers, you’re fine.
    • If your HDMI cable connects to your television, you may hear the audio through your TV speakers, which is horrible; we will need to change where Kodi outputs to.

    To change the audio output:

    1. Access System Settings: Navigate to the gear icon for settings, then Audio.
    2. Select the Audio Output Device: Choose ALSA: HDA Intel PCH, ALC888-VD Digital S/PDIF. Yours may look mildly different - we want whatever looks closest to S/PDIF digital optical/toslink output. Experiment to find which one works for you.
    3. Check Display Settings: Make sure the resolution is set to what your television is capable of. In my case, it is set to 3840 by 2160 at 60 fps.

    Why not a static IP? Didn’t we make a static IP for everything else?

    Static IPs aren’t important for a computer that doesn’t provide services. When we’re running a server, like our machine with the ZFS pool that stores our media files, we are running something where clients (aka our home theater PC) are going to want to know where to access it.

    Think of your server like your favorite store. We are going to tell our home theater PC to always go to the store to get movies(….) at 192.168.5.2 - so our server always NEEDS to be at 192.168.5.2.

    The home theater PC we are setting up right now is the “customer” - it doesn’t have to have a static IP, nor does it always have to be at the same address every day.

    A customer can visit a store from a different address every day; it makes no difference to the shopowner selling goods to the customer. However, if the store’s address changed every single day without notice, the customer would have a very hard time finding the store. They may stop going to that store altogether.

    We can use the default setup, where the machine grabs an IP address via DHCP (aka it grabs whatever’s available from the router), without concern here.

    Step 8: Adding Media Content to Kodi[edit | edit source]

    After setup, let’s add some media content to your system.

    1. Click on “Movies” or “TV shows” on the side.
    2. Click “Add Videos”
    3. Click where it says <None> in order to add an address.
    4. Add Samba Share: Use the IP address and share path to add your media content. For our server that we set up, you would use as follows to access the ZFS pool:
    •   smb://192.168.5.2/archive

    SECURITY NOTE: In my personal setup, I like to make a separate read-only user when setting up samba for my media directory, which I use for clients that will be viewing music, videos, TV, etc.

    The reason for this is that if the software I am using has a delete button I accidentally press, my cat walks on my keyboard/remote while I am watching something, or the software has a bug/glitch, I do not lose my media collection. Here is an example from my own samba configuration:

    [television]
        comment = television shows
        path = /drive1thru8/television
        browseable = yes
        read only = yes
        valid users = louis, kodi
        write list = louis
        create mask = 0644
        directory mask = 0755
        force user = louis
        force group = louis
        inherit permissions = yes
        inherit acls = yes
        ea support = yes

    This would be accessible at smb://192.168.5.2/television. My user, louis, can read & write, whereas the user kodi can only read. I would log into the samba share as kodi from my home theater PC, or any client where I solely intend to view content.
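    For the read-only account to work, a matching system & Samba user has to exist on the server. A minimal sketch for a Debian-style server with Samba already installed; the account name follows the example above, and the password is a placeholder you should change:

```shell
# Create a "kodi" system account with no home directory (it only reads media).
sudo adduser --system --no-create-home kodi
# Give it a Samba password (-s reads the password twice from stdin).
printf '%s\n%s\n' 'choose-a-password' 'choose-a-password' | sudo smbpasswd -a -s kodi
sudo smbpasswd -e kodi   # enable the account
testparm -s              # syntax-check smb.conf before reloading
sudo systemctl reload smbd
```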

    Even if Kodi’s source code were hijacked by some bastard whose goal it was to destroy our entire media library, they would not be able to.

    LESSON: It is good practice to give minimum necessary permissions to everything!

    1. Scan Media: Scan the added directories for movies, TV shows, etc., and organize them in Kodi.
    2. Choose media type: For “This directory contains”, choose the media type so Kodi is able to look things up for you about what you are watching, grab art, reviews, ratings, etc.
    3. Click on Movies/TV (whatever you just added) & search for something.
    4. Play & enjoy :)

    Performance Testing with High-Quality Media[edit | edit source]

    Once the physical setup is complete, it’s time to test how well this setup handles high-definition content.

    • Video Playback Test: Let’s see how it handles a 4K video file. I’m using a 70-80 GB file of “Batman Begins” to push the limits. See if it is able to seek within the file quickly, and if there’s any lagging on action scenes or very dark-shot areas (this is where bitrate is usually going to be highest, and therefore most difficult for cheap hardware to play back properly)
    • Audio Performance: Listen for any distortion, skipping, or digital scratching noises in the digital audio output.

    Noise Levels and Setup[edit | edit source]

    My custom water-cooled desktop with Noctua fans is noisy. In the video, I provided comparisons between it and the Asustor Flashstor mini-PC, recorded with a DPA 4065 omnidirectional mic in a normal living room, for you to hear. The Flashstor is not a completely passive device, but for most people it does amazingly well.

    “Piracy”[edit | edit source]

    We can’t talk about home theater PCs without delving into Piracy.[edit | edit source]

    We live in a world where companies are trying to normalize the idea that you don’t own what you bought & paid for anymore. Piracy is no longer an immoral act; in many cases, it is a necessity to retain what you have rightfully purchased from companies that think the word “purchase” means something different than it has for the past 700 years.

    The Death of Digital Ownership[edit | edit source]

    Sony & Discovery’s Content Removal Scam[edit | edit source]

    Sony & Discovery tried to remove customer content from their libraries AFTER they purchased it. The word PURCHASE - not rent - was used to describe the transaction.

    Discovery Entitlements Affected Titles

    As of 31 December 2023, due to our content licensing arrangements with content providers, you will no longer be able to watch any of your previously purchased Discovery content and the content will be removed from your video library.

    We sincerely thank you for your continued support.

    Thank you,

    PlayStation Store

    They might as well be telling you to go gargle their balls.

    The Hidden Redefinition of “Purchase”[edit | edit source]

    Sony’s claim is that their terms of service were changed to redefine the word “purchase” to mean something new so they could still CLAIM you were “buying” something when you were not. See their terms of service below:

    10.1. All intellectual property rights subsisting in PSN Content, including all software, data, services, and other content subsisting in or used in connection with PSN, the Online ID and access to content and hardware used in connection with PSN belong to SIE, its affiliates, and its licensors. Use of the terms “own,” “ownership”, “purchase,” “sale,” “sold,” “sell,” “rent” or “buy” in this Agreement or in connection with PSN Content does not mean or imply any transfer of ownership of any content, data or software or any intellectual property rights from SIE, its affiliates or its licensors to any user or third party.

    10.2. Except as stated in this Agreement, all Content provided through PSN is licensed on a non-exclusive and revocable basis to you for your personal, private, non-transferable, non-commercial, limited use on a limited number of PlayStation Devices or other devices in the country in which your Account is registered.

    They use the word “PURCHASE” on their website, but then hide behind this garbage buried on page 21 of their terms of service. The word “PURCHASE” has had a specific meaning since the 14th century; the Oxford English Dictionary defines “purchase” as “to acquire in exchange for payment; to buy” and “obtaining something in exchange for payment in money or an equivalent; buying.”

    A History of Anti-Consumer Behavior[edit | edit source]

    Because consumer protection in the United States is a joke, they are allowed to redefine the meaning of a 14th century word to justify taking away your personal property without refunding your money.

    If they were honest, they would put this “new” definition of the word “purchase” on their front page next to the “Add to cart” button. They don’t do that. They hide it on page 21 of a legalese terms of service they know damn well you will never read.

    They know what it would do to their sales if they said “purchase actually means we can take it back from you at any time without refunding you” in the same font size they use next to the “Add to cart” button. So they don’t.

    This is the same company that installed malware & rootkits on people’s computers when they legally paid for content; it now expects you to be an honest, upstanding citizen who buys content & allows them to take it back.

    Right.

    Streaming Services: Paying More for Less[edit | edit source]

    Netflix’s 4K scam[edit | edit source]

    Modern streaming services are equally dishonest when they try to upsell customers to a higher priced plan for higher quality video. Modern media companies are obsessed with control and want you to view and listen to content on completely locked down platforms. This is to the point where you have to build a special computer or use a television that is blatant spyware to watch the content you paid for in the advertised bitrate & resolution.

    I don’t use the Netflix application on my LG television to watch Netflix because my television attempts to collect & sell my personal data without my consent from the moment I turn it on. I find this unacceptable. I am happy to pay to watch content; but I am not going to give up my data & my privacy to do it, nor do I wish to trust such an unscrupulous piece of hardware that opts me into this by default.

    Netflix will upsell you to 4K, but nowhere on their plans page, pricing page, or help page do they tell you that you will receive a low bitrate 720p stream if you use Firefox on GNU/Linux - or a very low bitrate 1080p stream in Chrome. The only way to get a high bitrate 4K stream is as follows:

    The requirements to actually get 4K streaming working on a PC are buried in documentation and frankly absurd. You need:

    • Windows 10 or newer (not necessary when pirating)
    • HDCP 2.2 compliant monitor and GPU (not necessary when pirating)
    • GPU with HEVC hardware decoder (not necessary when pirating)
    • 4K monitor (even if you just want higher bitrate, or to view 1080p content on a 1080p monitor, which is not a problem when pirating)
    • 4K HDR monitor (some services won’t deliver 4K without HDR, which is not a problem when pirating)
    • PlayReady 3.0 support (not necessary when pirating)
    • Microsoft Edge or the Windows Store app (pirating is cross platform & operating system agnostic)
    • Intel processor with SGX enabled (AMD processors are unsupported; enjoy your oxidizing 14th gen Intel CPU. By the way, piracy is processor/platform agnostic & works on all)
    • No DisplayLink products or similar display adapters (piracy plays on any display product)

    As someone who repairs motherboards professionally, I find it unreasonable to expect average consumers to verify all these requirements before subscribing to a service that prominently advertises “4K streaming” as a feature. For comparison, game publishers clearly list their system requirements right next to the purchase button.

    Even more frustrating is that these restrictions exist purely due to DRM requirements, not technical limitations. The same content can stream perfectly fine at full quality to smart TV apps, proving the bandwidth and technology exists to deliver high-quality streams to any capable device.

    The Hidden Quality Gap[edit | edit source]

    Just as Sony doesn’t have the balls (or the integrity) to place their re-defined concept of what it means to “purchase” something on their product page, Netflix doesn’t have the balls to list their series of limitations on 4K playback on the plans & pricing page.

    Netflix KNOWS conversions would go down if consumers understood the hoops they’d have to jump through to get a higher quality stream, ESPECIALLY if they knew that they could pay for the higher quality plan and get even WORSE VIDEO QUALITY than they’d get on the normal plan just because they weren’t using hardware anointed by Netflix to properly f the user in the ass.

    In the words of Eteel from reddit:

    Publishers advertise the requirements needed to run the game, and they do it freely—literally next to or below the button you press to buy the game. And that’s even though no one actually, realistically expects to run Cyberpunk 2077 in 4k with raytracing on GTX 1080 TI. That in no way compares to the reasonable base-level expectation that you’d be able to play a video in 4k using Chrome.

    Netflix advertises no such information. They do have a help page listing which browsers support which resolution, but in order to get to it (or to even find out that it exists), you need to search for why you’re not getting 4k in the first place, and in order to search for why you’re not getting 4k in the first place, you already need to have bought the service thinking you’re going to get 4k using Chrome…

    And here’s what Netflix doesn’t tell you even on this help page: they don’t tell you that while Chrome does support 1080p, it does not support high-bitrate 1080p. Playing Vikings: Valhalla on 1080p on Edge gets you 3000 bitrate while on Chrome gets you 1000 bitrate. That’s a significant difference they don’t advertise.

    There’s still more to say about this, but I digress.

    And to address this comment of yours:

    What would that be? I understand criticism of DRMs but endorsing piracy as a solution for consumer issues would set a crazy precedent. Would it be okay for me to shoplift items that are too expensive for me to purchase? What if the cheaper ones quality doesn’t meet my demands?

    It’s not okay to shoplift items that are too expensive, but it is also not okay for Ubisoft Connect to advertise buying a license to play a game as buying the game. It literally has a button that says “Buy the game,” but when you read the fine print, it tells you that you don’t actually own the game even if you buy it (in contrast to buying a shirt which you own.) In actuality, if you buy one of their games, they still withhold the right to remove it from the store, in which case you’d be unable to play it even though you bought it. In essence, they redefine the word “buy” to not mean “own” despite the fact that in common language usage, we have always understood the two terms to exist in connection to each other.

    Keep in mind that even the 3 mbps 1080p stream is garbage. When you pirate, you have the option to download a full bitrate video file. You can download movies & television shows that have 50 mbps bitrate with high quality encoding settings, and often completely uncompressed blurays.

    Or, you can stream a piece of media using netflix at 3 mbps. 1 mbps if you’re using the wrong web browser. Or processor. Or screen cable. Or whatever.

    And the download button? Screw your download button - you can watch it until you can’t, and if you want a higher quality copy, sorry pal - you’re stuck with 1 mbps in 2024, even though a 50 mbps copy was available on usenet 14 years ago.

    Hardware & Format Restrictions[edit | edit source]

    Bluray Limitations[edit | edit source]

    Let’s not even get started on the limitations regarding 4k blurays. If you want to rip a 4k bluray, you can’t - you’re stuck at 1080p unless you buy a drive like the Pioneer BDR-2213 with older firmware that allows you to back up a copy of what you legally bought & paid for.

    Digital Books: Another Broken Promise[edit | edit source]

    Kindle Purchases and Country Restrictions[edit | edit source]

    Amazon Kindle thinks that moving to a new country means you should lose all your Kindle books. Imagine paying to buy a book and then having it disappear as your flight leaves your country’s border. Welcome to 2024.

    It gets better. Amazon has instructions on their website to “transfer” your account, but their own customer service reps are clueless on how any of it works.

    Amazon hides behind licensing agreements and geo-restrictions to justify this anti-ownership garbage. While you’re given the impression you’re “purchasing” a book, you’re actually just getting a temporary license tied to the country you bought it in. Move countries? f you, buy the book again.

    This isn’t about technical limitations. This is about control. Amazon and companies like it are obsessed with locking down what you own. They know you won’t read the fine print until you’re angry, but by then, it’s too late.

    This isn’t just about Kindle. It’s about digital purchases everywhere. You don’t actually own what you buy. Whether it’s Kindle books, movies on Amazon, or games on Sony, the story is the same: they sell you the illusion of ownership & lock you down with restriction after restriction after they’ve pocketed your money. You should consider yourself lucky if they even allow you to keep using what you bought in a restricted manner; sometimes they just take it away & leave you nothing at all.

    The Broken System of Consumer Protection[edit | edit source]

    No Real Consequences[edit | edit source]

    In the United States, our consumer protection agencies & Congress no longer create laws that protect the rights of consumers. Everything I described above is disgustingly unethical; if I advertised as deceptively as Sony, Netflix, Disney, or Discovery did, I would be fined out of business, assuming my customers hadn’t already ransacked my store & broken the windows. But these companies get away with it.

    Technically & legally, these companies are in the right for what they’re doing. And even if they weren’t, when they do something horribly illegal & unethical, our joke of a government fines them an amount that comes to 0.37% of their net profit for the year.

    Piracy is how you take back ownership when the government that exists to protect your property rights takes 37% of your paycheck, allows your $3,600 television to roofie you & steal your personal data, and allows your content providers to sell your location to bounty hunters, or kill your wife & get away with it due to a forced arbitration agreement buried in a video streaming app’s terms of service. Forced arbitration agreements like the one Disney tried to use to shield itself from any liability for a person’s death are still legal in America today.

    A Personal Note: Supporting Content While Rejecting Control[edit | edit source]

    I pay for content; you should too.[edit | edit source]

    I buy & pay for what I find valuable, whether it’s my bluray copy of Tori Amos’ Live at Montreux concert from 1992 or my 22 year old copy of SuSE Linux Professional 8.1 that I bought at Best Buy for $79.99. Not only do I pay for 32 year old concerts, I pay for software you can legally download for free if I think it’s worthwhile. For all the trouble I give open source software, I paid for a copy of GNU/Linux back when you had to compile your own kernel to burn a CD.

    I believe in paying for what I find valuable. It empowers me to ask for what I am worth when I provide value to others. I believe in fair exchange of value.

    That being said: I will never let someone else tell me what I CAN or CANNOT do with what I bought and paid for; nor will I ever tolerate being provided a worse experience as a paying customer than what I get as a non-paying customer. The limitations placed on your experience when you buy media are not worse due to scarcity or technological limitations; rather, the technological limitations are PURPOSELY PUT IN PLACE BY THE PERSON YOU ARE PAYING.

    At my business, we see our customers as partners; not adversaries. If a business I am seeking a service or good from treats me like the enemy after I’ve given them money; I will treat them in kind.

    Piracy is how you retain control over what you bought and paid for. Never feel guilty about that. But remember that it is on us to pay for what we find valuable, & demonstrate that we are willing to pay for what we find valuable, if we want to live in a world where non-abusive business models win.

    A Nuanced View of Digital Rights & Piracy[edit | edit source]

    Not all situations where customers choose piracy are equal. Here’s a hierarchy of scenarios, ordered from most to least “justifiable,” to make the point. The words “piracy” and “copyright infringement” are often used to paint as criminals anyone who does not accept being bent over by companies that wish to re-define what it means to “own” something, if not take away the concept of ownership completely. Do not accept the premise of assholes, or laws, that pretend that each of the following scenarios is the same.

    Legitimate Ownership Issues[edit | edit source]

    1. Degraded Physical Media - No Replacement Available[edit | edit source]

    You paid for physical media that has degraded. The content is no longer for sale anywhere, and you need a way to access what you rightfully & legally purchased. It is still protected by copyright, but you are literally incapable of purchasing it again due to lack of availability that is not your fault.

    2. Degraded Physical Media - Replacement Available[edit | edit source]

    You paid for physical media that has degraded. While you could buy it again, you’ve already paid the creators once for lifetime access.

    3. Lost Digital Purchase - No Repurchase Option[edit | edit source]

    You purchased digital media that was accidentally erased/lost, and it’s no longer available for sale anywhere.

    4. Lost Digital Purchase - Repurchase Available[edit | edit source]

    You purchased digital media that was accidentally erased/lost. While it’s still for sale, you’ve already paid once for what was advertised as a “purchase.”

    Corporate Deception & Control[edit | edit source]

    5. The “Purchase” That Wasn’t[edit | edit source]

    You PURCHASED digital media, using the commonly understood definition of PURCHASE that has existed since the 14th century and that 99% of customers understand - permanent ownership. But it stopped working because someone you never met decided “fuck you”.

    6. Bait & Switch Streaming[edit | edit source]

    You paid for a streaming service specifically advertised with certain content. That content was removed with no refund option, & now requires a second subscription to a different service to access.

    7. Rental vs. Purchase Confusion[edit | edit source]

    The distinction between rental & purchase was unclear or deliberately obscured so you’d think you were PURCHASING something.

    Technical Restrictions & Quality Issues[edit | edit source]

    8. The 4K Lockout[edit | edit source]

    You paid for higher quality content (like 4K) but received lower quality (720p/1080p), or the same resolution at a radically reduced, horrible bitrate, due to artificial DRM restrictions that were buried at the end of a bs 30 page EULA, if disclosed to you at all. Your hardware is fully capable, but artificial limitations put in place by the content distributor keep you from using what you bought & paid for.

    9. DRM Workarounds While Supporting Creators[edit | edit source]

    You purchase physical media to support creators but use a pirated copy to avoid DRM restrictions or long shipping delays.

    Random Shitty Scenarios[edit | edit source]

    10. Region Lock Issues[edit | edit source]

    Content is completely unavailable in your region with no legal purchase option, even though you’re willing to pay.

    11. DRM Protest Without Support[edit | edit source]

    You reject DRM-restricted content but also choose not to purchase available DRM-free options when they exist, turning a blind eye.

    12. Selective Support[edit | edit source]

    You support creators directly but won’t acknowledge how distributors & other parts of the content creation pipeline add value (paying for studios, people who support the recording & making of content, etc.)

    Indefensible Positions[edit | edit source]

    13. False Justification[edit | edit source]

    Using DRM and middlemen as excuses while never actually supporting creators in any way.

    14. Empty Protests[edit | edit source]

    Claiming DRM opposition while pirating even when DRM-free options exist.

    15. Simply being an asshole[edit | edit source]

    Taking content with no intention to ever support creators; even the ones you truly enjoy, even when you have the money to pay for it, while using a litany of excuses to justify the behavior.

    16. "I just want free stuff."[edit | edit source]

    No justification, no excuse, no attempt to support creators—just pure entitlement.

    Conclusion[edit | edit source]

    While many of these scenarios are brought about by legitimate grievances with the current state of you-own-nothing digital media with spyware on top, in my opinion, there’s a clear ethical distinction between retaining access to content you’ve purchased versus never intending to support people who have provided you value. The higher items on this list represent what I find to be genuine consumer rights abuses, while the lower items represent entitled cunts hiding behind moral superiority who lack the honesty to say they just don’t want to pay for anything. Even an “I hate that industry and want to bleed them dry & don’t care about the consequences” would be more acceptable to me, for at least it’s honest.

    When I advocate for having full control over what you buy & pay for, I’m specifically addressing the upper scenarios, where customers have made good-faith attempts to support creators but are getting screwed left & right by content companies & distributors through artificial restrictions, deceptive practices, and technical limitations.

    Final Thoughts[edit | edit source]

    The joy of this process is in making it your own. I gave you a rough outline here of what you can do; a guide that shows you what is possible, so that you could have those little kicks of dopamine that show up when something works. Those kicks of dopamine are imperative to you feeling good & moving forward. Without them, most people give up & stop trying. If you stop trying, you never learn.

    The purpose of this guide wasn’t to tell you this is the only way to do all of these things. Rather, it was to provide you a framework that I 100% know works, since I followed it myself. I’ve already set up a system like this, one chunk at a time, over 14 years. I can tell you what to do, but putting together instructions that actually work is only possible by running through the process in realtime to ensure everything I am telling you to do actually works for me. Because I go through the guide as I write it, if I leave something out, what I am doing won’t work - and I’ll catch it.

    My hope is that once you are unencumbered by the Linux-isms & open source-isms & RTFM elitist forum assholes who link you to documentation that is wrong or makes no sense, you’ll feel empowered to make something that kicks ass on your own. This is not the only way to do this, nor is this even the “right” way. There’s no such thing as the “right” way (although there are many WRONG ways!).

    You don’t have to clone this setup. Figure out what works for you, build something cool in small pieces & baby steps. You don’t have to do it all at once. Enjoy the journey! I can’t wait to see what you build. That’s it for today, & as always, I hope you learned something!