= Setting up ZFS for data storage =

<span id="how-were-storing-our-data"></span>
== How we’re storing our data: ==

We’re not keeping your 40 terabytes of GNU/Linux ISOs on solid state storage. That is a waste of money & resources (unless you’re insanely rich). I set up the system drives on SSDs so that my photos, documents, mail, and android backups would be quickly accessible and these services highly responsive. I don’t need that level of responsiveness for my collection of GNU/Linux ISOs, though. This is where ZFS pools come into play.

<span id="what-is-zfs"></span>
=== What is ZFS? ===

ZFS is a complete storage management system that combines:

* File system functionality
* Volume management
* RAID capabilities
* Data integrity checking
* Automatic repair features

It’s like having a RAID controller, Linux LVM, and a file system all in one.

<span id="why-zfs"></span>
=== Why ZFS? ===

<span id="data-integrity-built-in"></span>
==== 1. Data Integrity Built-In ====

* ZFS constantly checks for corruption using checksums
* ZFS automatically repairs corrupted files if you have redundancy
* ZFS saved me twice from the consequences of my bad decisions when I bought Seagate products.

<span id="snapshots-that-actually-work-although-im-not-getting-into-that-here"></span>
==== 2. Snapshots That Actually Work (although I’m not getting into that here) ====

* Take instant snapshots that don’t eat up space
* Roll back changes when you inevitably mess something up
* Keep multiple versions of files without doubling storage needs

<span id="dynamic-stripe-sizes"></span>
==== 3. Dynamic Stripe Sizes ====

* Unlike hardware RAID, ZFS can adjust stripe size on the fly

<span id="zfs-encryption"></span>
== ZFS Encryption: ==

<span id="setting-up-encryption"></span>
=== Setting Up Encryption ===

You have two choices:

# '''Pool-wide encryption''':
#* Everything in the pool is encrypted, or
# '''Dataset-level encryption''':
#* Encrypt only specific datasets
#* Different keys for different datasets
#* More confusing, not necessary IMO here.

<blockquote>'''NOTE''': If you’re encrypting a pool for home use, pool-wide encryption is usually the way to go. Keep it simple unless you have a specific reason not to.
</blockquote>
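If you ever do want dataset-level encryption instead, for example an otherwise unencrypted pool with one encrypted dataset for sensitive files, a minimal sketch looks like this. The pool name <code>mediapool</code> matches the rest of this guide; the dataset name <code>private</code> is just an example:

<pre># Create a single encrypted dataset inside a pool (hypothetical dataset name)
sudo zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt mediapool/private</pre>

We won’t use this here; the rest of the guide encrypts the whole pool at creation time.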
<span id="whats-a-zfs-pool"></span>
== What’s a ZFS Pool? ==

* Traditional setup: Disk → Partition → Filesystem
* ZFS setup: Disks → Pool → Datasets

The pool:

* Manages all your physical drives
* Handles redundancy (like RAID)
* Provides a storage “pool” that datasets can use

It’s like having a fish pond (the pool) that different fish (datasets) can draw from, rather than a different water tank for each koi fishy.

<span id="understanding-zfs-redundancy"></span>
== Understanding ZFS Redundancy ==

ZFS has built-in redundancy options that are similar to RAID but better implemented. Here are the main types. You choose what works for you:

<span id="mirror-similar-to-raid-1"></span>
=== Mirror (Similar to RAID 1) ===

<pre>Disk 1 ───┐
          ├── Identical copies
Disk 2 ───┘</pre>

* Writes data to multiple disks
* Can lose any disk and still work
* 50% storage efficiency (2 drives = 1 drive’s worth of storage)

<span id="raidz1-similar-to-raid-5"></span>
=== RAIDZ1 (Similar to RAID 5) ===

<pre>Disk 1 ───┐
Disk 2 ───┼── Distributed data + parity
Disk 3 ───┘</pre>

* Can lose one drive
* ~67-75% storage efficiency
* Minimum 3 drives needed

<span id="raidz2-similar-to-raid-6"></span>
=== RAIDZ2 (Similar to RAID 6) ===

<pre>Disk 1 ───┐
Disk 2 ───┤
Disk 3 ───┼── Distributed data + double parity
Disk 4 ───┤
Disk 5 ───┘</pre>

* Can lose ANY two drives
* ~60-80% storage efficiency
* Minimum 4 drives needed
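To make those efficiency percentages concrete, here is the arithmetic for the six 8 TB drives this guide ends up using (usable space is approximate; ZFS overhead shaves a little off):

<pre>Mirror (three 2-way mirrors): 6 x 8 TB / 2   = 24 TB usable, survives 1 failure per mirror pair
RAIDZ1 (single parity):       (6 - 1) x 8 TB = 40 TB usable, survives any 1 drive failure
RAIDZ2 (double parity):       (6 - 2) x 8 TB = 32 TB usable, survives any 2 drive failures</pre>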
<span id="key-differences-from-hardware-raid"></span>
=== Key Differences from Hardware RAID: ===

# '''No RAID controller needed'''
# '''Self-healing'''
#* Detects & fixes corruption automatically
#* Hardware RAID only handles drive failures.
#* ZFS handles drive failures AND data corruption!

<blockquote>'''HINT''': '''ZFS IS NOT A BACKUP!''' ZFS redundancy protects against drive failures, but it’s NOT a backup. If you accidentally delete a file or your server dies in a fire, redundancy won’t help you. This is PART of a proper backup solution; it is not in & of itself THE backup solution! Always have proper backups!
</blockquote>

<span id="step-1-choose-hard-drives-that-wont-send-you-to-rossmann-data-recovery-using-backblaze-data"></span>
== Step 1: Choose Hard Drives That Won’t Send you to [https://rossmanngroup.com/hard-drive-data-recovery-service/ Rossmann Data Recovery] using [https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data Backblaze Data] ==

If you spend nine hours setting this server up only to put your data on a Seagate rosewood, I will come through your television like Samara from the ring and pull you down a well.

You could either

<ol style="list-style-type: lower-alpha;">
<li><p>trust [https://www.youtube.com/watch?v=qZCMislL6_I&t=49s amazon reviews].</p></li>
<li><p>use data from a company that runs over 260,000 hard drives & publishes their failure rates quarterly.</p></li>
<li><p>use a Seagate EXOS or rosewood.</p></li></ol>

In order of bad ideas, C, A, then B. We will be doing B.

<span id="find-backblazes-drive-stats-here"></span>
=== Find Backblaze’s Drive Stats [https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data here] ===

When Backblaze publishes failure rates, they’re telling you what drives cost them money to replace. They don’t care which manufacturer looks good. They are honest about which drives are trash, and they run them 24/7 in actual mission-critical server environments.

<span id="tips-for-reading-their-reports"></span>
=== Tips for reading their reports: ===

When you look at their quarterly reports, focus on:

# '''Annualized Failure Rate (AFR)'''
#* Under 1% = Great
#* 1-2% = Acceptable
#* Over 2% = No.
#* Over 3% = Probably a Seagate rosewood or grenada, you might as well be giving your data to a [https://www.youtube.com/watch?v=qFVwQCFhKSE NYS tax collector]
# '''Drive Age & Sample Size'''
#* A 0% failure rate is useless if they only have 10 drives. Look for models with 1,000+ samples

<div class="figure">
<gallery mode="packed-hover" heights=250 widths=400 perrow=2>
File:lu67917r1ezu_tmp_a8d16e37.png
</gallery>
</div>
<div class="figure">
<gallery mode="packed-hover" heights=250 widths=400 perrow=2>
File:lu67917r1ezu_tmp_5c0f8fea.png
</gallery>
</div>

* Pay attention to how long they’ve been using the drive you’re looking at.

'''Remember: The goal isn’t to spend five hours figuring out what drives are the best, it’s to spend a few minutes to learn which are the worst. A 0.32% vs 0.34% failure rate difference doesn’t matter; a 0.32% vs 3.2% difference is what we’re looking to avoid.'''

<span id="step-1.5-label-your-drive-bays-as-you-plug-them-in."></span>
== Step 1.5: Label your drive bays as you plug them in. ==

I like to put the serial number of each drive on its bay, or, if that’s not possible without blocking airflow, on the bottom or top of the case in line with the drive bay. This way if I need to take a drive out I don’t have to guess which is which. The ''[https://www.rosewill.com/rosewill-rsv-l4412u-black/p/9SIA072GJ92847?seoLink=server-components&seoName=Server%20Chassis Rosewill RSV-L4412U server case]'' is a very nice case for this purpose.

<gallery mode="packed-hover" heights=250 widths=400 perrow=2>
File:lu67917r1ezu_tmp_49d72764.png
File:lu67917r1ezu_tmp_1f3f2e5c.png
</gallery>

<span id="step-2-installing-zfs-on-ubuntu-server"></span>
== Step 2: Installing ZFS on Ubuntu Server ==

We are setting up ZFS on the host system that all of our virtual machines are running on, which is <code>happycloud.home.arpa</code> at <code>192.168.5.2</code>.

<span id="update-system-packages"></span>
==== 2.1 Update System Packages ====

First, make sure your system is up to date:

<pre>sudo apt update && sudo apt upgrade -y</pre>

<span id="install-zfs-drive-monitoring-packages"></span>
==== 2.2 Install ZFS & Drive Monitoring Packages ====

Install the ZFS utilities:

<pre>sudo apt install zfsutils-linux smartmontools -y</pre>

<span id="load-zfs-kernel-module"></span>
==== 2.3 Load ZFS Kernel Module ====

ZFS should load automatically, but make sure it’s loaded:

<pre>lsmod | grep zfs</pre>

If you don’t see output, load it manually:

<pre>sudo modprobe zfs</pre>

<span id="configure-system-for-zfs"></span>
==== 2.4 Configure System for ZFS ====

'''Adjust memory settings for ZFS:'''

Create a new sysctl configuration file:

<pre>sudo nano /etc/sysctl.d/10-zfs.conf</pre>

Add these lines to tune the kernel’s memory management for a ZFS host (the ARC size cap itself is a module parameter, covered in the note after section 2.5):

<pre># Kernel memory tuning for a ZFS host
vm.swappiness=1
vm.min_free_kbytes=1524288
vm.watermark_scale_factor=200</pre>

<span id="apply-sysctl-settings"></span>
==== 2.5 Apply Sysctl Settings ====

<pre>sudo sysctl -p /etc/sysctl.d/10-zfs.conf</pre>
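A note on the ARC itself: the sysctl values above tune general kernel memory behavior, but the actual ceiling on how much RAM ZFS’s cache may use is the <code>zfs_arc_max</code> module parameter. A minimal sketch, assuming a machine with 16 GB of RAM where you want ZFS to use at most half of it (adjust the byte count for your own system):

<pre># /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (8 * 1024 * 1024 * 1024 bytes)
options zfs zfs_arc_max=8589934592</pre>

To apply it right away without a reboot, and to confirm the current cap and usage:

<pre>echo 8589934592 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
grep -E '^(c_max|size)' /proc/spl/kstat/zfs/arcstats</pre>

If ZFS ever ends up loading from your initramfs (for example with root on ZFS, which is not the case in this guide), also run <code>sudo update-initramfs -u</code> so the setting is baked in there too.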
<span id="set-up-automatic-module-loading"></span>
==== 2.6 Set Up Automatic Module Loading ====

Create a new file to make sure ZFS loads at boot:

<pre>sudo nano /etc/modules-load.d/zfs.conf</pre>

Add this line:

<pre>zfs</pre>

<span id="make-sure-install-worked"></span>
==== 2.7 Make Sure Install Worked ====

Run a quick check of ZFS commands:

<pre># Check ZFS command availability
zfs list
zpool list

# Both commands should work (though they'll show no pools yet)</pre>

<span id="best-practices"></span>
=== Best Practices: ===

* Set <code>vm.swappiness=1</code> (use swap only when necessary)
* Keep around 1 gigabyte of RAM per 1TB storage for basic usage
* Use a separate boot drive from the ZFS pool
* Set up notifications if something dies (we’ll cover this later)
* Plan a regular scrub schedule (a minimal example is sketched after section 5.5)

<span id="step-3-identify-your-hard-drives-in-ubuntu-server"></span>
== Step 3: Identify Your Hard Drives in Ubuntu Server ==

<span id="quick-commands-to-list-drives"></span>
=== Quick Commands to List Drives ===

<span id="list-basic-drive-info"></span>
==== 3.1 List Basic Drive Info ====

<pre>lsblk</pre>

Example output:

<pre>louis@happycloud:~$ lsblk
NAME                             MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                                8:0    0 232.9G  0 disk
├─sda1                             8:1    0   512M  0 part
├─sda2                             8:2    0     1G  0 part
│ └─md127                          9:127  0  1022M  0 raid1 /boot
└─sda3                             8:3    0 231.4G  0 part
  └─md126                          9:126  0 231.3G  0 raid1
    └─dm_crypt-0                 252:0    0 231.2G  0 crypt
      └─ubuntuinstall-ubunturoot 252:1    0 231.2G  0 lvm   /
sdb                                8:16   0   7.3T  0 disk
sdc                                8:32   0 232.9G  0 disk
├─sdc1                             8:33   0   512M  0 part  /boot/efi
├─sdc2                             8:34   0     1G  0 part
│ └─md127                          9:127  0  1022M  0 raid1 /boot
└─sdc3                             8:35   0 231.4G  0 part
  └─md126                          9:126  0 231.3G  0 raid1
    └─dm_crypt-0                 252:0    0 231.2G  0 crypt
      └─ubuntuinstall-ubunturoot 252:1    0 231.2G  0 lvm   /
sdd                                8:48   0   7.3T  0 disk
sde                                8:64   0   7.3T  0 disk
sdf                                8:80   0   7.3T  0 disk
sdg                                8:96   0   7.3T  0 disk
sdh                                8:112  0   7.3T  0 disk</pre>

<span id="show-more-detailed-info-including-serial-numbers"></span>
==== 3.2 Show More Detailed Info (including serial numbers) ====

<pre>lsblk -o NAME,SIZE,MODEL,SERIAL</pre>

Example output:

<pre>louis@happycloud:~$ lsblk -o NAME,SIZE,MODEL,SERIAL
NAME                               SIZE MODEL            SERIAL
sda                              232.9G Samsung SSD 870  S61VNJ0R413909T
├─sda1                             512M
├─sda2                               1G
│ └─md127                         1022M
└─sda3                           231.4G
  └─md126                        231.3G
    └─dm_crypt-0                 231.2G
      └─ubuntuinstall-ubunturoot 231.2G
sdb                                7.3T ST8000VN004-2M21 WSD5720G
sdc                              232.9G Samsung SSD 870  S61VNG0NC09403N
├─sdc1                             512M
├─sdc2                               1G
│ └─md127                         1022M
└─sdc3                           231.4G
  └─md126                        231.3G
    └─dm_crypt-0                 231.2G
      └─ubuntuinstall-ubunturoot 231.2G
sdd                                7.3T ST8000VN004-2M21 WSD5725W
sde                                7.3T WDC WD80EFZX-68U VKJ28YJX
sdf                                7.3T WDC WD80EFZX-68U VKJ02D0X
sdg                                7.3T WDC WD80EFZX-68U VKHZVJ7X
sdh                                7.3T WDC WD80EFZX-68U VKJ1N8KX
louis@happycloud:~$</pre>

<span id="check-drive-health-and-additional-info"></span>
==== 3.3 Check Drive Health and Additional Info ====

<pre>louis@happycloud:~$ sudo smartctl -i /dev/sdd
smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.0-47-generic] (local build)
Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Seagate IronWolf
Device Model:     ST8000VN004-2M2101
Serial Number:    WSD5725W
LU WWN Device Id: 5 000c50 0e3407989
Firmware Version: SC60
User Capacity:    8,001,563,222,016 bytes [8.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        In smartctl database 7.3/5528
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Oct 23 21:10:14 2024 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

louis@happycloud:~$ sudo smartctl -a /dev/sdd | grep -E 'Command_Timeout|Error_Rate'; echo "";
  1 Raw_Read_Error_Rate     0x000f   074   064   044    Pre-fail  Always       -       26263737
  7 Seek_Error_Rate         0x000f   089   060   045    Pre-fail  Always       -       766811756
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0</pre>
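Optionally, before trusting brand-new drives with data, you can kick off a long SMART self-test on each one and check back a few hours later. A quick sketch, using <code>/dev/sdd</code> as the example drive (repeat for each disk):

<pre># Start an extended (long) self-test; the drive runs it internally in the background
sudo smartctl -t long /dev/sdd

# A few hours later, check the self-test log for "Completed without error"
sudo smartctl -l selftest /dev/sdd</pre>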
<blockquote>'''HINT''': Write down the serial numbers of your drives and which ports they’re connected to. If a drive fails, you’ll want to know exactly which physical drive to replace.
</blockquote>

<span id="understanding-the-output"></span>
==== 3.4 Understanding the Output: ====

* In this case, <code>/dev/sda</code> and <code>/dev/sdc</code> are the two SSDs that comprise the RAID 1 array that Ubuntu Linux Server is installed on.
* <code>sdb</code>, <code>sdd</code>, <code>sde</code>, <code>sdf</code>, <code>sdg</code>, and <code>sdh</code> are the hard drives we plugged in.
* The letters go in order of how they’re connected to the motherboard (sometimes).
* Numbers after letters (like <code>sda1</code>) represent partitions

Now you know which drive is which, so let’s set up a ZFS pool.

<span id="step-4-creating-an-encrypted-zfs-pool-with-single-drive-redundancy"></span>
== Step 4: Creating an Encrypted ZFS Pool with Two-Drive Redundancy ==

'''What We’re Setting Up'''

* 6 drives in a RAIDZ2 configuration (similar to RAID6)
* Full encryption with password
* Two drives worth of redundancy
* Ability to survive up to two drive failures

<span id="verify-our-drives"></span>
=== 4.1 Verify Our Drives ===

First, let’s double-check we’re using the right drives:

<pre>lsblk -o NAME,SIZE,MODEL,SERIAL</pre>

You should see your two operating system drives listed, and the six hard drives we plugged in. Let’s make absolutely sure they’re empty:

<pre># Check if drives have any existing partitions
sudo fdisk -l /dev/sd[bdefgh]</pre>

If you see any partitions, you might want to clear them:

<pre># Only run these if you're SURE these are the right drives
# THIS WILL ERASE ALL DATA ON THESE DRIVES
sudo wipefs -a /dev/sdb
sudo wipefs -a /dev/sdd
sudo wipefs -a /dev/sde
sudo wipefs -a /dev/sdf
sudo wipefs -a /dev/sdg
sudo wipefs -a /dev/sdh</pre>

<span id="create-the-encrypted-pool"></span>
==== 4.2 Create the Encrypted Pool ====

We’ll create a RAIDZ2 pool (similar to RAID6) with encryption:

<pre>sudo zpool create -o ashift=12 -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase mediapool raidz2 /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh</pre>

What do these options do?

* <code>-o ashift=12</code>: Optimizes for 4K sector drives
* <code>-O encryption=aes-256-gcm</code>: Enables strong encryption
* <code>-O keylocation=prompt</code>: Tells ZFS to ask for password
* <code>-O keyformat=passphrase</code>: Use a password instead of keyfile
* <code>raidz2</code>: Two drive redundancy
* <code>mediapool</code>: Name of your pool (can be whatever you want)

You’ll be prompted for a password. '''USE A STRONG PASSWORD AND DON’T FORGET IT!'''
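One optional variation, not used in this guide: the <code>/dev/sdX</code> letters can shuffle around between boots or when controllers are added. ZFS generally copes with this, but if you prefer, you can build the pool from the stable <code>/dev/disk/by-id/</code> names instead, which embed the model and serial number. A sketch, where the by-id names are placeholders to be replaced with the ones <code>ls</code> shows for your own drives:

<pre># List stable device names (model + serial), ignoring partition entries
ls -l /dev/disk/by-id/ | grep -v part

# Hypothetical example, substitute your own six by-id names
sudo zpool create -o ashift=12 -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase \
    mediapool raidz2 \
    /dev/disk/by-id/ata-MODEL_SERIAL1 /dev/disk/by-id/ata-MODEL_SERIAL2 \
    /dev/disk/by-id/ata-MODEL_SERIAL3 /dev/disk/by-id/ata-MODEL_SERIAL4 \
    /dev/disk/by-id/ata-MODEL_SERIAL5 /dev/disk/by-id/ata-MODEL_SERIAL6</pre>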
<span id="set-good-pool-properties"></span>
==== 4.3 Set Good Pool Properties ====

After creation, let’s set some good default properties:

<pre># Enable compression
sudo zfs set compression=lz4 mediapool

# Disable atime updates (better performance)
sudo zfs set atime=off mediapool

# Set correct recordsize for general media storage
sudo zfs set recordsize=1M mediapool</pre>

<span id="verify-pool-creation"></span>
==== 4.4 Verify Pool Creation ====

Check that everything is set up correctly:

<pre># Check pool status
sudo zpool status mediapool

# Check pool properties
sudo zpool get all mediapool

# Check encryption is enabled
sudo zfs get encryption mediapool</pre>

The <code>zpool status</code> output should show something like:

<pre>louis@happycloud:~$ sudo zpool status mediapool
  pool: mediapool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        mediapool   ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0

errors: No known data errors</pre>

<span id="create-the-datasets-for-your-data-virtual-machine-backups"></span>
==== 4.5 Create the Datasets for your data & virtual machine Backups ====

The datasets themselves get created in Step 5; for now, lock down the pool’s top-level mountpoint. Set permissions:

<pre># Set ownership (replace 'louis' with your actual username)
sudo chown louis:louis /mediapool

# Set permissions (only you can access it)
sudo chmod 700 /mediapool</pre>

<span id="test-pool-importexport"></span>
==== 4.6 Test Pool Import/Export ====

Let’s make sure we can properly mount/unmount the pool:

<pre># Export (unmount) the pool
sudo zpool export mediapool

# Import it back
sudo zpool import mediapool</pre>

A plain <code>zpool import</code> brings the pool back with its datasets still locked; you’ll have to enter the password with <code>sudo zfs load-key mediapool</code> in order to do anything with it, but we will do that later.
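For reference, this is the minimal sequence to bring the pool back up after a reboot or an export, assuming the <code>mediapool</code> name and passphrase prompt used in this guide (<code>zpool import -l</code> also exists and asks for the key during the import in one step):

<pre># Import the pool (datasets stay locked, no password asked yet)
sudo zpool import mediapool

# Load the encryption key; this is where you type the passphrase
sudo zfs load-key mediapool

# Mount any datasets that aren't mounted yet
sudo zfs mount -a</pre>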
<span id="important-notes"></span>
==== Important Notes ====

# '''BACKUP YOUR POOL PASSWORD!'''
#* If you lose it, your data is GONE
#* Store it in a password manager (that you don’t self-host)
#* Consider a paper backup in a secure location that is not a post-it-note on your monitor.
# '''Space Available'''
#* Total raw capacity: 6 × 8TB = 48TB
#* RAIDZ2 uses 2 drives for parity, so you lose 2 drives worth of capacity
#* Usable space is 4 × 8TB = 32TB
# '''What Redundancy Gives You'''
#* Can survive any two drive failures
#* '''Not a backup! Still need proper backups'''

<span id="step-5-setting-up-zfs-pool-mount-points-and-permissions"></span>
== Step 5: Setting Up ZFS Pool Mount Points and Permissions ==

<span id="creating-the-base-dataset-structure"></span>
==== 5.1 Creating the Base Dataset Structure ====

First, let’s unlock the pool and create our datasets:

<pre># Load the encryption key so we can work:
sudo zfs load-key mediapool

# Create the virtual machine backup dataset where we'll store VM images
sudo zfs create -o mountpoint=/mediapool/vmbackups mediapool/vmbackups

# Create the storage backup dataset where we'll store Linux ISOs and cooking recipes
sudo zfs create -o mountpoint=/mediapool/archive mediapool/archive</pre>

<span id="setting-permissions-for-regular-user-access"></span>
==== 5.2 Setting Permissions for Regular User Access ====

Set ownership for the main archive directory:

<pre># Set ownership of the main archive directory to louis
sudo chown louis:louis /mediapool/archive

# Set base permissions (rwx for owner, rx for group and others)
sudo chmod 755 /mediapool/archive</pre>

<span id="securing-vmbackups-directory-for-root-only"></span>
==== 5.3 Securing vmbackups Directory for Root Only ====

Set restricted permissions on the vmbackups directory:

<pre># Set vmbackups to be owned by root
sudo chown root:root /mediapool/vmbackups

# Set permissions to allow only root access (rwx for root, none for others)
sudo chmod 700 /mediapool/vmbackups</pre>

<span id="verify-the-settings"></span>
==== 5.4 Verify the Settings ====

Check that everything is set correctly:

<pre># Check ZFS mountpoints
zfs get mountpoint mediapool/archive
zfs get mountpoint mediapool/vmbackups

# Check permissions
ls -la /mediapool/archive
ls -la /mediapool/vmbackups

# Verify dataset properties
zfs get all mediapool/archive
zfs get all mediapool/vmbackups</pre>

Expected output for the permissions check; note that user <code>louis</code> cannot list the <code>vmbackups</code> directory without sudo.

<pre>louis@happycloud:~$ zfs get mountpoint mediapool/archive
NAME               PROPERTY    VALUE               SOURCE
mediapool/archive  mountpoint  /mediapool/archive  local
louis@happycloud:~$ zfs get mountpoint mediapool/vmbackups
NAME                 PROPERTY    VALUE                 SOURCE
mediapool/vmbackups  mountpoint  /mediapool/vmbackups  local
louis@happycloud:~$ ls -la /mediapool/archive
total 21
drwxr-xr-x 2 louis louis    2 Oct 23 21:45 .
drwxr-xr-x 4 root  root  4096 Oct 23 21:45 ..
louis@happycloud:~$ ls -la /mediapool/vmbackups
ls: cannot open directory '/mediapool/vmbackups': Permission denied
louis@happycloud:~$ sudo ls -la /mediapool/vmbackups
total 21
drwx------ 2 root root    2 Oct 23 21:44 .
drwxr-xr-x 4 root root 4096 Oct 23 21:45 ..</pre>

<span id="test-access"></span>
==== 5.5 Test Access ====

Test the permissions are working:

<ol style="list-style-type: decimal;">
<li><p>As user ‘louis’:</p>
<pre># Should work
touch /mediapool/archive/testfile

# Should fail
touch /mediapool/vmbackups/testfile</pre></li>
<li><p>As root:</p>
<pre># Should work
sudo touch /mediapool/vmbackups/testfile</pre></li></ol>

If any of these tests don’t work as expected, double-check the permissions and ownership settings above.
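The best-practices list back in Step 2 mentioned planning a regular scrub schedule; a scrub is what makes ZFS read every block and repair anything whose checksum doesn’t match. A minimal sketch: run one by hand to see what it looks like, then schedule it monthly with cron. Note that Ubuntu’s <code>zfsutils-linux</code> package may already ship a similar job under <code>/etc/cron.d/</code>, so check before adding a duplicate.

<pre># Kick off a scrub now and watch its progress
sudo zpool scrub mediapool
sudo zpool status mediapool</pre>

A hypothetical <code>/etc/cron.d/zfs-scrub</code> entry that scrubs at 03:00 on the first of every month:

<pre>0 3 1 * * root /usr/sbin/zpool scrub mediapool</pre>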
<span id="frigate-camera-footage-storage"></span>
==== 5.6 frigate camera footage storage ====

Earlier in the guide, we set up '''frigate''' for recording security camera footage. We left it recording to the frigate installation folder. '''This is bad. Recording to the main solid state drive is a waste of space & SSD life.''' Archived camera footage belongs on a giant hard drive, not an expensive SSD.

If you’d like, you can now go back to the frigate config section and change these two lines:

<pre> - ./storage:/media/frigate
 - ./database:/data/db</pre>

to something like:

<pre> - /mediapool/archive/camerafootage/media/frigate:/media/frigate
 - /mediapool/archive/camerafootage/data/db:/data/db</pre>

(In a docker-compose volume entry, the host path goes on the left of the colon and the path inside the container on the right, so it’s the left side that has to point at the pool; the container keeps seeing <code>/media/frigate</code> and <code>/data/db</code>.)

Of course, make the directories first:

<pre>mkdir -p /mediapool/archive/camerafootage/data/db
mkdir -p /mediapool/archive/camerafootage/media/frigate</pre>

If you want to keep things separate, you could create a third dataset called <code>camerafootage</code>, mount it to <code>/mediapool/camerafootage</code>, and then edit the <code>docker-compose.yml</code> file to look like this:

<pre> - /mediapool/camerafootage/media/frigate:/media/frigate
 - /mediapool/camerafootage/data/db:/data/db</pre>

And make sure the directories have been created before running frigate:

<pre>mkdir -p /mediapool/camerafootage/data/db
mkdir -p /mediapool/camerafootage/media/frigate</pre>

The full file is provided below, with the assumption that you decided to make a <code>camerafootage</code> dataset that is mounted on <code>/mediapool/camerafootage</code>:

<pre>version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # This may not be necessary for all setups
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.13.2 # Last good version
    shm_size: "64mb" # Update for your cameras based on requirements
    devices:
      - /dev/bus/usb:/dev/bus/usb # USB Coral, modify for other hardware
      - /dev/apex_0:/dev/apex_0 # PCIe Coral, modify based on your setup
      - /dev/video11:/dev/video11 # For Raspberry Pi 4B
      - /dev/dri/renderD128:/dev/dri/renderD128 # Intel hwaccel, update for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config:/config
      - /mediapool/camerafootage/media/frigate:/media/frigate # Media directory moved to ZFS pool
      - /mediapool/camerafootage/data/db:/data/db # Database directory moved to ZFS pool
      - type: tmpfs # Optional: Reduces SSD wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971"
      - "5000:5000" # Internal unauthenticated access. Be careful with exposure.
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over TCP
      - "8555:8555/udp" # WebRTC over UDP
    environment:
      FRIGATE_RTSP_PASSWORD: "password"</pre>
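After editing <code>docker-compose.yml</code>, recreate the container so the new volume mappings take effect. A sketch, assuming you deployed frigate with docker compose as in the earlier section and are running this from the directory containing the compose file (prefix with <code>sudo</code>, or use <code>docker-compose</code>, depending on how Docker was installed):

<pre># Recreate the frigate container with the new volume mappings
docker compose down
docker compose up -d

# Confirm it came back up
docker compose ps</pre>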
- "8554:8554" # RTSP feeds - "8555:8555/tcp" # WebRTC over TCP - "8555:8555/udp" # WebRTC over UDP environment: FRIGATE_RTSP_PASSWORD: "password"</pre> <span id="step-6-setting-up-samba-to-share-zfs-pool-directories"></span> == Step 6: Setting Up Samba to Share ZFS Pool Directories == <span id="installing-samba"></span> ==== 6.1 Installing Samba ==== First, let’s install Samba and its utilities: <pre># Update package list sudo apt update # Install Samba packages sudo apt install samba samba-common-bin -y</pre> <span id="backup-original-samba-config"></span> ==== 6.2 Backup Original Samba Config ==== Always backup before making changes: <pre>sudo cp /etc/samba/smb.conf /etc/samba/smb.conf.backup</pre> <span id="configure-samba-share"></span> ==== 6.3 Configure Samba Share ==== Create a new Samba configuration: <pre># Clear existing config (but keep our backup) sudo bash -c 'echo "" > /etc/samba/smb.conf' # Edit the config file sudo nano /etc/samba/smb.conf</pre> Add this configuration to <code>smb.conf</code>, and change the <code>realm</code> to the domain you chose in <code>pfsense</code> under <code>system ---> general setup</code> <pre>[global] # Network settings workgroup = HOME realm = home.arpa netbios name = happycloud server string = ZFS Archive Server dns proxy = no # Security settings security = user map to guest = bad user server signing = auto client signing = auto # Logging log level = 1 log file = /var/log/samba/%m.log max log size = 1000 # Performance optimization socket options = TCP_NODELAY IPTOS_LOWDELAY read raw = yes write raw = yes use sendfile = yes min receivefile size = 16384 aio read size = 16384 aio write size = 16384 # Multichannel support server multi channel support = yes # Disable unused services load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes # Character/Unix settings unix charset = UTF-8 dos charset = CP932 [archive] comment = ZFS Archive Share path = /mediapool/archive valid users = louis invalid users = root browseable = yes read only = no writable = yes create mask = 0644 force create mode = 0644 directory mask = 0755 force directory mode = 0755 force user = louis force group = louis veto files = /._*/.DS_Store/.Thumbs.db/.Trashes/ delete veto files = yes follow symlinks = yes wide links = yes ea support = yes inherit acls = yes hide unreadable = yes</pre> <span id="verify-samba-configuration"></span> ==== 6.4 Verify Samba Configuration ==== Check if your config is valid: <pre>testparm</pre> <span id="create-samba-user"></span> ==== 6.5 Create Samba User ==== Add your GNU/Linux user to Samba and set a password: <pre># Add Samba password for user 'louis' sudo smbpasswd -a louis # Enable the user sudo smbpasswd -e louis</pre> <span id="start-and-enable-samba"></span> ==== 6.6 Start and Enable Samba ==== <pre># Restart Samba services sudo systemctl restart smbd sudo systemctl restart nmbd # Enable them to start at boot sudo systemctl enable smbd sudo systemctl enable nmbd</pre> <span id="step-7-connecting-to-your-samba-share"></span> == Step 7: Connecting to your Samba Share == What’s the point of this if we can’t access it from other systems? 
<span id="windows-systems"></span> ==== Windows Systems ==== Connect using one of the following in the address bar of Windows Explorer: * <code>\\happycloud.home.arpa\archive</code> <span id="linux-systems"></span> ==== GNU/Linux Systems ==== Connect in a file manager like Thunar (my personal favorite) by putting this in the address bar: * <code>smb://happycloud.home.arpa/archive</code> '''File Manager Navigation:''' # Press <code>Ctrl+L</code> to open location bar # Enter the SMB URL # Enter credentials when prompted <span id="macos-systems"></span> ==== macOS Systems ==== Connect using Finder by selecting <code>Go</code> > <code>Connect to Server</code> and entering the SMB URL. Connect using: * <code>smb://happycloud.home.arpa/archive</code> '''Finder Navigation:''' # Press <code>Cmd+K</code> # Enter the SMB URL # Click ‘Connect’ # Enter credentials when prompted <span id="mounting-from-command-line-linux"></span> ==== Mounting from Command Line (GNU/Linux) ==== If you want the share to show up as if it were just another directory on your system, you could do this: <pre># Create mount point mkdir -p ~/archive # Mount by entering credentials when prompted sudo mount -t cifs //happycloud.home.arpa/archive ~/archive -o username=louis,uid=1000,gid=1000,vers=3.1.1,seal # Check that the `testfile` we made earlier shows up here. If you see the following, congratulations, you did not mess it up!! [louis@studiobauer ~]$ ls -la ~/archive total 13 drwxr-xr-x 2 louis louis 0 Oct 23 18:11 . drwx------ 48 louis louis 12288 Oct 23 18:14 .. -rwxr-xr-x 1 louis louis 0 Oct 23 18:11 testfile</pre> <blockquote>'''HINT''': If you can’t connect via VPN, try from local network first. If that works, then troubleshoot VPN/remote access issues afterwards. </blockquote> <span id="security-notes"></span> == Security Notes == # The share is only accessible to authenticated users # Files created will be owned by ‘louis’ # The VMBackups directory remains inaccessible (root only) # Password is stored separately from system password # All traffic is unencrypted - use VPN for remote access! Now you should be able to access your ZFS pool’s archive directory from any device on your network, with proper authentication as user ‘louis’. <span id="step-7-backing-up-virtual-machines"></span>