= Understanding the basics of Docker =

<span id="feel-free-to-skip-this-section-scroll-down-to-configuring-our-servers-networking-for-virtual-machines-section"></span>
== FEEL FREE TO SKIP THIS SECTION & SCROLL DOWN TO ''“Configuring Our Server’s Networking for Virtual Machines”'' section ==

'''You do not need to read this section to install the software in this guide. You can simply copy & paste the commands as I provide them to you, or follow the documentation from the program’s developers. This section is not required reading; it is here to help you understand the ''how'' and the ''why'' behind the installation methods for the programs we’re installing, so you learn as you go - ''if you’re interested.'' If not, skip ahead to ''“Configuring Our Server’s Networking for Virtual Machines”'' '''

We are going to use docker to install a program called '''mailcow'''. Before getting started installing mailcow, I want to go over what docker is & how it works. '''You do not need to be a genius linux sysadmin capable of creating your own docker containers & setups to use it, but you should have some clue what it is or what happens when you type <code>docker compose up</code> to run something!'''

Docker massively changed how sysadmins run & deploy software. It’s the engine behind many modern self-hosted solutions like ''Mailcow'', ''Immich'', ''Bitwarden'', ''Frigate'', & ''OnlyOffice''. It gets rid of one of the single largest pain points of releasing ''(or using, or installing)'' software on Linux: dependencies.

Before getting into what Docker is, let’s go over dependency hell.

[http://www.mandrake.tips.4.free.fr/review2006.html <gallery mode="packed-hover" heights=250 widths=400 perrow=2> File:image-20241202004656889.png </gallery>]

<span id="what-are-dependencies-and-why-do-they-cause-problems"></span>
== What Are Dependencies and Why Do They Cause Problems? ==

<span id="understanding-dependencies"></span>
=== Understanding Dependencies ===

A '''dependency''' is other software - a library, framework, or program - that has to be installed for the program you are installing to work. Let’s say you’re installing a web application written in PHP; it might need a specific PHP module or a specific version of PHP.

* If you don’t have that version of PHP installed, the application won’t work.
* If you don’t have PHP installed at all, the application won’t work.
* If you want to use an application that requires a different version of PHP on the same machine… and so on & so forth.

<div class="figure">
<gallery mode="packed-hover" heights=250 widths=400 perrow=2>
File:800px-Netherlandwarf.jpg
</gallery>
</div>

<span id="the-dependency-hell-of-the-1990s"></span>
=== The Dependency Hell of the 1990s ===

Before modern package managers like <code>apt</code>, used by Debian (and, 6+ years later, Ubuntu), or <code>emerge</code> (Gentoo), installing software on GNU/Linux would require '''manually finding & installing specific dependencies.''' Here’s what this hell was like (sketched in commands below the list):

# You downloaded a <code>.tar.gz</code> file that was the source code of the program you wanted to install, called <code>rabbitholetohell</code>.
# You ran <code>./configure</code> & it told you you’re missing <code>libshit</code>.
# You found <code>libshit</code>, downloaded it, and discovered ''it'' required <code>libpiss</code>.
# You found <code>libpiss</code> but learned that <code>libpiss</code> needed version 1.2 of <code>libpuke</code> and your computer had version 1.3 of <code>libpuke</code> installed.
# Downgrading from version 1.3 of <code>libpuke</code> to version 1.2 of <code>libpuke</code> breaks your entire system.
# User throws keyboard at wall, switches back to Windows, and says forget GNU/Linux for life.
# If the user is a sysadmin, they curse and figure out how to make it work because this is their job, wasting tons of time in the process.
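For anyone who never lived through it, the routine above looked roughly like this at the command line (a sketch only - the package names are the joke ones from the list, and the error text is illustrative):

<pre># the 1990s routine, sketched
tar -xzf rabbitholetohell-1.0.tar.gz
cd rabbitholetohell-1.0
./configure              # dies with something like: "error: libshit not found"
# ...so you go hunt down libshit, whose ./configure demands libpiss,
# whose ./configure demands libpuke 1.2 while your system ships 1.3...
make
sudo make install</pre>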
This was called '''dependency hell''', where each dependency needed more dependencies. It’s what Eli the Computer Guy would correctly call the [https://www.youtube.com/watch?v=I-N_iQC1Uhk rabbit hole to hell].

Tools like <code>apt</code> came along in the late 90s. Instead of dependency hell, you typed <code>apt install rabbitholetohell -y</code> & it just installed <code>rabbitholetohell</code>. It installed all the dependencies, & their dependencies, and it installed the right ones. It was beautiful…

Yet, even with tools like <code>apt</code> to make installs simpler, problems came up if multiple applications needed '''different versions''' of the same dependency. For example:

* '''PHP Example:''' Suppose you wanted to run two applications:
** App 1 requires PHP 7.4.
** App 2 requires PHP 8.1.
** Your system can only have one version of PHP installed at a time, and switching between versions was a [https://www.youtube.com/watch?v=I-N_iQC1Uhk rabbit hole to hell].

<div class="figure">
<gallery mode="packed-hover" heights=250 widths=400 perrow=2>
File:800px-Netherlandwarf.jpg
</gallery>
</div>

<span id="why-this-is-a-nightmare-for-software-maintenance"></span>
=== Why This Is a Nightmare for Software Maintenance ===

Dependencies can become a serious problem over time:

# '''Conflicting Requirements:''' If program A needs <code>libshit</code> version 1.2 & program B needs <code>libshit</code> version 2.0, your system can break when one application upgrades.
# '''Complex Upgrades:''' Updating dependencies for one application can & will cause another application to stop working. This is called '''dependency breakage''', and it is another common cause of chasing rabbits all the way to hell.
# '''System Decay:''' Over time, manually managing dependencies can lead to a bloated, unstable system full of broken packages, outdated libraries, & leftover files.
# '''Version pinning misery:''' <code>apt</code> lets you install specific versions of packages, but managing version conflicts becomes time-wasting, dangerous, & difficult when dependencies span dozens of packages with intricate relationships.

As a newbie, you are likely going to break your system. As for experienced sysadmins… they still broke their systems….

<span id="how-docker-solves-this-mess"></span>
=== How docker solves this mess ===

Docker containers solve these problems by '''isolating dependencies for each application.''' Here’s how it works:

# '''Per-Application Environments:''' Each Docker container includes everything an application needs to run: the application code, the runtime, & all dependencies. These are packaged together in the Docker '''image'''.
#* Example: If one application needs PHP 7.4 and another needs PHP 8.1, you can run both simultaneously in separate containers without conflict, on the same computer (see the sketch after this list).
#* I am not talking about on separate virtual machines. I mean on the SAME HOST OPERATING SYSTEM. Two versions of PHP, or ten if you wanted, with no issues and no conflicts. No rabbit, & no hell :)
# '''Immutable (unchangeable):''' Docker images are immutable snapshots. Once built, the dependencies in an image don’t change, so the application runs consistently every time. It’s not like an operating system update where package A may not be updated but package B is, and package A depends on a specific version of package B, so everything breaks.
# '''No System-Wide Conflicts:''' Docker containers don’t mess with each other or with the host system. The PHP version inside the container for <code>nextcloud</code> doesn’t affect the PHP version on the host, or in the container for <code>magento</code>.
# '''Simple Upgrades:''' If you need to update an application, you just type <code>docker compose pull</code> when it’s not running & it just updates… seamlessly. If the update fails or the dev messed something up, you can go back to a previously installed image without messing up other applications.
# '''Portable:''' Docker makes sure that the program & its dependencies work the same way on ANY system, whether it’s your personal server, a cloud provider, or your friend’s gaming PC.
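To make that concrete, here is a minimal sketch you can try on any machine with Docker installed (it assumes nothing beyond access to Docker Hub, where the official <code>php</code> images provide both versions):

<pre># two different PHP versions on the same host, at the same time, no conflict
docker run --rm php:7.4-cli php -v
docker run --rm php:8.1-cli php -v</pre>

Each command pulls the image the first time it runs, prints that container’s PHP version, and throws the container away (<code>--rm</code>) when it exits. The host’s own PHP installation - if it even has one - is never touched.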
<span id="why-docker-has-exploded-in-popularity-for-small-open-source-projects"></span>
== Why docker has exploded in popularity for small open source projects ==

<span id="developers-get-less-complaints-from-users"></span>
=== Developers get fewer complaints from users ===

The biggest complaint by far that many open source projects get is ''I tried to install abc program & get xyz error.'' It is the bane of open source software developers’ existence, until they stop caring about their users entirely. This is often the only way to stay sane in a world where ''“users”'' (NOT ''“customers”''), who pay the developers $0, expect unlimited tech support & handholding as well as a one-year lesson in GNU/Linux systems administration so they can install a photo gallery. This sucks.

With Docker, for a developer to hand off a program running on their server to your server, the dev only has to provide you with the following:

# The Docker image of the application.
# The associated Docker Compose <code>docker-compose.yml</code> file.
# Instructions or files to set up storage & networking.
# If you want to copy over the data the service was saving that is unique to you, the docker volume.
# Tell you to edit xyz content in the <code>docker-compose.yml</code> file so the software is set up for your specific needs.
# Tell you to type <code>docker compose pull</code> & <code>docker compose up -d</code>.
# Never hear complaints from you again.

The Docker image contains the program & its environment, which makes sure the software runs the same on their server as it does on yours.

'''AKA, the developers can provide me a COMPLETELY IDIOTPROOF copy of their software that is so easy to install even I can’t screw it up. Once they get it to install on THEIR system - they know it’ll install on mine.'''

The <code>docker-compose.yml</code> file explains to docker & your computer how to “deploy” the program & has details about Docker networks (e.g., how the containers communicate) & Docker volumes (for storing data that persists outside the container).
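To give you a feel for what that handoff looks like, here is a minimal ''hypothetical'' <code>docker-compose.yml</code> - the app name, image, port, & paths below are made up for illustration and are not a real project:

<pre>services:
  photogallery:                        # hypothetical app name
    image: example/photogallery:1.2    # hypothetical image published by the developer
    restart: unless-stopped
    ports:
      - "8080:80"                      # host port 8080 -> port 80 inside the container
    volumes:
      - ./data:/var/lib/photogallery   # the app's data, stored next to this file
    environment:
      - TZ=America/New_York            # the kind of "edit xyz" setting the dev points you at</pre>

With a file like that saved in a folder, the entire “installation” is <code>docker compose pull</code> followed by <code>docker compose up -d</code> from inside that folder.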
<span id="docker-makes-what-used-to-be-miserable-very-easy"></span>
=== Docker makes what used to be miserable very easy ===

* You can run '''Mailcow''' (which uses PHP 7.4 for its web interface) alongside '''OnlyOffice''' (which needs PHP 8.1) on the '''same server without issues.'''
* When setting up something like '''Immich''', you don’t need to worry about Node.js versions conflicting with other apps. The devs use Docker to bundle the correct version for you. You don’t have to RTFM to figure out the right version of <code>libshit</code> to install anymore - the developer does that once, and then it’s set for all of their users.
* If a new version of '''Bitwarden''' requires updated dependencies, you update the Docker container, leaving the rest of your system untouched.

Docker turns what used to be a nightmare into a manageable, predictable process that isn’t miserable.

<span id="how-docker-works"></span>
== 1. How Docker Works ==

Docker simplifies running software by packaging everything the software needs into one neat bundle. It does this using '''containers''', which are lightweight, standalone environments that share the host system’s resources but remain isolated.

This is like a virtual machine, but without the baggage of needing its own operating system. Docker containers run on a shared kernel, making them much faster and lighter. If you ever enter a docker container, you will notice that almost no programs or commands are available besides the ''bare minimum'' necessary to do the job. See below:

<pre>louis@ultimatebauer:~$ docker exec -it frigate bash
root@174eb3845d50:/opt/frigate# nano file.log
bash: nano: command not found
root@174eb3845d50:/opt/frigate# vi file.log
bash: vi: command not found
root@174eb3845d50:/opt/frigate# vim file.log
bash: vim: command not found
root@174eb3845d50:/opt/frigate# emacs file.log
bash: emacs: command not found
root@174eb3845d50:/opt/frigate# ip addr show
bash: ip: command not found
root@174eb3845d50:/opt/frigate# you really don't have shit in here besides exactly what you need to run the application, do you? run nano you prick!</pre>

<blockquote>root@174eb3845d50:/opt/frigate '''I’m afraid I can’t do that, dave'''</blockquote>
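You can see the “shared kernel” point for yourself. This sketch assumes Docker is installed & can pull the tiny official <code>alpine</code> image:

<pre># both commands print the same kernel version, because the container is not
# running its own operating system kernel - it is borrowing the host's
uname -r
docker run --rm alpine uname -r</pre>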
<span id="what-are-docker-images"></span>
== 2. What Are Docker Images? ==

A '''Docker image''' is a blueprint for how to install the program. It has the instructions, files, & dependencies necessary to create a working environment for a piece of software. Think of it like a frozen dinner, if frozen dinners weren’t poisonous to your health. Everything you need is pre-packaged, & all you have to do is microwave it ''(or, in this case, “run” the image; please don’t try to microwave a GNU/Linux computer, as tempting as it might be when it doesn’t work)'' to get the app running.

* Example: A <code>Nextcloud</code> Docker image includes the Nextcloud app, its web server, and everything else it needs to limp. I won’t use the word ''“run”'' to describe nextcloud…

<span id="what-are-docker-containers"></span>
== 3. What Are Docker Containers? ==

A '''Docker container''' is a running instance of a Docker image. Using the frozen dinner analogy: if the image is a boxed meal in the freezer, a container is that meal served hot & ready to eat. You can run many containers from the same image, just like you could cook multiple identical dinners from the same recipe.

For instance, mailcow is not a mail ''“program”'' so much as it is an amalgamation of a bunch of programs necessary to run a mailserver. On my mailserver, you can see a list of all the different containers that run for mailcow when I run <code>docker ps -a</code> (the full output is further down).

<span id="example-mailcow-container-guide"></span>
=== Example: mailcow container guide ===

<span id="mail-processing"></span>
==== Mail processing ====

* '''postfix''': The program that sends emails
* '''dovecot''': The program that receives emails & stores & categorizes them by user, inbox, email address, folder, etc.
* '''rspamd''': anti-spam controls
* '''clamd''': scans attachments for viruses

<span id="web-interface"></span>
==== Web & Interface ====

* '''sogo''': webmail dashboard for checking email/calendar/contacts in the browser
* '''phpfpm''': runs the PHP code behind the web interface

<span id="security-monitoring"></span>
==== security & monitoring ====

* '''watchdog''': The health monitor
* '''acme''': Handles SSL certificates
* '''netfilter''': Blocks bad actors
* '''unbound''': a local DNS resolver that helps route messages correctly

<span id="helper-services"></span>
==== Helper Services ====

* '''solr''': Makes searching through your email faster
* '''olefy''': Scans office-document attachments (it works with rspamd)
* '''dockerapi''': Lets mailcow’s web interface talk to Docker itself

Think of Docker containers like having separate tiny computers inside your main computer that are barebones and only include the minimum necessary for each function to work. They each work independently of each other to minimize the likelihood of something screwing up, while also allowing you the ability to experiment without destroying your entire system.

Containers are not '''persistent.''' This means what happens in the container stays in the container. Once the container is removed & recreated (which is what happens when you update it), any changes to files you have made inside it are ''GONE''. '''PERSISTENT''' storage occurs in docker ''volumes''.

Each container has its own:

* Space to run programs
* Network connection
* File storage
* Settings
* Installed programs

Unlike full virtual machines (which are like having complete separate computers), containers share the main operating system’s foundation ''(the host operating system’s kernel)'', making them much lighter and faster to start up.

For example, in mailcow:

* The postfix container only knows about sending/receiving mail
* The rspamd container is only for filtering junk
* The clamd container is only there to scan for viruses

They can’t interfere with each other, but they can communicate through specific “doorways” (network ports) when needed. If something goes wrong with one container, it doesn’t affect the others - just like one apartment’s plumbing problem doesn’t affect the other apartments (hopefully). If you need to upgrade or fix something, you can work on one container without messing with everything else.
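Before looking at the real <code>docker ps -a</code> output from my mailserver below, here is the “many containers from one image” idea from section 3 in miniature (a sketch assuming Docker can pull the official <code>nginx</code> image; the container names are made up):

<pre># two containers cooked from the same frozen dinner (image)
docker run -d --name dinner1 nginx
docker run -d --name dinner2 nginx
docker ps                        # both show up, running independently
docker rm -f dinner1 dinner2     # clean up when you're done looking</pre>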
<pre>louis@mailserver:~$ docker ps -a
CONTAINER ID   IMAGE                    COMMAND                  CREATED       STATUS                  PORTS                                                        NAMES
aca88eab00b0   mailcow/watchdog:2.05    "/watchdog.sh"           11 days ago   Up 24 hours                                                                          mailcowdockerized-watchdog-mailcow-1
012debb1f557   mailcow/acme:1.90        "/sbin/tini -g -- /s…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-acme-mailcow-1
d33aa2bb976b   nginx:mainline-alpine    "/docker-entrypoint.…"   11 days ago   Up 24 hours             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   mailcowdockerized-nginx-mailcow-1
7bc85825c0b1   mailcow/rspamd:1.98      "/docker-entrypoint.…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-rspamd-mailcow-1
958d3ba45877   mcuadros/ofelia:latest   "/usr/bin/ofelia dae…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-ofelia-mailcow-1
a99f82d2b36a   mailcow/phpfpm:1.91.1    "/docker-entrypoint.…"   11 days ago   Up 24 hours             9000/tcp                                                     mailcowdockerized-php-fpm-mailcow-1
b8c6df6a7303   mailcow/dovecot:2.2      "/docker-entrypoint.…"   11 days ago   Up 24 hours             0.0.0.0:110->110/tcp, :::110->110/tcp, 0.0.0.0:143->143/tcp, :::143->143/tcp, 0.0.0.0:993->993/tcp, :::993->993/tcp, 0.0.0.0:995->995/tcp, :::995->995/tcp, 0.0.0.0:4190->4190/tcp, :::4190->4190/tcp, 127.0.0.1:19991->12345/tcp   mailcowdockerized-dovecot-mailcow-1
e3b09c799a7c   mailcow/postfix:1.77     "/docker-entrypoint.…"   11 days ago   Up 24 hours             0.0.0.0:25->25/tcp, :::25->25/tcp, 0.0.0.0:465->465/tcp, :::465->465/tcp, 0.0.0.0:587->587/tcp, :::587->587/tcp, 588/tcp   mailcowdockerized-postfix-mailcow-1
faece81357e3   mailcow/solr:1.8.3       "docker-entrypoint.s…"   11 days ago   Up 24 hours             127.0.0.1:18983->8983/tcp                                    mailcowdockerized-solr-mailcow-1
76c9f63fa50d   mariadb:10.5             "docker-entrypoint.s…"   11 days ago   Up 24 hours             127.0.0.1:13306->3306/tcp                                    mailcowdockerized-mysql-mailcow-1
930a7e0acff6   redis:7-alpine           "docker-entrypoint.s…"   11 days ago   Up 24 hours             127.0.0.1:7654->6379/tcp                                     mailcowdockerized-redis-mailcow-1
8bbcbe5ebefb   mailcow/clamd:1.66       "/sbin/tini -g -- /c…"   11 days ago   Up 24 hours (healthy)                                                                mailcowdockerized-clamd-mailcow-1
9070a5ba3fb0   mailcow/olefy:1.13       "python3 -u /app/ole…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-olefy-mailcow-1
893f2ff1f952   mailcow/dockerapi:2.09   "/bin/sh /app/docker…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-dockerapi-mailcow-1
6781988f3409   mailcow/sogo:1.127.1     "/docker-entrypoint.…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-sogo-mailcow-1
464ca438b4c2   mailcow/unbound:1.23     "/docker-entrypoint.…"   11 days ago   Up 24 hours (healthy)   53/tcp, 53/udp                                               mailcowdockerized-unbound-mailcow-1
373c1b7c5741   mailcow/netfilter:1.59   "/bin/sh -c /app/doc…"   11 days ago   Up 24 hours                                                                          mailcowdockerized-netfilter-mailcow-1
6931fc976572   memcached:alpine         "docker-entrypoint.s…"   11 days ago   Up 24 hours             11211/tcp                                                    mailcowdockerized-memcached-mailcow-1
louis@mailserver:~$ </pre>

<span id="what-are-docker-networks"></span>
== 4. What Are Docker Networks? ==

Docker allows containers to communicate with each other & the outside world using '''networks'''. By default, the containers can access the internet. Custom networks allow you to connect certain containers while keeping them separate from others. For instance, in '''mailcow''' docker networks make sure the mail server can talk to the database container securely without exposing the database to the entire internet.
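If you want to poke at this on a running mailcow box, two commands show it (the network name below is what mailcow’s compose setup typically creates; run <code>docker network ls</code> first to confirm the exact name on your system):

<pre># list the networks docker has created
docker network ls
# show which containers are attached to mailcow's network & their internal IPs
docker network inspect mailcowdockerized_mailcow-network</pre>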
<span id="what-are-docker-volumes"></span>
== 5. What Are Docker Volumes? ==

A '''Docker volume''' is where data generated by a container is stored. Think of a docker container like a computer booting up from a read-only floppy disk: whatever you did in your programs is gone the second you reboot the computer. The docker volume is the second disk in that computer, the one you ''can'' write to, so that you can save things.

Containers are where programs run (postfix, dovecot), and volumes are where things are stored (emails, pictures, videos, etc.). Volumes make sure that important data persists even if the container is removed or restarted.

<span id="volume-examples-with-different-programs"></span>
=== Volume examples with different programs: ===

The <code>docker-compose.yml</code> file is what tells docker how to set everything up. In frigate, we are not creating docker volumes. Rather, we tell docker to map a directory on the host computer into the docker container. Look here:

<span id="docker-program-that-does-not-use-docker-volumes"></span>
==== docker program that does not use docker volumes ====

In this file, for the container '''“frigate”''' (specified on line 4 by ''container_name''), we do not have any docker volumes. Under <code>services</code> we specify our containers; there are no docker volumes specified here. We have told the system that whatever is in <code>/home/louis/Downloads/programs/frigate/config</code> on the host system should show up inside the <code>frigate</code> container at the directory <code>/config</code>. Without this, the <code>config.yml</code> file within the <code>/home/louis/Downloads/programs/frigate/config</code> directory would not show up inside the container. Even if I logged into the container using <code>docker exec -it frigate bash</code> and created a <code>config.yml</code> file in <code>/config</code>, it would be gone when the container was recreated.

<pre>version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "2048mb" # update for your cameras based on calculation above
    devices:
      - /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
      - /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/doc>
      - /dev/video11:/dev/video11 # For Raspberry Pi 4B
      - /dev/dri/renderD128:/dev/dri/renderD128 # For intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /home/louis/Downloads/programs/frigate/config:/config
      - /drive1thru8/securitycam:/data/db
      - /drive1thru8/securitycam:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      - "8971:8971"
      - "5000:5000" # Internal unauthenticated access. Expose carefully.
      - "8554:8554" # RTSP feeds
      - "8555:8555/tcp" # WebRTC over tcp
      - "8555:8555/udp" # WebRTC over udp
    environment:
      FRIGATE_RTSP_PASSWORD: "password"</pre>
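A quick way to convince yourself the mapping works, using the paths from the file above (this assumes the frigate container from this compose file is running, and that basic tools like <code>cat</code> exist inside it, which they normally do):

<pre># create a file on the host, inside the mapped config directory...
echo "hello from the host" > /home/louis/Downloads/programs/frigate/config/test.txt
# ...and it is immediately visible inside the container at /config
docker exec frigate cat /config/test.txt</pre>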
<span id="docker-program-that-does-use-docker-volumes"></span>
==== docker program that DOES use docker volumes ====

Check out mailcow. This is not the full <code>docker-compose.yml</code> configuration file, just a part of it. Look at lines 24-27 below. For the container <code>mysql-mailcow</code>, we have two docker volumes (lines 25 & 26). The docker volume <code>mysql-vol-1</code> will show up inside the <code>mysql-mailcow</code> container ''(which is a tiny virtual computer that runs our programs - in this case, the mysql database; mysql databases usually contain data on users, configurations, product orders, etc.)''. Whatever is in the <code>mysql-vol-1</code> docker volume will show up inside the <code>mysql-mailcow</code> container at <code>/var/lib/mysql</code>. It is using a docker volume instead of the main computer/operating system’s file system to store its files.

However, on line 27, we have <code>- ./data/conf/mysql/:/etc/mysql/conf.d/:ro,Z</code>, which means that whatever is in the subfolder of our mailcow folder ''(where the <code>docker-compose.yml</code> file is that we used to install mailcow)'' under <code>data/conf/mysql/</code> will show up inside the docker container at <code>/etc/mysql/conf.d/</code>.

<pre>services:
  unbound-mailcow:
    image: mailcow/unbound:1.23
    environment:
      - TZ=${TZ}
      - SKIP_UNBOUND_HEALTHCHECK=${SKIP_UNBOUND_HEALTHCHECK:-n}
    volumes:
      - ./data/hooks/unbound:/hooks:Z
      - ./data/conf/unbound/unbound.conf:/etc/unbound/unbound.conf:ro,Z
    restart: always
    tty: true
    networks:
      mailcow-network:
        ipv4_address: ${IPV4_NETWORK:-172.22.1}.254
        aliases:
          - unbound

  mysql-mailcow:
    image: mariadb:10.5
    depends_on:
      - unbound-mailcow
      - netfilter-mailcow
    stop_grace_period: 45s
    volumes:
      - mysql-vol-1:/var/lib/mysql/
      - mysql-socket-vol-1:/var/run/mysqld/
      - ./data/conf/mysql/:/etc/mysql/conf.d/:ro,Z
    environment:
      - TZ=${TZ}
      - MYSQL_ROOT_PASSWORD=${DBROOT}
      - MYSQL_DATABASE=${DBNAME}
      - MYSQL_USER=${DBUSER}
      - MYSQL_PASSWORD=${DBPASS}
      - MYSQL_INITDB_SKIP_TZINFO=1
    restart: always
    ports:
      - "${SQL_PORT:-127.0.0.1:13306}:3306"
    networks:
      mailcow-network:
        aliases:
          - mysql</pre>

<span id="mailcow-docker-volume-descriptions"></span>
===== mailcow docker volume descriptions =====

Here are some docker volumes used for mailcow:

<pre>louis@mailserver:/opt/mailcow-dockerized$ docker volume ls
DRIVER    VOLUME NAME
local     mailcowdockerized_clamd-db-vol-1
local     mailcowdockerized_crypt-vol-1
local     mailcowdockerized_mysql-socket-vol-1
local     mailcowdockerized_mysql-vol-1
local     mailcowdockerized_postfix-vol-1
local     mailcowdockerized_redis-vol-1
local     mailcowdockerized_rspamd-vol-1
local     mailcowdockerized_sogo-userdata-backup-vol-1
local     mailcowdockerized_sogo-web-vol-1
local     mailcowdockerized_solr-vol-1
local     mailcowdockerized_vmail-index-vol-1
local     mailcowdockerized_vmail-vol-1</pre>

<span id="main-data-storage"></span>
====== main data storage ======

* <code>vmail-vol-1</code>: The emails & attachment files
* <code>mysql-vol-1</code>: Database stuff like user accounts/settings
* <code>redis-vol-1</code>: Temporary data for faster load times

<span id="email-processing"></span>
====== email processing ======

* <code>postfix-vol-1</code>: Mail server configuration & logs
* <code>rspamd-vol-1</code>: Spam filter rules & training data
* <code>clamd-db-vol-1</code>: Virus scanning database

<span id="webmail-user-data"></span>
====== webmail & user data ======

* <code>sogo-userdata-backup-vol-1</code>: Backups of user settings & data
* <code>sogo-web-vol-1</code>: Web interface files
* <code>vmail-index-vol-1</code>: Helps search through old email quickly

<span id="random-technical-volumes"></span>
====== random technical volumes ======

* <code>crypt-vol-1</code>: Encryption-related data
* <code>mysql-socket-vol-1</code>: Assists database communication
* <code>solr-vol-1</code>: Search engine data
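If you are curious where a named volume actually lives, or want to prove to yourself that data in a volume outlives the container that wrote it, here is a sketch (the <code>Mountpoint</code> path is Docker’s default location; <code>demo-vol</code> is a throwaway volume made up for this example):

<pre># where does a named volume live on the host?
docker volume inspect mailcowdockerized_vmail-vol-1
# look for "Mountpoint": /var/lib/docker/volumes/mailcowdockerized_vmail-vol-1/_data

# data written to a volume survives the container that wrote it
docker volume create demo-vol
docker run --rm -v demo-vol:/data alpine sh -c 'echo hello > /data/file.txt'
docker run --rm -v demo-vol:/data alpine cat /data/file.txt   # prints "hello"
docker volume rm demo-vol</pre>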
<span id="this-seems-like-a-lot"></span>
== This seems like a lot ==

If this is too much, realize this: 99% of installing programs that are packaged with docker means doing the following:

# Downloading a <code>docker-compose.yml</code> file.
# Running the command <code>docker compose pull</code> to grab the program.
# Running the command <code>docker compose up -d</code> to start the program.
# You’re done.
# If an idiot like me can do it, then so can you.

'''YOU DO NOT NEED TO BECOME AN EXPERT SYSTEMS ADMINISTRATOR OVERNIGHT.'''

The best way to learn is to try and understand things one part at a time. You do it like this:

# Set something up, have it work.
# Have no idea what you did.
# Mess around with it & enjoy it.
# Use the kick of dopamine from it working & enjoying it to get motivated.
# Read a piece of a config file just for the hell of it and see if it maps to anything in the program/what you’re doing.
# If it makes no sense, don’t worry about it; keep enjoying the program & increasing your stock of dopamine & happiness & satisfaction.
# Come back to it again later.
# Read a little bit.
# Read something on a forum/manual/guide that makes little sense to you, but maybe 1% more sense now than it did a week ago.
# Pat yourself on the back for understanding it, even though you think this is kindergarten level & you’re an idiot & everyone else knows way more than you.
# Enjoy the program more.
# Don’t crap on yourself because you don’t get everything.
# When bored sitting in a meeting you have no business wasting your time in, alt-tab over to your <code>docker-compose.yml</code> file.
# Google random parts & see what they do.
# Think about how that piece of software works. Google what the different words inside of the file do, what those programs are for, & how they relate to the program working as a whole.
# See if you understand 1% more now than before.
# Each percent you understand is not '''cumulative''' - it is '''''exponential!''''' Learning this stuff is a parabola: in the beginning, it is insanely slow. Once you get started & understand the foundation, learning increases at an exponential pace.
# You need to overcome that period where you feel like an imposter & a total idiot in order to get better.
# Realize that even complete experts know [https://wiki.futo.org/index.php/FUTO:General_disclaimer 0.0001%] of what there is to know about all of this, and usually specialize in one specific area, because to understand how everything works is damn near impossible.

<span id="configuring-our-servers-networking-for-virtual-machines"></span>