
  • I want to start by saying I'm not suggesting you use any of the products these companies offer - I'm just linking to their write-ups of the standard strategy, 3-2-1.

    https://www.backblaze.com/blog/the-3-2-1-backup-strategy/

    https://www.acronis.com/en/blog/posts/backup-rule/

    https://www.techtarget.com/searchdatabackup/definition/3-2-1-Backup-Strategy

    • 3 copies (original and two backups)
    • 2 forms of media
    • 1 copy off-site

    For me, I have two NAS boxes. One is prod; the other backs up anything I can't replace (or can't replace easily). I have a third at the home of a family member, which gets a weekly diff. I also back up an encrypted set to cloud storage I got some time ago. So I actually have 4 sets of data (1 prod + 3 backups) across two off-site locations. The media portion is treated differently today - it used to mean tape, DVD backups, whatever, but today I consider different devices and cloud storage to fit that bill, in which case I have an abundance of forms of storage media.
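    As a rough illustration of what I mean by the weekly diff, here's a minimal sketch - the hostname, paths, and schedule are all hypothetical, and rsync over SSH (with key auth already set up) is just one way to do it:

    # /etc/cron.d/offsite-backup (hypothetical) - weekly sync to the family member's NAS
    # rsync only transfers what changed since the last run
    0 2 * * 0  root  rsync -a --delete /srv/irreplaceable/ backup@family-nas:/volume1/offsite/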

    Mine goes a bit past what's needed for 3-2-1, which is appropriate for me. I consider 3-2-1 the minimum for any data considered critical or irreplaceable.

    For me, that includes home movies, family photos, financial records, etc. It does not include my rips of my DVD collection. It does include config files and backups of services I run though.

    The right backup strategy depends on your own concern about data. If I lost the photos/videos of my kids, I’d be devastated. If I lost the rips of VHS tapes my dad recorded, I’d be devastated.

    If I lost the ISO of drivers for some random esoteric piece of hardware, I'd be disappointed, but it's not a big deal.

    Prioritize your data: absolutely critical, important, preferred to keep, annoying but replaceable, and "who cares, I'll just download it again if I have to."

    Once you know how much you need to store for each of those, add a bit to plan ahead, then see what backup strategy fits as you move down the priority list, and go from there.




  • It definitely is, especially if you get a cluster going. FWIW, my media is all on a Synology NAS (well, technically two, but one is a backup) that I got used through work, so your setup isn't the wrong approach (IMO) by any stretch.

    What it comes down to is how you look at the connection - with a VM, it's a full-fledged system, all by its lonesome, that just happens to live inside another computer. A container, though, is an extension of the host, so think of it less like a VM and more like resource sharing, and you'll start to see where the different approaches have different advantages.

    For example, I have transcode nodes running on my Proxmox cluster. If I had JF as a VM, I'd need another GPU to do that - but since JF and my transcode node are both containers, they get to share that resource happily. What the right answer is will always depend on individual needs, though.
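    To make that concrete, here's a minimal sketch of the kind of passthrough involved - the container ID is hypothetical, and the exact device minor numbers depend on your GPU, so treat this as illustrative rather than copy-paste:

    # /etc/pve/lxc/101.conf (hypothetical ID) - hand the container the host's DRI devices
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

    Multiple containers can carry the same lines and share the one GPU - full passthrough to a VM, by contrast, hands the device to that VM exclusively.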

    And glad I could be of some help!


  • This may also be a good one for c/jellyfin, but what I'd see is whether you can leverage a backup tool. Export and download, then import, all from the web. I know there is a built-in backup function, and I recall a few plugins as well that handled backups.

    Seems to me that might be the most straightforward method - but again, probably better asked in a more Jellyfin-focused comm. I have moved that LXC around between a bunch of machines at this point, so snapshots and backups via Proxmox Backup Server are all I need.
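    For what it's worth, the backup side of that is a one-liner - a sketch, assuming container ID 101 and a PBS storage named pbs-store (both hypothetical):

    vzdump 101 --mode snapshot --storage pbs-store    # snapshot-mode backup of the LXC to PBS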





  • Ok, let's start with that rendering - seeing those devices is good! You should only need to add some group access, so run this:

    groups jellyfin    # list the groups the jellyfin service user belongs to
    

    The output should just say “jellyfin” right now. That's the user that's running the Jellyfin service. So let's go ahead and…

    usermod -a -G video,render jellyfin    # append the video and render groups
    groups jellyfin                        # confirm the new membership
    

    You should now see the jellyfin user as a member of jellyfin, video, and render. This gives the jellyfin user access to the GPU for hardware acceleration.

    Now restart that Jellyfin and try again!
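    If it's the standard package install, the restart is likely just this (assuming a systemd unit named jellyfin, which is what the official packages use):

    systemctl restart jellyfin    # group changes take effect when the service restarts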




  • Great!

    Transcoding we should be able to sort out pretty easily. How did you make the LXC? Was it manual, did you use one of the Proxmox community scripts, etc.?

    For transferring all your JF goodies over, there are a few ways you can do it.

    I believe you said you have a Synology, so if both are on the NAS, you can go to http://nasip:5000/ in the browser and just copy around what you want - assuming it's stored on the NAS as a mount and not inside the container. If it's inside the container only, it's going to be a bit trickier - something like mounting the host as a volume on the container, copying to that mount, then moving things around. Even Jellyfin says it's complex - https://jellyfin.org/docs/general/administration/migrate/ - so be aware that could be rough.
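    One possible shortcut if the old instance is in Docker: docker cp can pull files out without any volume juggling. A sketch, assuming the container is named jellyfin and the config lives at /config (both names are assumptions - adjust to your setup):

    docker cp jellyfin:/config ./jf-config-backup    # copy the config dir out of the container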

    The other option is to bring your Docker container over to the new VM, but then you've got a new complication in needing to pass through your GPU entirely, rather than giving the LXC access to the host's resources, which is much simpler IMO.


  • That usually means something has changed with the storage - I'd bet there's a lingering reference in the .conf to the old mount.

    The easiest? Just delete the container and start clean. That's what's nice about containers, by the way! The harder route would be mounting the filesystem of the container and taking a look at some logs. Which route do you want to go?
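    If you end up on the harder route, this is roughly the shape of it - the container ID and log path here are hypothetical:

    pct mount 101                                  # mount the stopped container's rootfs on the host
    less /var/lib/lxc/101/rootfs/var/log/syslog    # poke through logs inside it
    pct unmount 101                                # clean up when done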

    For the VM, it's really easy. Go to the VM and open up the console. If you're logging in as root, the commands work as-is; if you're logging in as a user, we'll need to add sudo in there (and maybe install some packages / add the user to the sudoers group). There's a consolidated sketch of the whole sequence after the list.

    1. Update your packages - apt update && apt upgrade
    2. Install the NFS tools - apt install nfs-common
    3. Create the directory where you're going to mount it - mkdir /mnt/NameYourMount
    4. Let's mount it to test - mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount
    5. List out the files and make sure it's working - ls -la /mnt/NameYourMount. If you have an issue here, pause and come back, and we'll see what's going on.
    6. If it looks good, let's make it permanent - nano /etc/fstab
    7. Add this line, edited as appropriate: 192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0
    8. Save and close - ctrl+x then y
    9. Reboot your VM, then log in again and ls -la /mnt/NameYourMount to confirm you're all set
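    And here's that same sequence consolidated, as promised - a sketch assuming a Debian-based VM, a root login, and the example IP/path from above:

    apt update && apt upgrade      # refresh package lists and upgrade
    apt install nfs-common         # NFS client tools
    mkdir -p /mnt/NameYourMount    # create the mount point
    mount -t nfs 192.168.1.100:/share/dir /mnt/NameYourMount    # test mount
    ls -la /mnt/NameYourMount      # sanity check the contents
    echo '192.168.1.100:/share/dir /mnt/NameYourMount nfs defaults,x-systemd.automount,x-systemd.requires=network-online.target 0 0' >> /etc/fstab
    mount -a                       # catch fstab typos now rather than at boot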


  • Ok, we can remove it as an SMB mount, but fair warning: it takes a few bits of CLI to do this thoroughly.

    • Shut down 101 and 102.
    • In the Web GUI, go to the JF container, go to Resources, and remove that mount point. Take note of where you mounted it! We're going to mount it back in the same spot.
    • Still in the Web GUI, go to Storage, select the SMB mount of the NAS, and select Edit - then uncheck Enable.
    • With it selected, go ahead and click Remove.
    • For both 101 and 102, let's make sure they aren't set to start at boot for now. Go to each of them, and under the Options section you'll see “Start at boot”. If it says Yes, change it to No (click Edit or double-click, and remove the check from the box).
    • Reboot your server.
    • Let's check that the mounting service is gone: go to the host, then Shell, and enter systemctl list-units "*.mount"
    • If you don't see mnt-pve-thenameofthatshareyoujustremoved.mount, it's removed.

    That said - I like to be sure, so let's do a few more things.

    • umount -R /mnt/pve/thatshare - totally fine if this throws an error.
    • Let's check the mounts file: cat /proc/mounts - a whooole bunch of stuff will pop up. Do you see your network share listed there? If so, note that /proc/mounts is a read-only view from the kernel, so you can't edit it directly - run umount against that path again, and check /etc/fstab for any lingering entry (nano /etc/fstab, remove the line if it's there, ctrl+x then y to save).

    Ok, you should be all clear. If you had to make any further changes, let's go ahead and reboot one more time just to clear everything out. If not, let's re-add.

    Go ahead and add the NAS back in using NFS in the Storage section, like you did previously. You can mount to that same directory you were using before. Once it's there, go back into the Shell, and let's do this again: ls -la /mnt/pve/thenameofyourmount/

    Is your data showing up? If so, great! If not, let's find out what's going on.

    Now let's add the container mount back. You'll need to add that mount point back in again with: pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media (adjusting the ID and paths to however you had it mounted before in that second step).
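    If you want to double-check it took, pct can show you the container config (the ID here matches the example above and is an assumption):

    pct config 100    # the mp0: line should show your new mount point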

    Now start the container and go to its console. ls -la /whereveryoumountedit - if it looks good, your JF container is all set and now working with NFS! Go back to the Options section and enable “Start at boot” if you'd like.

    On to the VM: what distribution is installed there? Debian, Fedora, etc.?


  • For the record, I prefer NFS.

    And now I think we may have the answer…

    OK, so that command is for LXCs, not for VMs. If you're doing a full VM, we'd mount NFS directly inside the VM.

    Did you make an LXC or a VM for 102?

    If it's an LXC, we can work out the command and figure out what's going on.

    If it's a VM, we'll get it mounted with the NFS utils, but how will depend on what distribution you've got running on there (different package names and package managers).
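    For reference, the client package goes by different names on the two most common families - anything else, just say what you've got:

    # Debian / Ubuntu
    apt install nfs-common
    # Fedora / RHEL family
    dnf install nfs-utils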