I have SSHFS set up against my server and would like it mounted automatically, storing all of the documents, desktop, downloads, etc. for a couple of computers. I can get everything working except mounting on startup. The server is Debian 12 and both clients are Tumbleweed. Nothing in fstab seems to work: when I add x-systemd.automount, at best programs that try to use the mount crash, and at worst I have to go through recovery mode to get the system to boot properly. I am using ed25519 keys with no passphrases for authentication. Does anyone know how I could get this to work?
Using systemd is way better than fstab even though it requires way more work. Systemd can automatically mount it when the server is available.
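For reference, a minimal sketch of what those units can look like (server address, paths, and key location are placeholders, not taken from the thread). systemd derives unit names from the mount point, so /mnt/documents needs mnt-documents.mount plus a matching .automount:

# /etc/systemd/system/mnt-documents.mount
[Unit]
Description=SSHFS mount from the file server
After=network-online.target
Wants=network-online.target

[Mount]
What=user@192.168.1.10:/srv/documents
Where=/mnt/documents
Type=fuse.sshfs
# allow_other needs user_allow_other uncommented in /etc/fuse.conf
Options=_netdev,IdentityFile=/root/.ssh/id_ed25519,allow_other,reconnect,ServerAliveInterval=15
TimeoutSec=30

# /etc/systemd/system/mnt-documents.automount
[Unit]
Description=Automount for /mnt/documents

[Automount]
Where=/mnt/documents
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target

Enable only the automount (systemctl enable --now mnt-documents.automount); the actual mount then happens on first access, so an unreachable server stalls whatever touches the directory instead of the whole boot.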
Also why are you using sshfs? Wouldn’t it be simpler to go the Samba route?
Autofs works with sshfs. I use it for mounting anything over a network. It automounts on demand and disconnects a mount after a period of inactivity.
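A minimal sketch of that setup, with made-up server address, paths, and key file: one line in the master map delegates a directory to an sshfs map, and in the map entry the # and : of the sshfs location have to be escaped:

# /etc/auto.master
/mnt/remote  /etc/auto.sshfs  --timeout=300

# /etc/auto.sshfs
# allow_other needs user_allow_other enabled in /etc/fuse.conf
documents  -fstype=fuse,rw,allow_other,IdentityFile=/root/.ssh/id_ed25519  :sshfs\#user@192.168.1.10\:/srv/documents

Accessing /mnt/remote/documents then triggers the mount, and autofs unmounts it again after 300 seconds of inactivity.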
Plus one for autofs; it works so well that I often forget that certain files are actually remote resources.
Write a systemd service; that's how my computers mount my NAS. You just need to have it run under your user instead of root, or point it towards the right keys manually.
And consider adding a timeout, or else all your devices will take an additional 2 minutes to boot if the server is offline and the mount fails.
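A rough sketch of such a user service, including a short start timeout (NAS address, paths, and key file are invented for the example):

# ~/.config/systemd/user/nas-sshfs.service
[Unit]
Description=Mount the NAS over SSHFS

[Service]
# sshfs daemonizes by default, hence Type=forking; %h expands to your home directory
Type=forking
ExecStart=/usr/bin/sshfs -o IdentityFile=%h/.ssh/id_ed25519,reconnect user@192.168.1.10:/srv/data %h/nas
ExecStop=/usr/bin/fusermount3 -u %h/nas
# fail fast instead of stalling the login session for minutes when the server is down
TimeoutStartSec=15
# retry a little later in case the network wasn't up yet at login
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target

Enable it with systemctl --user enable --now nas-sshfs.service; because it runs as your user, it picks up your own ~/.ssh keys without any extra pointing.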
I just have them check for my Wi-Fi. If that's what they're connected to, then the NAS is available; if not, there's no point trying to mount it. However, one day every month my NAS pulls updates and shuts down afterwards, so I need to boot it again when I get home from work. But my laptop boots just as fast as it normally does, even though it fails to connect to the NAS.
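A sketch of that kind of guard as a wrapper script (the SSID, host, and paths are invented; nmcli assumes NetworkManager, which Tumbleweed desktops use by default):

#!/bin/sh
# bail out quietly unless the active Wi-Fi network is the home one
ssid=$(nmcli -t -f active,ssid dev wifi | awk -F: '$1 == "yes" {print $2}')
[ "$ssid" = "HomeNet" ] || exit 0
# right network, so the NAS should be reachable: attempt the mount
exec sshfs -o IdentityFile="$HOME/.ssh/id_ed25519",reconnect user@192.168.1.10:/srv/data "$HOME/nas"

Pointing a service's ExecStart at a script like this turns the "wrong network" case into a fast, clean no-op instead of a two-minute timeout.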
You can add it to your fstab; then it will be mounted on boot. I think the ArchWiki has some guidance on this. However, the autofs solution sounds better: if you mount the sshfs directly via fstab and then boot the device without a connection to the server, it will hang during boot for a while as it tries to connect.
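For reference, the fstab entry the ArchWiki describes looks roughly like this (user, host, and paths are placeholders). noauto plus x-systemd.automount defers the mount until first access, and _netdev tells systemd the mount needs the network first:

user@192.168.1.10:/srv/documents  /mnt/documents  fuse.sshfs  noauto,x-systemd.automount,x-systemd.idle-timeout=300,_netdev,IdentityFile=/root/.ssh/id_ed25519,allow_other,reconnect  0  0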
store all of the documents, desktop, downloads, etc. on a couple computers
Why use SSHFS for that? I recommend Syncthing; it's great for synchronizing stuff across multiple PCs (local and remote).
Why use SSHFS for that?
So that you don’t have copies of files everywhere.
What do you mean “everywhere”?
Everywhere you want to use the files.
I don’t really know what that is. I could try it though.
Edit: I don't really like having the files on all computers; I would rather have them all in one central place that everything can access.
I'm on OpenRC, so I can't say anything about systemd, but I have several SSHFS mounts (non-auto) listed in my fstab:

sshfs#root@192.168.0.123:/random-folder/ /mnt/random-folder fuse noauto,uid=1000,gid=100,allow_other 0 0

Is that similar to what you've tried in your fstab? I'd assume replacing noauto with auto should just work, but then again, I haven't tried it (and rebooting my system right now would be very inconvenient, sorry). It also might require you to either use password-based login and specify the password, or store the SSH keys in the .ssh directory of the user doing the mount (should be root with auto set).

I already have SSH keys set up, but auto doesn't work. I think fstab mounts things before the network is up.
I do this already using a systemd service (well, technically an OpenRC script, but I doubt you're using OpenRC; systemd is equivalent).
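If anyone does want the OpenRC flavor, it can be as small as a local.d hook (host and paths reused from the fstab example above):

#!/bin/sh
# /etc/local.d/sshfs-mounts.start -- executed by OpenRC's "local" service late in boot; mark it executable
sshfs -o IdentityFile=/root/.ssh/id_ed25519,allow_other,reconnect user@192.168.0.123:/random-folder /mnt/random-folder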
I run Debian 12 and ran into similar issues trying to automount NFS, even down to having to use an alternative console to log in and undo fstab edits.
My solution was simple, so hopefully it helps you as well. In fstab, backslashes don't escape spaces. Since my directory path had a space in it, that broke fstab:

Path/to/your/Video\ Files   (breaks fstab)
Path/to/your/VideoFiles     (works)
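If you do need to keep the space, the fstab(5) man page says to encode it as the octal escape \040 instead:

Path/to/your/Video\040Files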