sudo’s Hall of Pain

  • @brokenlcd@feddit.it

    It’s more a general Linux user thing than a sysadmin thing. I switched from an eMMC computer with no drives named sdX to one with an NVMe drive and an HDD; I was used to my old PC, where sda was whatever USB stick I plugged in, and I used dd with sda1… I nuked 1 TB of data and I’m still running photorec to try and recover at least something; fml.

    • Atemu

      Am I the only one around here who does backups?

      • @brokenlcd@feddit.it

        Unfortunately, to do backups I need the money to buy a drive to put them on; most of my PCs are literally made out of scrap parts from broken machines.

        • Atemu

          I use scrapped drives for my cold backups; you can make it work.

          Though in case of extreme financial inability, I’d make an exception to the “no backup, no pity” rule ;)

          • @brokenlcd@feddit.it

            I’m trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren’t connected to anything are old 80 GB IDE drives, so they aren’t really practical for backing up 1 TB of data.

            For the most part I’ve prevented myself from making the same mistake again by adding a 1 GB swap partition at the beginning of the disk, so a wrong dd doesn’t immediately kill the data partition if I mess up again.
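
            In case it helps, a minimal sketch of that layout with parted (the by-id path is a placeholder; check yours with ls -l /dev/disk/by-id/ first):

                # Sacrificial 1 GB swap partition first, data partition after it.
                sudo parted /dev/disk/by-id/ata-EXAMPLE-DISK -- \
                    mklabel gpt \
                    mkpart sacrificial linux-swap 1MiB 1GiB \
                    mkpart data ext4 1GiB 100%
                sudo mkswap /dev/disk/by-id/ata-EXAMPLE-DISK-part1
                sudo mkfs.ext4 /dev/disk/by-id/ata-EXAMPLE-DISK-part2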

            • Atemu

              I’m trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren’t connected to anything are old 80 GB IDE drives, so they aren’t really practical for backing up 1 TB of data.

              It’s possible to make that work, through discipline and the right mechanisms.

              You’d need a dozen or so of them, but if you carve your data into <80 GB chunks, you can store each chunk on a separate scrap drive and thereby back up 1 TB of data.

              Individual files >80 GB are a bit trickier but can also be handled by splitting them into parts.
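
              For example, with GNU split (file names made up; note that an “80 GB” drive only holds about 74.5 GiB):

                  # Carve a big archive into chunks that each fit on a scrap drive.
                  split -b 74G big-archive.tar big-archive.part.
                  # Record checksums so a bad chunk is caught on restore.
                  sha256sum big-archive.part.* > big-archive.sha256
                  # Reassembly is just concatenation in name order:
                  cat big-archive.part.* > big-archive.tar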

              What such a system requires is rigorous documentation of where stuff is: an index. I use git-annex for this purpose, which comes with many mechanisms to aid this sort of setup, but it’s quite a beast in terms of complexity. You could do everything important it does manually, without unreasonable effort, through discipline.
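
              The manual version of such an index can be as simple as one listing per drive (labels and paths made up):

                  # After writing chunks to a drive, record its contents centrally.
                  find /mnt/scrap-drive-03 -type f -exec sha256sum {} + > ~/index/drive-03.sha256
                  # Later, locating a file is a grep away:
                  grep big-archive.part.aa ~/index/*.sha256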

              For the most part I’ve prevented myself from making the same mistake again by adding a 1 GB swap partition at the beginning of the disk, so a wrong dd doesn’t immediately kill the data partition if I mess up again.

              Another good practice is to attempt any changes on a test model first. You’d create a sparse test image (truncate -s 1TB disk.img), mount it via loopback, and apply the same partition and filesystem layout that your actual disk has. Then you attempt any planned changes on that loopback device and verify its filesystems still work afterwards.
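
              Concretely, the rehearsal could look like this (layout and paths are examples):

                  # Sparse 1 TB image; takes almost no real disk space.
                  truncate -s 1TB disk.img
                  # Attach it as a loop device, scanning for partitions (prints e.g. /dev/loop0).
                  sudo losetup --find --show --partscan disk.img
                  # Recreate your real layout, then the filesystems:
                  sudo parted /dev/loop0 -- mklabel gpt mkpart data ext4 1MiB 100%
                  sudo mkfs.ext4 /dev/loop0p1
                  # ...rehearse the risky commands against /dev/loop0 here...
                  # Verify the filesystem survived, then clean up:
                  sudo fsck.ext4 -f /dev/loop0p1
                  sudo losetup --detach /dev/loop0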

              • @Sethayy@sh.itjust.works

                Or mount them in RAID0/whatever the ZFS equivalent is.

                The downside compared to one disk is that many disks mean more possible points of failure, any of which can take out the whole array; so ideally a RAID level with redundancy on top would be best.
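
                For reference, the ZFS equivalent of RAID0 is a plain striped pool, and the redundant variant would be a mirror or raidz (device paths made up):

                    # Striped pool, RAID0-like: full combined capacity, no redundancy.
                    sudo zpool create scrap /dev/disk/by-id/ata-DISK-A /dev/disk/by-id/ata-DISK-B
                    # Mirrored pool: half the capacity, survives one disk dying.
                    sudo zpool create scrap mirror /dev/disk/by-id/ata-DISK-A /dev/disk/by-id/ata-DISK-B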

                • Atemu

                  That would require all of those disks to be connected at once, which is a logistical nightmare. It would already be hard with modern drives, but also consider that we’re talking IDE drives here; it’s hard enough to connect one of them to a modern system, let alone 12 simultaneously.

                  With an index, you also gain the ability to lose and restore partial data. With a RAID array it’s all or nothing, and you waste a bunch of space on being able to restore everything at once. Using an index, you can simply check which data was lost and prepare another copy of that data on a spare drive.

                  • @Sethayy@sh.itjust.works

                    I’m just talking prebuilt solutions here, but how would you use an indexed storage base if the drives weren’t connected? Sounds like that’s an issue regardless.

              • @brokenlcd@feddit.it

                The problem is that I didn’t mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.

                As for the hard drives, I’m already trying to do that; for bigger files I just break them up with split. I’m just waiting until I have enough disks to do that.

                • Atemu

                  The problem is that I didn’t mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.

                  For that issue, I recommend never using unstable device names and always using /dev/disk/by-id/.
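
                  For example (the ID below is made up; list yours first):

                      # Stable names that don’t depend on detection order:
                      ls -l /dev/disk/by-id/
                      # Writing an image to the stick by ID instead of /dev/sdX:
                      sudo dd if=image.iso of=/dev/disk/by-id/usb-Example_Flash_Drive-0:0 bs=4M status=progress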

                  As for the hard drives, I’m already trying to do that; for bigger files I just break them up with split. I’m just waiting until I have enough disks to do that.

                  I’d highly recommend starting to back up the most important data ASAP rather than waiting until you can back up all of it.
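
                  Even a single scrap drive plus rsync covers the essentials (paths made up):

                      sudo mount /dev/disk/by-id/ata-OLD-80GB-DISK-part1 /mnt/scrap
                      # -a preserves permissions and timestamps; re-runs only copy changes.
                      rsync -a --progress ~/Documents ~/Photos /mnt/scrap/
                      sync && sudo umount /mnt/scrap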

        • @billgamesh@lemmy.ml

          I’m in a similar boat, but I tend to mirror my important files across a lot of my drives. Also, whenever I move hard drives from computer to computer, I first look at the drive and copy everything I don’t wanna lose, just in case… Basically, I learned to be careful the hard way a few times lol

        • @Appoxo@lemmy.dbzer0.com

          You can buy (or get) cheap 1 TB SSDs or bigger 2 TB HDDs for under 100€ where I’m from.
          Pairing that with extreme compression from Veeam, not installing all programs in C:\ (or whatever the system directory is on Linux), and doing either volume- or file-level backups should give you plenty of space for those.

          • @Sethayy@sh.itjust.works

            On Linux it’s just /; from there you can mount other drives at whatever directory you want.
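
            E.g. (made-up device; any empty directory works as a mount point):

                sudo mkdir -p /backup
                sudo mount /dev/disk/by-id/ata-EXAMPLE-DISK-part1 /backup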

            But also, 100€ ain’t all that cheap for some of us.

            • @Appoxo@lemmy.dbzer0.com

              On Linux it’s just /; from there you can mount other drives at whatever directory you want.

              I know root / exists, but I didn’t know a good analog to C:. Thank you though, as some other members might not know it yet :)

              But also, 100€ ain’t all that cheap for some of us.

              Certainly. But nowadays even reputable brands sell >1 TB SSDs/HDDs for little money.
              They should suffice for backup purposes.
              If money is so tight that even a dedicated drive + backup software won’t fit, then you can only bridge the time until then with a USB drive or something else with a high enough capacity.
              And any (working) backup is better than none.

      • @null@slrpnk.net

        I have plenty of non-critical Linux ISOs that I don’t back up (because that’d be like 12 TB).

        But I’d still be pissed if I accidentally wiped them.

      • Rikudou_Sage

        I solved the problem by treating all data as ephemeral. Seriously, there’s nothing on my computer that I would miss. All my projects are in git, each with a shell.nix, so I don’t have to worry about my PC’s software; the occasional document is on my NAS, and so on.