• TimeSquirrel
    49
    6 months ago

    If every time an OS had to delete something it had to fill the space with zeros or garbage data multiple times just to make extra sure it’s gone, we’d all be trashing our flash chips very fast, and performance would be heavily degraded. There really isn’t a way around this.

    The solution to keep private files private is to put them into an encrypted container of some sort where you control the keys.

    • @5too@lemmy.world
      64
      6 months ago

      Step away from hardware constraints for a moment, and consider the OS:

      If the OS says a file is deleted, under no circumstances should the OS be able to recover it. Sure, certain tools may exist to pull it back; but it should be unavailable to the OS after that. And yet, apparently a software update was enough to recover these files. Thus, the concerns about data safety in an environment where the OS cannot be trusted to remove data when it says it has been removed.

      • TimeSquirrel
        22
        6 months ago

        So let’s stop calling it “deleted” then, and call it what it is. “Forgetting”.

        I’m not sure what you actually want the OS to do about it other than, as I said, filling it with random data.

        • borari
          10
          6 months ago

          I think this is just semantics at this point, but to me there is a difference between “deleted” and “erased”. I see “deleted” as the typical “moved to trash” or rm action, and “erased” as overwritten bits, or like microwaving a drive.

          Edit - If I remember correctly, deleting something in most OSes/file systems just deletes the pointer to that file on disk. The data just hangs out until new data is written to that sector. The solution, other than the one you mentioned about encrypting stored data and destroying the key when you want the data “deleted”, would be to only ever store data in volatile memory. That would make for a horrendous user experience, though.
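The pointer-only deletion described above can be sketched with a toy block store (an illustration of the idea, not how any real filesystem is implemented):

```python
# Toy illustration: "deleting" a file removes only the directory entry;
# the data block is merely marked free and keeps its old contents until
# something new is written over it.

class ToyFS:
    def __init__(self, num_blocks=8):
        self.blocks = [b""] * num_blocks       # raw storage
        self.free = list(range(num_blocks))    # free-block list
        self.directory = {}                    # filename -> block index

    def write(self, name, data):
        idx = self.free.pop(0)
        self.blocks[idx] = data
        self.directory[name] = idx

    def delete(self, name):
        # Only the pointer goes away; the block returns to the free
        # list with its old contents intact.
        idx = self.directory.pop(name)
        self.free.insert(0, idx)

fs = ToyFS()
fs.write("secret.txt", b"my password")
fs.delete("secret.txt")

print("secret.txt" in fs.directory)   # False: the OS has "forgotten" it
print(b"my password" in fs.blocks)    # True: the bytes are still there

fs.write("new.txt", b"fresh data")    # the freed block gets reused
print(b"my password" in fs.blocks)    # False: old data finally overwritten
```

The old data only disappears when the freed block happens to be reused, which is exactly why recovery tools work.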

          • Hildegarde
            6
            6 months ago

            You can delete files by overwriting the data. On Linux it’s shred -zu [file]. It’s slow, but good to do if you are deleting sensitive data.

            It’s good it’s not the standard delete function.
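What shred does can be approximated in a few lines of Python (a sketch of the same idea, not a replacement for the real tool):

```python
# A minimal shred-like erase: overwrite the file's bytes in place with
# random data, finish with a zero pass (like shred -z), then unlink.
# Caveat: on SSDs with wear leveling, in-place overwrites are not
# guaranteed to hit the same physical cells as the original data.
import os

def shred_file(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random-data pass
            f.flush()
            os.fsync(f.fileno())        # force the write to the device
        f.seek(0)
        f.write(b"\x00" * size)         # final zero pass
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)                     # now drop the directory entry
```

The fsync after each pass matters: without it the passes may just overwrite each other in the page cache and hit the disk once.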

            • Liz
              2
              6 months ago

              Question: what fraction of bits do you need to randomly flip to ensure the data is unrecoverable?

              • @barsoap@lemm.ee
                5
                6 months ago

                Information theory aside: in practice, all of them, because you can’t write bit-by-bit, and if you leave full bytes untouched there might still be enough information for an attacker to learn something, especially information of the “did this computer once store this file” kind, rather than the actual file contents.

                If I’m not completely mistaken, overwriting the file once is enough to prevent recovery by logical means, that is, reading the bits the way the manufacturer intended. Physical forensics can go further, discerning whether a bit was a 1 or a 0 before it got overwritten by looking very closely at the physical medium; how much flipping you need to defeat that depends on the physical details.

                And I wouldn’t be too sure about that electromagnet you built into your case to erase your HDD with a panic button: it’s in a fixed place and has a fixed magnetic field. It will scramble everything, sure, but the way it scrambles is highly uniform, so the bits can probably be recovered. If you want to be really sure, buy a crucible and melt the thing.

                Also, may I interest you in this stylish tin-foil hat? Special offer.

              • Hildegarde
                3
                6 months ago

                If you delete normally, only the index of the file is removed, so the data can be recovered by a recovery program reading the “empty” space on the disk and looking for readable data.

                If you do a single-pass erase, the bits will be overwritten one time. About half the bits will end up unchanged, but that makes little difference: any recovery software trying to read them will read the newly written bits instead of the old ones and will not be able to recover anything.

                However, forensic investigation can probably recover data after a single pass erase. The shred command defaults to 3 passes, but you can do many more if you need to be even more sure.

                Unless you have data that someone would spend large sums on forensics to recover, 1 to 3 passes is probably enough.

              • Natanael
                0
                6 months ago

                If it’s completely random, then 50%; that’s how stream ciphers work.
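The 50% figure is easy to check empirically: XORing data with a uniformly random keystream (which is effectively what a stream cipher does) flips each bit independently with probability 1/2, so on average half the bits change, and which half is unpredictable.

```python
# Measure the fraction of bits flipped when XORing data with a
# uniformly random keystream. Expect roughly 0.5.
import os

data = os.urandom(100_000)
keystream = os.urandom(100_000)
cipher = bytes(d ^ k for d, k in zip(data, keystream))

# d ^ c recovers the keystream byte; counting its set bits counts flips.
flipped = sum(bin(d ^ c).count("1") for d, c in zip(data, cipher))
total = len(data) * 8
print(f"fraction of bits flipped: {flipped / total:.3f}")
```

With 800,000 bits the measured fraction lands extremely close to 0.5 every run.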

    • Kairos
      1
      6 months ago

      Well, the storage device should handle that then, and modern NVMe drives do. Self-encrypting drives hide deleted information from an attacker who desolders the storage chips.

      Edit: there are NVMe drives that don’t use self-encryption, BUT they should still recognize a deleted sector.
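The idea behind self-encrypting drives is "crypto-erase": everything is stored encrypted, so destroying the single key instantly renders every sector unreadable. A toy sketch of the concept (the SHA-256-in-counter-mode keystream here is purely illustrative; real drives use hardware AES):

```python
# Toy crypto-erase demo. All "stored" data is encrypted under one key;
# throwing the key away "deletes" everything at once, even though the
# ciphertext bytes physically remain.
import hashlib
import secrets

def keystream(key, length):
    # Illustrative keystream: hash the key with a counter (NOT real crypto).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)              # held in the drive's controller
stored = xor(b"sensitive user data", key)  # what actually hits the flash

print(xor(stored, key))    # key present: data readable
key = None                 # "secure erase": destroy only the key
# 'stored' still sits in flash, but without the key it is just noise.
```

This is why a self-encrypting drive can "erase" terabytes in milliseconds: it only has to wipe the key, not the data.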

      • TimeSquirrel
        5
        6 months ago

        That would apply in my “encrypted container of some sort” solution, yes.

      • Natanael
        0
        6 months ago

        Deletion commands are unfortunately not very reliable on many SSDs.