Here is the text of the NIST SP 800-63B Digital Identity Guidelines.

  • @essteeyou@lemmy.world

    The place that truncates passwords is probably not the place to look for best practices when it comes to security. :-)

    • @orclev@lemmy.world

      Hashing passwords isn’t even best practice at this point, it’s the minimally acceptable standard.

      • @frezik@midwest.social

        Sorta. Not really.

        Key derivation functions are still hashes in most practical ways, though some of them are built directly on block ciphers (bcrypt, for example), so you could also say the passwords are encrypted, even though people say to hash passwords, not encrypt them.

        I find the whole terminology here unenlightening. It obscures more than it illuminates.

        • @orclev@lemmy.world

          A KDF is not reversible, so it’s not encryption (a bad one can be brute-forced or produce collisions, but that’s different from decrypting it, even if the outcome is effectively the same). As long as you’re salting (and ideally peppering) your passwords and the iteration count is sufficiently high, any sufficiently long password will be effectively unrecoverable via any known means (barring a flaw being found in the KDF).

          The defining characteristic that separates hashing from encryption is that hashing has no inverse function: there is nothing that can take the output plus one or more extra parameters (secrets, salts, etc.) and produce the original input, whereas with encryption there is.
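
          To make the “sufficiently high iteration count” point concrete, here’s a rough Python sketch (my own illustration, not something from the thread) that times PBKDF2-HMAC-SHA256 at two different counts. The cost of every single guess an attacker makes scales roughly linearly with the count; the specific numbers below are just example values.

          ```python
          import hashlib
          import os
          import time

          password = b"correct horse battery staple"
          salt = os.urandom(16)  # random per-user salt

          def time_one_derivation(iterations: int) -> float:
              """Time a single PBKDF2-HMAC-SHA256 derivation at the given iteration count."""
              start = time.perf_counter()
              hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
              return time.perf_counter() - start

          # Example counts only; tune the real value to your own hardware budget.
          for iterations in (1_000, 600_000):
              print(f"{iterations:>7} iterations: {time_one_derivation(iterations):.4f}s per guess")
          ```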

          • @frezik@midwest.social

            OK. How do you reconcile that with “Hashing passwords isn’t even best practice at this point”? Key derivation functions are certainly the recommended approach these days. If they are hashes, then your earlier post is wrong, and if they aren’t hashes, then your next post is wrong.

            • @orclev@lemmy.world

              The rest of that sentence is important. Hashing passwords is the minimum practice, not best practice. You should always be at least hashing passwords. Best practice would be salting and peppering them, as well as picking a strong hashing function with as high an iteration count as you can support. You would then pair that with 2FA (not SMS-based) and a minimum password length of 16 with no maximum length.
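
              As a rough sketch of what that might look like (illustrative only, using Python’s standard library; the pepper value, iteration count, and function names are placeholders, not a canonical implementation):

              ```python
              import hashlib
              import hmac
              import os

              PEPPER = b"app-wide-secret"  # hypothetical pepper, kept out of the database (config/KMS)
              ITERATIONS = 600_000         # example value; use as many iterations as you can support

              def hash_password(password: str) -> tuple[bytes, bytes]:
                  """Return (salt, hash) for storage; the salt is unique to this user."""
                  salt = os.urandom(16)
                  digest = hashlib.pbkdf2_hmac("sha256", password.encode() + PEPPER, salt, ITERATIONS)
                  return salt, digest

              def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
                  """Recompute the hash and compare in constant time."""
                  digest = hashlib.pbkdf2_hmac("sha256", password.encode() + PEPPER, salt, ITERATIONS)
                  return hmac.compare_digest(digest, stored)

              salt, stored = hash_password("a passphrase of 16+ chars")
              assert verify_password("a passphrase of 16+ chars", salt, stored)
              ```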

        • @pivot_root@lemmy.world

          Use a library. It’s far too easy for developers or project managers to fuck up the minimum requirements for safely storing passwords. (A rough sketch of the library route follows the checklist below.)

          But if you want to do it by hand…

          • Don’t use a regular hashing algorithm, use a password hashing algorithm
          • Use a high iteration count to make it too resource-intensive to brute force
          • Salt the hash to prevent rainbow tables
          • Salt the hash with something unique to that specific user so identical passwords have different hashes
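
          For the library route mentioned above, a minimal sketch (assuming the third-party argon2-cffi package; the surrounding function names are made up for illustration) might look like this. The library generates a random per-user salt and encodes it, along with the cost parameters, into the returned hash string, which covers most of the checklist automatically:

          ```python
          # pip install argon2-cffi  (assumed third-party dependency)
          from argon2 import PasswordHasher
          from argon2.exceptions import VerifyMismatchError

          ph = PasswordHasher()  # Argon2id with the library's default cost parameters

          def register(password: str) -> str:
              # The returned string encodes the algorithm, cost parameters,
              # a random per-user salt, and the hash itself.
              return ph.hash(password)

          def login(stored_hash: str, password: str) -> bool:
              try:
                  ph.verify(stored_hash, password)
                  return True
              except VerifyMismatchError:
                  return False

          stored = register("correct horse battery staple")
          print(login(stored, "correct horse battery staple"))  # True
          print(login(stored, "wrong password"))                # False
          ```
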
          • @Buddahriffic@lemmy.world

            I remember hearing that you shouldn’t layer encryptions or hashes on top of one another. It didn’t make any sense to me at the time. It was presented as if that weakened the encryption somehow, though it wasn’t elaborated on (it was a security-focused class, not an encryption-focused one, so it didn’t go heavy into the math).

            Like my thought was, if doing more encryption weakened the encryption that was already there, couldn’t an attacker just do more encryption themselves to reduce entropy?

            The class was overall good, but this was still a university-level CS course and I really wish I had pressed on that bit of “advice” more. My best guess at this point is that I misunderstood what was really being said, because it just never made any sense at all to me.

            • @orclev@lemmy.world

              It’s because layering doesn’t really gain you anything, so it only has downsides. It’s important to differentiate encryption and hashing from here on, since the dangers are different.

              With hashing, layering different hashing algorithms can lead to an increased collision chance and, if done wrong, reduced entropy (for instance, hashing a 256-bit hash with a 16-bit hashing algorithm). Done correctly it’s probably fine, and in fact rehashing a hash with the same algorithm is standard practice, but care should be taken.
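
              As a toy illustration of that entropy point (my own example, not from the thread): feeding a 256-bit hash through a 16-bit “hash” caps the composition at 65,536 possible outputs, so collisions show up almost immediately.

              ```python
              import hashlib

              def hash_256(data: bytes) -> bytes:
                  """A strong 256-bit hash."""
                  return hashlib.sha256(data).digest()

              def toy_hash_16(data: bytes) -> bytes:
                  """A deliberately tiny 16-bit 'hash' (a truncation, purely for illustration)."""
                  return hashlib.sha256(data).digest()[:2]

              seen = {}
              for i in range(100_000):
                  layered = toy_hash_16(hash_256(str(i).encode()))
                  if layered in seen:
                      print(f"collision: input {i} and input {seen[layered]} give the same output")
                      break
                  seen[layered] = i
              ```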

              With encryption things get much worse. When layering encryption algorithms, a flaw in one can severely compromise them all, because presumably you’re using the same secret across all of them. If the attacker has a known piece of input, or can potentially control the input, a variety of attack vectors open up, and a flaw in one of the algorithms used can make extracting the encryption key much easier. Oftentimes the key is more valuable than any single piece of input, because keys are often shared across many encrypted files or data streams.

              • @Buddahriffic@lemmy.world

                With the hash one, it doesn’t look like that could be exploited by an attacker doing the bad hashing themselves, since any collisions they do find will only be relevant to the extra hashing they do on their end.

                But that encryption one still sounds like it could be exploited by an attacker applying more encryption themselves. Though I’m assuming there’s a public key the attacker has access to, and if more layers of encryption make it easier to determine the associated private key, couldn’t they just do that?

                Though when you say they share the same secret, my assumption is that a public key for one algorithm doesn’t map to the same private key under another algorithm, so wouldn’t cracking one layer still be uncorrelated with cracking the other layers? Assuming it’s not reusing a one-time pad or something like that, so I guess context matters here.

          • @Laser@feddit.org

            > Salt the hash with something unique to that specific user so identical passwords have different hashes

            Isn’t that… the very definition of a salt? A user-specific known string? Though my understanding is that the salt gets appended to the user-provided password, which is then hashed and checked against the record, so I wouldn’t say that the hash is salted, but rather the password.

            Also, using a pepper is good practice in addition to a salt, though of the two the salt is the more important.

            • @frezik@midwest.social

              Some implementers reuse the same salt for all passwords. It’s not the worst thing ever, but it does make them substantially easier to crack than if each password has its own salt.

              • @orclev@lemmy.world

                That’s a pepper, not a salt. A constant value added to the password that’s the same for every user is a pepper and prevents rainbow table attacks. A per-user value added is a salt and prevents a number of things, but the big one is being able to overwrite a user’s password entry with another known user’s password (perhaps via a SQL injection).
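
                A schematic of that distinction (illustrative Python with made-up names; the record layout is just an example): the salt is generated per user and stored next to the hash, while the pepper is a single application-wide secret that never goes in the database.

                ```python
                import hashlib
                import os

                PEPPER = b"app-wide-secret"  # pepper: one value for the whole app, kept outside the DB
                ITERATIONS = 600_000         # example cost parameter

                def make_record(password: str) -> dict:
                    salt = os.urandom(16)    # salt: a fresh value for each user, stored with the hash
                    digest = hashlib.pbkdf2_hmac("sha256", password.encode() + PEPPER, salt, ITERATIONS)
                    return {"salt": salt, "hash": digest}

                # Identical passwords still produce different stored hashes because the salts differ.
                alice = make_record("same password")
                bob = make_record("same password")
                print(alice["hash"] != bob["hash"])  # True
                ```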