Apollo wrote: maybe some Chinese freak invents a vulnerability in 3 years from now by abusing some unforeseen bit correlation occurring from repeatedly feeding H's output to itself. We really can't tell.
The thing is, you don't need to worry about that
if the underlying hash algorithm is collision resistant. But how do you know it is? You don't. So, yes, I could agree to a certain extent, but if a collision weakness were discovered in a hash function, I would simply stop using that hash function.
Apollo wrote: By using a longer salt (e.g. several KBs, or even MBs if you want) you can slow down just as much (computational time of H linearly depends on length of salt) without losing any entropy for sure.
I agree, although that is a bit pointless. If you want to slow down a brute force attack, hash iteratively. If you do not care that much about client side security, hash once. Iterative hashing slows down attacks more than lengthy salts do, and long salts also make your own hash generation slower, especially since generating KBs (not to mention MBs) of random data is very, very inefficient...
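A minimal sketch of what I mean by hashing iteratively (the password, salt, and iteration count are made-up examples); the standardized way to do the same thing is PBKDF2, which the Python stdlib exposes directly:

```python
import hashlib
import os

def slow_hash(password: bytes, salt: bytes, iterations: int = 10_000) -> bytes:
    """Naive iterated SHA-256: every extra iteration multiplies attacker cost."""
    digest = hashlib.sha256(salt + password).digest()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

salt = os.urandom(16)            # a short random salt already defeats rainbow tables
h = slow_hash(b"hunter2", salt)  # same password + same salt -> same 32-byte digest

# The standardized equivalent from the stdlib (HMAC-based, preferred in practice):
h2 = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 10_000)
```

Note that the salt stays short here; the cost knob is the iteration count, not the salt length.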
Apollo wrote: I was referring here to brute forcing H(salt.unknownpassword), i.e. one specific password. Or, actually, one specific hash value for which we're trying to find a matching password string. Without salt, you could use rainbow tables. With salt, you can't, and brute forcing becomes longer depending on salt's length. So you'd have to calculate H(salt.bruteforceattemptX) every time. But since the salt is always the first part, you can partially perform the hash algorithm up to the salt once, and just calculate the rest for every brute force attempt. As opposed to H(unknownpassword.salt), where this is not possible.
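The precomputation Apollo describes falls out of the Merkle-Damgård structure: the hash state over a fixed prefix can be computed once and cloned for every guess. A sketch using hashlib's `copy()` (the salt and guesses are illustrative; the salt is one full 64-byte block so a compression really is saved):

```python
import hashlib

salt = b"A" * 64  # exactly one SHA-256 input block

prefix = hashlib.sha256(salt)  # absorb the salt once, before any guessing starts

def try_guess(guess: bytes) -> bytes:
    h = prefix.copy()  # clone the precomputed state instead of rehashing the salt
    h.update(guess)    # only the candidate password costs anything per attempt
    return h.digest()

# identical to hashing salt + guess from scratch
assert try_guess(b"letmein") == hashlib.sha256(salt + b"letmein").digest()
```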
I know what you are saying, but I think it's just a side effect. I would not worry about having my salt at the start of the mix, because I use the salt to prevent rainbow table attacks and to stop identical passwords from producing identical hash values. If I wanted to slow down brute force attacks (for the sake of client side security), I would hash iteratively, which makes a lot more sense.
The same thing can be seen in the HMAC standard: you can precompute the ipad and opad states if you want, but slowing down cracking is not what they are there for.
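The ipad/opad precomputation works exactly like the salt-prefix trick above: the two key-dependent states are computed once and cloned per message. A sketch of the HMAC-SHA-256 construction (cross-checked against the stdlib `hmac` module, which is what you should actually use):

```python
import hashlib
import hmac as std_hmac  # only used to cross-check the sketch

BLOCK = 64  # SHA-256 block size in bytes

def make_hmac_sha256(key: bytes):
    if len(key) > BLOCK:
        key = hashlib.sha256(key).digest()  # long keys are hashed down first
    key = key.ljust(BLOCK, b"\x00")         # then zero-padded to one block
    inner = hashlib.sha256(bytes(b ^ 0x36 for b in key))  # state after K ^ ipad
    outer = hashlib.sha256(bytes(b ^ 0x5C for b in key))  # state after K ^ opad
    def mac(message: bytes) -> bytes:
        h = inner.copy()       # reuse the precomputed inner state
        h.update(message)
        o = outer.copy()       # reuse the precomputed outer state
        o.update(h.digest())
        return o.digest()
    return mac

mac = make_hmac_sha256(b"secret key")
assert mac(b"msg") == std_hmac.new(b"secret key", b"msg", hashlib.sha256).digest()
```

The precomputation saves two compressions per message, but that is an efficiency bonus, not a security property.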
If I want to slow down brute force attacks, I would probably hash iteratively 10 000 times. The question is: how large would a salt have to be to achieve the same effect? Roughly 640 KB in the case of SHA-256, since each 64-byte block of salt costs one compression, just like each iteration does. Now imagine a site with 100 000 users: that would be some 64 GB of database storage just for the salts, plus the bandwidth it consumes and the execution time it eats, not to mention the time it takes to generate that much random data. By the way, have you ever tried generating 64 GB of cryptographically strong random data? It takes a very, very long time...
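A back-of-envelope check of those figures, under the rough model that SHA-256 performs one compression per 64-byte input block and each iteration costs one compression (the exact equivalence depends on the iteration construction, so treat these as order-of-magnitude numbers):

```python
ITERATIONS = 10_000
BLOCK = 64        # bytes processed per SHA-256 compression
USERS = 100_000

# salt length whose hashing cost matches the iterated scheme
equiv_salt = ITERATIONS * BLOCK   # bytes per user
total = equiv_salt * USERS        # salt storage for the whole user table

print(equiv_salt / 1000, "kB per user")  # 640.0 kB
print(total / 1e9, "GB in total")        # 64.0 GB
```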
I agree with you in theory, but in practice salts are unusable for slowing anything down. You can place the salt at the beginning if you want; that is not a problem.