I think the SO answer is pretty dismissive. While it's certainly true that, in practice, neural networks aren't good for this kind of analysis, the deeper question is whether the method can be applied at all. To that, I'm not sure there's a definitive answer.
We're still trying to understand which domains neural networks are good for and where they fail. There's been some theoretical work (that I know of) that talks about getting closer to an underlying "theory" of why neural networks work and what domains they work on. I don't claim any deep knowledge here, but from what little I've seen, the idea seems to be that neural networks are good at problems that are composable (that is, f(x, y, ...) = g0(g1(... gn(x, y, ...)))) and perhaps continuous in some sense. Hash functions are certainly composable but fail on the continuity part, pretty much by construction.
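To make that discontinuity concrete, here's a quick sketch (Python with hashlib; the message and the choice of bit to flip are arbitrary) of the avalanche effect: flipping a single input bit changes roughly half of the 256 output bits, so "nearby" inputs don't map to nearby outputs in any useful metric.

    import hashlib

    def hamming_distance(a: bytes, b: bytes) -> int:
        # Number of differing bits between two equal-length byte strings.
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    msg = bytearray(b"hello world")
    h1 = hashlib.sha256(bytes(msg)).digest()

    msg[0] ^= 0x01  # flip the lowest bit of the first byte
    h2 = hashlib.sha256(bytes(msg)).digest()

    print(hamming_distance(h1, h2))  # typically prints something near 128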
See here, here, here and here for some more fundamental (and heavily physics-focused) attempts at understanding neural networks from a theoretical angle. My naive take is that if you have something you're trying to train a neural network on, you can roughly think through the steps you'd need to take to get the answer you're looking for (take a face, normalize it to the center of the image, look for certain high-level patterns, etc.). That gives you some rough bounds on the entropy and/or the number of function compositions, which can then be translated into how many layers the network needs and how wide each layer is.
Before concluding that hash functions entirely lack this continuity property, note that there have been some recent "differential fault analysis" attacks on classical hash functions (see here) that, to me at least, have the flavor of applying these kinds of "classical" techniques to the analysis of hash functions.
You quickly run into even more theoretical questions of "do one-way hash functions exist" (functions that are cheap to compute but whose inverse takes exponential, i.e. super-polynomial, time) and, if I'm understanding it right, this gets into even deeper questions of whether NP = co-NP and other issues (see here and the referenced theoretical work here).
In my opinion, we're still in our infancy in understanding these deeper issues around NP problems. So the existence of one-way functions, be it factoring, elliptic curves, or hash functions like SHA256, is an empirical statement of "there might be ways to invert these efficiently; they just mostly look hard to us" rather than "there's a theoretical backing for why these functions are one-way".
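As a toy illustration of that forward/inverse gap (a sketch only; the 2-byte prefix and the decimal-counter candidates are arbitrary choices of mine): computing SHA256 is cheap, but the only generic way to invert it is brute force, and the expected cost doubles with every extra bit you try to match.

    import hashlib
    import itertools

    def find_partial_preimage(prefix_bytes: int, target: bytes) -> bytes:
        # Brute-force a message whose SHA-256 agrees with `target` on the
        # first `prefix_bytes` bytes; expected cost grows as 2^(8 * prefix_bytes).
        for counter in itertools.count():
            candidate = str(counter).encode()
            if hashlib.sha256(candidate).digest()[:prefix_bytes] == target[:prefix_bytes]:
                return candidate

    target = hashlib.sha256(b"secret").digest()
    # Matching 2 bytes (16 bits) takes ~2^16 attempts in expectation;
    # matching all 32 bytes would take ~2^256 -- that's the super-polynomial gap.
    print(find_partial_preimage(2, target))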
This is kind of a silly question given that hashes are designed to be randomly distributed, and hence hopefully without discernible patterns, but it's an interesting one.
I find silly questions a great way to learn. For instance, this one motivates, in an entertaining way, a deeper discussion of what a hash function is and how it could be compromised.
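On the "randomly distributed" point, here's a small check (a sketch; hashing the decimal strings 0..99999 is an arbitrary choice): bucketing SHA256 outputs by their first byte gives counts close to uniform, i.e. no obvious pattern to learn.

    import hashlib
    from collections import Counter

    N = 100_000
    # Count how often each of the 256 possible first bytes occurs.
    counts = Counter(hashlib.sha256(str(i).encode()).digest()[0] for i in range(N))

    # With 256 buckets and 100k samples, each bucket expects ~390 hits;
    # min and max should both land close to that if the output looks uniform.
    print(min(counts.values()), max(counts.values()))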
Yes; it's an interesting question, with a very good and intelligent answer. I, for one, am very happy that there are people, "hackers" for want of a better term, who hammer on things such as SHA256 just to prove how robust they are.