I was going to call this "Perfidity or Stupidity", until I saw the lean number of Google hits for "perfidity", and that Chambers Dictionary can't find the word. In any case, it may be better to avoid the pejorative aspects of "stupidity" :-)
We perform the Turing Test every day (and often lose) whenever we have to consider whether material is from a human (e.g. email from a user) or a bot (e.g. email from a user's infected computer). This is a generalized identity/category test, similar to "is this my bank's site, or is it a phishing site?"
When we find something that sucks (or is downright dangerous) we also ask ourselves: were they stupid and did this by accident, or are they perfidious and did this to further a hidden and possibly malicious agenda?
This question runs as a vertical slash through the Trust Stack. Things that would be errors in the lower levels of the stack if they were there by mistake would in fact be failures in the top levels of the stack if they were there intentionally. This applies particularly at the safety and design layers of the stack, which may be where most exploitability occurs.