Annotations
As some context on the author:
Dietrich Bonhoeffer was, among other things, a Lutheran pastor and anti-Nazi dissident who was accused of involvement in a plot to assassinate Adolf Hitler in July of 1944, roughly 8 months before Hitler killed himself. He was convicted by the Third Reich, and executed three weeks before Hitler’s death and the fall of the regime.
A quick return from later in the document: here, he is not referring to stupidity as a lack of intellect, or some kind of “the average person is stupid and I’m better than them” mentality. The core of this is a willingness to conform for the sake and ease of conforming, and a willingness to accept a theory at face value without considering it at a deeper level.
Not to tie everything back to software development, but…
This is very true in new technologies. Ultimately, I don’t think there will ever be an adequate way to fuel the adoption of a new technology by mandate. In a “carrot and stick” framing, there’s pretty much no stick available that will make someone want to try a new solution, or adopt a new way of thinking about a problem.
Adoption — of facts, frameworks, or solutions — has to come from a positive, promotional place, or it just won’t stick. It ties back to arguments I have with people who claim that “If everybody just blank, the problem would be solved!” I invite anybody reading to send me even one example in history where a big group of people “just did” something without some positive incentive to do so.
There’s the crux of the definition — that making a habit of not incorporating additional information, or of not seeking deeper truths that incorporate new or updated World 1 information, is an abdication, not an inherent quality in a person.
from above: the abdication of responsibility in establishing a grounded position.
I’m not sure this is exactly what he’s going for, but I do think it’s a good distinction between the desire to be right, and the desire to actually inspire positive change.
I’ll hijack the citation, here, for notes on my dad’s actual presentation:
Conspiracy: Oddly enough, conspiracy in itself is not an actual crime. Let’s say you wanted to hire a hitman to murder your spouse. Is talking with somebody about that a crime?
Not quite. The crime is not in the planning — it’s in an overt act made to further the plan. For example, let’s say you are hatching a murder-for-hire plot. You talk to a hitman. You agree on the plan for the murder. So far, you haven’t committed a crime.
However, you eventually have to pay the person, or send them the victim’s schedule. That, then, is an overt act — a material contribution to further the plan.
Conspiracism, then, is the formation of a conspiracy theory — the theory that there is a conspiracy occurring. If we think about this logically — i.e., if A and B (World 1) are true, then C (World 3) must also be true — then these are the patterns that enforce the theory C against new information that may undermine it.
A superstructure of conspiracism is self-reference. If you have some theory, Theory 1, built from three pieces of external evidence — Exhibits A, B, and C — then the shape of Theory 1 is a downstream dependent of A, B, and C. Self-reference, however, introduces a cyclical dependency into this structure. While it could be a single-length closed loop (Theory 1 is true because Theory 1 is true), the cycles may be longer.
To use a more concrete example, let’s say that you are accusing a government of a specific act of corruption. Let’s call it bribes. We’ll say that Theory 1 is “The government is generally corrupt” and that Theory 2 is “The government is taking illegal bribes”.
A cyclical dependency, then, could occur between these two theories. The argument would go like this:
- A government taking bribes is a corrupt government; and
- The government is corrupt because they are taking bribes
This would be a closed loop that reinforces itself. Let’s say that the accusation of bribes ends up being baseless — evidence comes up that seems to show clean accounting, with no bribes.
In a linear case, the logic would go:
- The evidence shows that bribes were not occurring (Exhibit A); therefore
- The government was not taking bribes (updated Theory 2); therefore
- The government was not corrupt (updated Theory 1)
That’s the track back through the lineage, where a new piece of evidence iterates downward through the dependent chain and updates all dependent theories based on the new evidence. However, if you believe in the “They are taking bribes and must be corrupt, and must be corrupt because they are taking bribes” self-reinforcing cycle, then the evidence’s update chain may be:
- The evidence shows that bribes were not occurring (Exhibit A); however
  - 2a. The government is corrupt (Theory 1); and
  - 3a. Corrupt governments take bribes (Theory 2); therefore
  - 2b. The government is corrupt (Theory 1); and
  - 3b. Corrupt governments take bribes (Theory 2); therefore …
- Exhibit A must be wrong.
This is the self-referencing cycle in action to reject evidence.
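The two update patterns above — a clean downstream propagation versus a loop that bounces the evidence off — can be sketched as a toy walk over a belief-dependency graph. All names here (`propagate`, `taking_bribes`, `is_corrupt`) are illustrative inventions, not from the original text, and as a simplification the model treats any revisited theory as a cycle:

```python
def propagate(evidence_target, supports):
    """Walk the belief-dependency graph starting from the theory the new
    evidence undermines, retracting each dependent theory in turn.
    `supports[t]` lists the theories that depend on theory t.
    Returns the list of retracted theories, or None if a cycle is hit —
    the self-referencing case, where the update never terminates and the
    evidence gets rejected instead."""
    retracted, frontier, seen = [], [evidence_target], set()
    while frontier:
        theory = frontier.pop(0)
        if theory in seen:      # we've come back around: self-reference
            return None         # the cycle rejects the evidence
        seen.add(theory)
        retracted.append(theory)
        frontier.extend(supports.get(theory, []))
    return retracted

# Linear case: the bribes theory supports the corruption theory,
# and nothing loops back.
linear = {"taking_bribes": ["is_corrupt"]}
# Cyclic case: each theory supports the other.
cyclic = {"taking_bribes": ["is_corrupt"], "is_corrupt": ["taking_bribes"]}

propagate("taking_bribes", linear)  # ['taking_bribes', 'is_corrupt']
propagate("taking_bribes", cyclic)  # None — evidence bounces off the loop
```

In the linear case the retraction runs to the end of the chain and every downstream theory gets updated; in the cyclic case the walk revisits a theory it already retracted, which is the structural signature of “Exhibit A must be wrong.”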
A more insidious pattern, though, is how this implicitly strengthens Theories 1 and 2 to the point where they can feel like, and be used as, evidence for other theories. If it is reinforced by self-reference that a government is corrupt, there are a lot of other theories you could hook onto it. It’s a more mathematical definition of “If they’re lying about X, who knows what else they could be lying about?!”
In thinking about my dad’s job — investigation of facts — I think it’s a good illustration of the interaction between the functions of an executive and a judiciary. In terms of the investigatory bodies of the executive branch — FBI, DEA, etc. — the goal is to gather the body of World 1 (non-theory) materials. The goal of attorneys, then, is to construct theories (World 3) out of that evidence. The prosecution assembles evidence into a theory, and the defense assembles evidence into a theory. The role of the judiciary is to evaluate which theory satisfies the criteria for conviction.
When we talk about the branches of government — legislative, executive, and judicial — you could summarize them in three functions:
- Legislative: Define the values of the population — what do we consider to be wrong?
- Judiciary: Something happened — how do we feel about it, according to our values about what is wrong?
- Executive: Something has happened that requires action — what action do we need to take?
Tying back to this specific paper, then, we can categorize these three branches within a single person. As a human being, an agent in society, you also engage in these functions:
- Legislative: What do you, personally, believe is right and wrong?
- Executive: What information have you gathered, and what actions have you taken?
- Judiciary: With the information you have and beliefs you carry, what verdicts are you reaching?
I think that “stupidity”, in Bonhoeffer’s framework, could be described as a dysfunction between the executive and judiciary within a person. It is a judicial dysfunction to treat a theory as sufficient on limited information, which feeds an executive dysfunction of not seeking out or accepting contradictory information, which feeds the same judicial dysfunction — rinse, repeat.
As with Programming as Theory Building, we switch authors at this point. There’s a bit less information to go off of here, though. I thought that, perhaps, the source of this follow-on would’ve been Bonhoeffer’s published Letters and Papers from Prison, but considering that was the citation, I doubt that’d be the case. Unfortunately, I can find almost no other source for this specific portion. We might assume it is the same author as On the Wing.
I won’t be speaking to that body of work, here.
I do want to take this annotation, though, to address something that irks me about the paper. I’m hesitant to do so, since offering critique of a man who wrote this letter from a concentration camp from the comfort of my couch feels in somewhat poor taste. However, this additional portion takes what I feel is an odd aftertaste of the original text and dials it up to 11, and I have no reservation criticizing… let’s call them “the other author.”
Ultimately, if you look for sources about the Theory of Stupidity texts online, you’ll find a long series of sources talking about this — PhilosophEasy, Big Think, Psychofuturia, etc. — that track with a general trend of “Oh, other people are stupid. I could never be stupid!”
This bugs me. A lot.
That fault absolutely does not fall at Bonhoeffer’s feet; a man who faced and expected execution in 1945 obviously wrote this with no knowledge of, or say in, where PDFs of his work would end up online 80 years after his death. Additionally, I’d say he had every right, given his circumstances as of writing, to call Nazis, and those he felt didn’t do enough to resist them, idiots. All things considered, this is actually a pretty level-headed take on the situation.
However, I do take issue with any analysis of this paper that tries to imply that some group — “us” — is incapable of any irrationality, while other groups — “them” — are entirely incapable of rationality. This is particularly sharp in whichever author wrote the follow-on in this document. This section deeply upsets me — the attitude of “Us smart people have to deal with those stupid people!” feels like such a tragic misinterpretation of the original work, going as far as to be insulting to Bonhoeffer’s memory.
Even in looking up Carlo Cipolla’s cited work, The Basic Laws of Human Stupidity (best open-access source I could find, there — symptomatic of the above point about what attitudes and circles have co-opted this specific work), I find it difficult to separate out the core of the message (that it is important to measure the impact of irrational actors in an economic system, as a model that does not take into account irrational action is of limited real-world use) from the framing of the work (that there are two inherent types of people: stupid, irrational devils, and smart, rational angels).
To be irrational is to be human — to believe you are incapable of irrationality is to be an asshole. Bonhoeffer doesn’t specifically emphasize that, but I feel the last paragraph of the letter at least has the taste of it. Any person who believes that they are a perfectly rational actor and that those they deem irrational are, as the second author puts it, “…these stupid creatures,” has completely lost the plot.
Maybe they’re right! Maybe, I just don’t know enough about philosophy to understand that those who truly believe they’re above irrationality are true incarnations of Aristotle, cursed as a modern Cassandra to rant about those “other creatures” in shitty unread blog posts.
What I do know, though, is that I pray I’ll never find myself trapped at dinner with them.
This annotation is a bit different, as I have almost no idea what it contains going into it. This was a recommendation by my dad — in an effort, I think, to ground my ramblings from Programming as Theory Building.
This was a reference my dad provided prior to a guest presentation he did (or, as of the time I’m writing this, will be doing) as a section of a larger course on the history of conspiracy theories.
For close to 30 years — and throughout my entire childhood — my dad was a special agent at the FBI. His exact jobs varied, but all of his specialties — counter-terrorism, bank robberies, human trafficking, WMD investigations — ultimately came down to investigations and fact-finding.
Talking with him prior to reading this, it does feel like there’s an overlap with some of the World 1, 2, and 3 philosophy noted in Programming as Theory Building. Ultimately, the job of an investigation is to find and collect pieces of World 1 (physical) evidence. Taking that evidence to court, then, is a matter of going from World 1 to World 3: what theory can be built from the evidence?
In the context of conspiracy theories, it comes down to how evidence can be selectively rejected when constructing a theory: including enough evidence to be convincing, while excluding enough that the theory would not be reasonable once all of the data is accounted for.