Deconfusion-oriented typology of concepts
Here are a few categories of concepts that may be useful to keep in mind when doing deconfusion-like work.
Tin concepts
"Tin" standing for "theoretically thin".
A tin concept is a perfectly valid concept, not just a grab bag of examples sharing some vague similarity. It points at something in the world. However, the thing it points at is so general or so weakly specified that there is little or no theory to be made about it.
Plausible examples: (generalized) selection, optimization(?).
Tick concepts
"Tick" standing for "theoretically thick".1
A tick concept is a concept capable of producing non-trivial insights, inferences, or predictions that can be tested. A tick concept can serve as the foundation for a theory/model. It points at a nexus of inductive inference.
Tinness/tickness is a spectrum.
Suitcase concepts
In his book The Emotion Machine, the late Marvin Minsky coined the term 'suitcase word' to describe words into which people attribute—or pack—multiple meanings. … His observation was that many of the words commonly used to describe the human mind—such as consciousness, intuition, intelligence, or learning—were suitcase words whose embedded intricacies presented a real challenge when attempting to apply experimental and computational science to a system as complex as the human brain.
Suitcase words can become hazardous when we assume that everyone else attributes the same meaning. For example, to some, the term “social justice” might imply a need for more social welfare programs and a more progressive tax code; but for others, the term may imply the exact opposite!
(Source.)
Conflationary alliance concepts
When X is a conflated term like "consciousness", large alliances can form around claims like "X is important" or "X should be protected". Here, the size of the alliance is a function of how many concepts get conflated with X. Thus, the alliance grows because of the confusion of meanings, not in spite of it. I call this a conflationary alliance. Persistent conflationary alliances resist disambiguation of their core conflations, because doing so would break up the alliance into factions who value the more precisely defined terms. This resistance to deconflation can be deliberate, or merely a social habit or inertia. Either way, groups that resist deconflation tend to last longer, so conflationary alliance concepts have a way of sticking around once they take hold.
See also: All hail the Omnicause!
Accidentally load-bearing (alb) concepts
Inspired by Jeff Kaufman's post:
A few years ago I was rebuilding a bathroom in our house, and there was a vertical stud that was in the way. I could easily tell why it was there: it was part of a partition for a closet. And since I knew its designed purpose and no longer needed it for that anymore, the Chesterton's Fence framing would suggest that it was fine to remove it. Except that over time it had become accidentally load bearing: through other (ill-conceived) changes to the structure this stud was now helping hold up the second floor of the house. In addition to considering why something was created, you also need to consider what additional purposes it may have since come to serve.
See also: Hyrum's Law. (H/t LW user Andrew Antes.)
An accidentally load-bearing ("alb" for short) concept is a concept on top of which you've built a lot of structure/understanding/mental models, even though [participating in structures of that type] wasn't the concept's purpose. You used the concept "just because it was there": you had to found the structure/understanding/mental models you were building on something.
Alternative name: "implicitly assumed crucial invariant".
Footnotes
1. Also, they tick like a working clock ticks because they do very useful work. ↩