Knowledge


Philosophical analysis of knowledge

On the "(Western) premodern/orthodox" view, knowledge was taken to be "justified true belief" (JTB). Then Gettier came and presented counterexamples, where a person has a justified belief that is true by mere accident or luck. Henceforth, such cases were known as "Gettier cases". People tried patching the JTB account of knowledge but nobody did so convincingly. Per Zagzebski, a large class of such patches can be Gettiered using a simple recipe.1 More generally, it seems like any global patch on the conditions constitutive of knowledge seems to leave some weird counterexamples.2 This should be taken as at least weak evidence that what we want from the concept of knowledge — to whatever extent it is adequate — is context-dependent, thus validating approaches such as pragmatic encroachment and contextualism.

Degettiering by repetition

A single Gettier case may3 feel like not-knowledge. However, if scenarios like those exemplified in a specific Gettier case were to repeat themselves reliably, so that the beliefs of the subject S strongly correlate with the relevant facts, this feels much more like knowledge. Intuitively, being accidentally right is not knowledge; however, being reliably right due to some reliably recurring accident is just exploiting regularities in the world for one's (epistemic) gain. (While I haven't done any systematic experiments, I weakly conjecture (based on the (generalized) Copernican principle) that humans other than myself would feel similarly.) (As a sidenote and generalization, our intuitions about single-shot cases can often diverge from intuitions about corresponding repetitive cases.4)
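
To make the intuition concrete, here is a toy simulation (my own illustration, not from any of the cited papers) contrasting a one-off Gettier-style accident with a reliably recurring one, using the stopped-clock case. The printed rates measure how well the subject's belief is coupled to the fact:

```python
import random

def stopped_clock_trial(rng: random.Random) -> bool:
    """One-shot Gettier case: the clock is stuck at 12:00 (minute 720 of 1440),
    and the subject looks at a uniformly random minute of the day."""
    true_minute = rng.randrange(1440)
    reading = 720
    return reading == true_minute  # belief is true only by accident

def recurring_accident_trial(rng: random.Random) -> bool:
    """'Reliably recurring accident': by some quirk, a passer-by resets the
    clock to the correct time just before each look. Still luck from the
    subject's point of view, but the belief now tracks the fact."""
    true_minute = rng.randrange(1440)
    reading = true_minute
    return reading == true_minute

rng = random.Random(0)
trials = 100_000
print(sum(stopped_clock_trial(rng) for _ in range(trials)) / trials)      # ~1/1440
print(sum(recurring_accident_trial(rng) for _ in range(trials)) / trials)  # 1.0
```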

It seems to me that what we (should) want from our concept of knowledge is something like reliable coupling to particular (kinds of) facts about the World. We want some things in the World. To get the things we want in the World, it's good to be appropriately guided by relevant aspects/factors/features5 of the World. This seems to be the joint-carving thingy in the vicinity of the (folk) concept of knowledge that we should be interested in. Moreover, what facts are relevant and what kind of coupling is relevant is local/contextual, which is in agreement (?) with the evidence for the contextual character of knowledge mentioned before.

Here's a possible explication of knowledge:

We talk about "knowledge" when we want to round up or compress reliable guidance by relevant aspects/factors/features of the World.

Examples:

  • He knows whether it rains. — He is reliably guided by whether it rains.
  • He knows that he has hands. — He has hands and is reliably guided by his having hands (counterfacting on not having hands).

At this point, we need to further explicate the concepts of "reliable", "guidance", and possibly also "counterfactuality".
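
One way (in the spirit of tracking accounts, not a settled definition) to cash out "reliable guidance" is as covariation under intervention: counterfactually set the fact both ways and check that the belief follows it. A minimal sketch, with a made-up noisy perception procedure standing in for the subject's belief formation:

```python
import random

def perceives_hands(world: dict, rng: random.Random) -> bool:
    """Hypothetical belief-forming procedure: tracks the fact with 1% noise."""
    fact = world["has_hands"]
    return fact if rng.random() > 0.01 else not fact

def coupling_score(belief_fn, fact_key: str, trials: int = 10_000) -> float:
    """Reliable guidance as counterfactual covariation: intervene on the fact,
    setting it both ways, and measure how often the belief agrees with it."""
    rng = random.Random(0)
    agree = 0
    for _ in range(trials):
        fact = rng.random() < 0.5       # counterfacting on the fact
        world = {fact_key: fact}        # the intervened-on world
        agree += belief_fn(world, rng) == fact
    return agree / trials               # near 1 => knowledge-grade coupling

print(f"{coupling_score(perceives_hands, 'has_hands'):.3f}")  # ≈ 0.99
```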

Knowledge and probabilism

Suppose that we are probabilists, i.e. we have internalized probabilistic6 epistemology and use the language of probabilities to communicate degrees of credence when it's convenient. (I'm not claiming that probabilistic credences are a complete description of one's epistemic state; only that using probabilities is an important part of our epistemology).

Given that, do we still have the need for the concept of knowledge?

When does a probabilist need the concept of knowledge?

Naively, a Bayesian reasoner doesn't need knowledge. You just keep refining your beliefs and doing your best based on your current beliefs. However, the concept of knowledge can still be useful for several purposes.

  • Thinking about what one knows above a certain degree of credence.
  • Communicating what one knows above a degree of credence such that the precise credence (e.g. whether it's 0.99 or 0.999) is irrelevant (see the sketch after this list).
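
For instance, a minimal sketch of this rounding-up, assuming the threshold is set by the stakes of the context (the thresholds themselves are made-up numbers):

```python
def knows(credence: float, stakes: str) -> bool:
    """Round a credence up to 'knowledge' when the residual uncertainty is
    irrelevant for the purposes at hand. Thresholds are illustrative."""
    threshold = {"casual": 0.95, "important": 0.99, "critical": 0.999}[stakes]
    return credence >= threshold

print(knows(0.99, "casual"))    # True: whether it's 0.99 or 0.999 doesn't matter
print(knows(0.99, "critical"))  # False: here the precise credence does matter
```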

Frank Hong introduces knowledge-first two-step decision theory. He proposes to [think of]/use knowledge as a constraint on credences as well as actions. This can be used to avoid certain pitfalls of standard utility theory, such as the St. Petersburg Paradox. Knowledge is defined in a somewhat idiosyncratic albeit very interesting way, putting it in a specific relation to justified beliefs and making it {context,question}-dependent. The definition is in terms of subjective probabilistic credences plus some externalist conditions that, unfortunately, make it useless for recommending decisions, which is what I'm looking for in decision theory.
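
This is not Hong's actual formalism, but the pitfall itself is easy to exhibit: the St. Petersburg game pays $2^k$ if the first heads lands on flip $k$, so each flip contributes $2^{-k} \cdot 2^k = 1$ to the expectation and the sum diverges, while a knowledge-style constraint such as "I know the game ends within $n$ flips" caps the expectation at $n$:

```python
def st_petersburg_ev(max_flips: int) -> float:
    """Expected value of the St. Petersburg game truncated at max_flips.
    Flip k contributes P(first heads at k) * payoff = (1/2**k) * 2**k = 1."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_flips + 1))

print(st_petersburg_ev(10))   # 10.0
print(st_petersburg_ev(100))  # 100.0 -- unbounded as the truncation grows
# A knowledge-style cutoff ("I know the game ends within 40 flips") bounds
# the expectation at 40; without one, an expected utility maximizer pays
# any finite price to play.
```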

Reducing knowledge to probabilistic belief

How would one reduce knowledge to probabilistic belief?

A most naive attempt would be to equate knowledge that $X$ with a probabilistic belief $1-\epsilon < P(X) < 1$ (for some $0 < \epsilon \ll 1$) reliably coupled with the truth of $X$. One can be accurate in one's judgment by sheer luck, so we may want to patch this with some kind of relationship between $X$ obtaining in the world and the subject believing "$X$" with probability $P(X) > 1-\epsilon$, e.g. a reliable causal coupling at some previous time and coherence or reference maintenance continuing afterwards. I would guess that this condition can be hacked/goodharted, but does it matter? We are interested in coupling that is reliable given some assumed constraints, and this is what's important for practical purposes. If our purposes are not practical, e.g. we want to uncover the hidden order of reality or something like that, then we want our ontological frame and its probabilistic (and other) contents to be reliably coupled to this hidden structure. (This quickly breaks into the problems of intentionality that I'm not going to get into here.)
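
Schematically, the naive reduction plus the coupling patch could be written as follows (my restatement; $K_S$ for "S knows" and $C$ for whatever reliable-coupling condition one settles on are made-up notation):

$$K_S(X) \;\iff\; \big(1 - \epsilon < P_S(X) < 1\big) \,\wedge\, C\big(X,\; P_S(X) > 1 - \epsilon\big)$$

The worry about hacking/goodharting then localizes to whatever we substitute for $C$.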

Can we think of knowledge as being sufficiently certain that it doesn't make sense to actively doubt it or express uncertainty? But what does "sufficiently certain" mean? It also depends on the context, i.e. how important getting things right is for your actions, or the degree to which one can still acquire more information.

I don't think it's about sufficient certainty. Quantified/quantifiable (e.g. probabilistic) uncertainty is not a basic notion of a human mind in terms of which certainty is defined (as an extreme case or something like that). Introspectively, they seem to have different "types". Moreover, when I'm betting on Manifold/Metaculus, having credence of 99% about $X$ feels different from knowing $X$, although I learned to question my apparent knowledge of $X$ and even then bet as if I believed $P(X) \leq 0.99$. Probabilism is a T2-ish wrapper that puts T1-ish and "less T2-ish" certainties and uncertainties (and perhaps other propositional/representational-ish attitudes (like spelled out confusions?)) into a common format.

This does seem like a good insight.

The same analysis can probably be applied to talking about propositions. A proposition is a T2-ish wrapper/perspective constructed/applied when perceiving one's credence or subjectively certain belief.

An alternative account would be that (the seeming of) knowledge is acting as if one's beliefs had no uncertainty in them. The probabilities round up to 1 for action purposes. This is not right, because if, given actions $A$ and $B$, I believe $P(A \succ B) = 0.6$ and $P(A \prec B) = 0.4$ and can only do $A$ or $B$, I will do $A$ because $P(A \succ B) > P(A \prec B)$, but I don't know that $A \succ B$.
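
Concretely (the probabilities are the ones from the example above):

```python
# Acting on the higher credence is not the same as knowing: we pick A because
# 0.6 > 0.4, yet a 0.6 credence is nowhere near knowledge that A ≻ B.
p_A_better = 0.6  # P(A ≻ B)
p_B_better = 0.4  # P(A ≺ B)

choice = "A" if p_A_better > p_B_better else "B"
print(choice)  # "A"
```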

Footnotes

  1. Though according to Gettier Was Framed! by Machery et al. (2018), subjects asked to judge whether a given Gettier case constitutes knowledge exhibit framing/ordering effects. (I haven't read this actual paper but generally trust Machery's experimental rigor.)

  2. See SEP: The Analysis of Knowledge.

  3. I'm saying "may" because people's intuitions on this stuff differ quite a bit. See Philosophy Within Its Proper Bounds by Machery and the studies referenced therein (ctrl+f "truetemp").

  4. See here for a decision-theoretic example involving counterfactual mugging.

  5. I'm deliberately avoiding talking about the "facts/truths about the World" because this kind of language seems to presume some kind of point-of-view-of-the-universe or pre-parsing of the World … at least, talking about "aspects/factors/features of the World" seems to presume this less.

  6. Whether it's standard Bayesianism or radical probabilism or something else.