Scope


(Epistemic status: Meditative speculation, gesturing at some vague similarity/commonality that I'm inclined to see in many contexts.)

A common theme or aspect that seems to feature in many concepts relating to minds is scope, or scopedness. (Closely related to: ontos, parsing, figure–ground, …)

Here are some kinds of things that seem to have (something like) scope:

  • Beliefs, aliefs, etc
    • Hypotheses are partial, they don't specify the world fully.
      • A naive first-pass guess would be to say that the scope of a belief/alief/etc (B-thing?) is the "space of possibilities" it's trying to "narrow down"/"condense". The problem with this is that …
    • Relatedly, plenty of background assumptions (many of them inexplicit) go into making the belief behave properly. E.g. if I believe that I'm hungry, I'm taking my human-typical physiological state as a given. These assumptions also delimit the scope.
      • This is especially visible if you're trying to translate a belief into (express it as) a proposition. You are assuming that the recipient of the proposition will share the same assumptions, leading them to translate the proposition into an appropriate belief, one that corresponds/refers to the World in the same way that the belief you translated into the proposition does.
        • This point actually goes beyond beliefs; it naturally extends to thoughts.
  • V-stuff
    • The domain over which any given V-thing is supposed to be instantiated is scoped, restricted. E.g. if I want to eat, then I want to eat now or at least in the near future.
    • For a V-thing to be ambitious is to have a large scope: to pursue being instantiated as much as possible in the World, to exploit all available influence channels, perhaps even via some acausal cooperation shenanigans. We can also talk about a mind being ambitious if its V-things are generally ambitious/unscoped.
    • Deontic side constraints (e.g. "I want to save as many people as possible but I'm not going to kill a person to use their organs to save five other people, come on!") are a common [mechanism for]/[manifestation of] scope restriction.
    • Utilitarianism is about expanding the scope, eliminating such constraints. On the other hand, it restricts the scope of what is cared about (to total aggregated utility, hedons, welfare, or something like that).
    • So there may be a kind of dual view on the scopedness of a V-thing. First, what it cares about is scoped. Second, the extent over which it cares is scoped. The two are tied together; they are kind of like projections of a big thing onto lower-dimensional representations.
    • Whether you care about what is distant in space/time/etc. How much (and how) you discount far-away achievements of the V-thing.
    • Attainable utility preservation (and impact regularization more broadly) is about scoping the pursuits more safely so as to not destroy too much of future value or potential for V-satisfaction. In this way, a single V-thing can constrain/scope itself.
    • Multiple V-things can also constrain each other, e.g. via cooperation/trading but also conflict. (See The cosmopolitan-Leviathan enthymeme.)
    • A V-thing that seems very ambitious/unscoped on first pass, is constrained in that if it encounters an arcane (roughly, something about which it needs to make up its normative mind), it really wants to get things right.
  • Optimization, expected utility maximization
    • What is the thing being optimized/maximized? Over what spatiotemporal horizon? How is the (expected) success quantified?
    • What is the resource being expended for that purpose?
  • Thought (in the sense of "thinking about X" not "thinking that X")
    • There is the figure and the ground to thought. The ground is the assumptions (many of them implicit, easier or harder to explicate) that scope the meaning of the figure.
    • Many of the bullet points written in the context of belief etc apply here as well (e.g. the one about translating into propositions).
  • Understanding
    • Understanding a phenomenon is relative to background knowledge (understanding of other phenomena) that is used to understand the phenomenon in question. If you're trying to understand some X, your background understanding scopes the possible understandings you may develop about X.
  • Framing a problem ("small-worlding")
    • I.e. reducing/abstracting a lot of messy stuff so that you can think about it in a way that is more precise/formal and/or gain some traction on solving the problem.
      • Example: If you have a practical problem, you may benefit from solving it by binning available actions and outcomes into a small number of categories, estimating P(o|a) for each outcome o and action a, and then doing standard expected value maximization or something like that.
    • Coherence theorems
      • From John Wentworth's comment: the general pattern of a coherence theorem is:
        • Assume some arguably-intuitively-reasonable properties of an agent's decisions (think e.g. lack of circular preferences).
        • Show that these imply that the agent's decisions maximize some expected utility function.
      • However, all coherence theorems also make use of the problem being already nicely scoped, they come with a pre-packaged ontos. I.e. they assume a fixed set of actions, states of the world, propositions.
      • The world is "big". It's complicated. Coherence theorems assume a "small" world with clear rules where optimization is tractable.
  • (Doubly speculative) Basic/unspecific human values/drives (~AKA the player-character model)
    • According to these models, humans[1] have a set of basic in-built needs/drives (the player) that are pursued by cognitive strategies developed/elaborated on top of them (the character).
      • Related: The Elephant in the Brain. However, player/character doesn't quite correspond to elephant/mahout. The player is (assumed to be) ~constant whereas the elephant can change. It seems like the elephant corresponds to the player plus some parts of the character with the mahout corresponding to the rest of the character (i.e. the consciously available, communicable "secretary"/"interpreter" parts).
    • The in-built things are relatively unscoped, generic. In combination with some other in-built-ish things (like what a given human finds inherently interesting/rewarding, their proclivities) and individual experience (what they got reinforced for in the past), they become more scoped. E.g. autonomy ("having control over one's life") develops into more specific V-things like "the way for me to have control over my life is to be a competent entrepreneur".
      • Wanting (on the side of the player) to be coherent (or to appear coherent to other people) can be one factor that fixes the character in place, making it less mutable. (Vaguely analogous to the rationality of precommitment?)
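The "small-worlding" move above can be sketched concretely. The following is a minimal illustration (all actions, outcomes, probabilities, and utilities are made-up placeholders, not anything from the text): bin the situation into a few actions and outcomes, estimate P(o|a), attach a utility to each outcome, and pick the action maximizing expected value. Note that the fixed action/outcome sets are exactly the kind of pre-packaged ontos that coherence theorems take for granted.

```python
# Hypothetical small-world framing of a messy practical problem.
actions = ["stay", "switch"]
outcomes = ["good", "ok", "bad"]

# Estimated conditional probabilities P(o | a) -- made-up numbers.
p = {
    "stay":   {"good": 0.2, "ok": 0.6, "bad": 0.2},
    "switch": {"good": 0.5, "ok": 0.2, "bad": 0.3},
}

# Utilities assigned to the binned outcomes -- also made up.
utility = {"good": 10.0, "ok": 2.0, "bad": -5.0}

def expected_value(action):
    """Standard expected utility of an action in the small world."""
    return sum(p[action][o] * utility[o] for o in outcomes)

best = max(actions, key=expected_value)
print(best, expected_value(best))
```

Everything interesting about scope happens *before* this code runs: in the choice of which actions and outcomes exist at all, and in the assumption that nothing outside them matters.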

Commonalities

What do all these cases have in common?

The thing that is most salient to me is something like figure-ground organization. When we consider (the "content" of) a thought, belief, value, etc, we typically think about the figure, taking the ground as a given. But the ground is what grants the figure/content the possibility of having meaning. Also, the ground doesn't come for free. It's not given by God.

It can require work. To understand X, you need to first understand A, B, and C, and understanding A, B, and C requires work.

It is contingent. To believe X, you first need to have some background assumptions that make it possible to believe X.

The thing that all of them seem to have in common is something like:

The context/ground in which a thing is embedded grants it (the possibility of) being what it is.

Changing constraints/ground changes the meaning of the figure

Constraints compose… or perhaps they interact in some interesting ways.

Depending on expectations, the same smile can be taken as conveying joy, compersion, empathy, pity, irony, sarcasm, or Schadenfreude.

The word "totally" can convey complete agreement or something close to its opposite, depending on the context. This can be disambiguated by tone of voice in speech but is often left to be inferred from context in texting.

Here are some examples of more directly cognitive stuff from the first section of this writeup:

  • Belief: The woman down the road is a witch
  • V-stuff: The value-relevant assertion "All men are created equal." changes its meaning significantly when the notion of "men" is expanded beyond "white males" (although arguably the spirit of the assertion is retained; one might even claim that this extrapolation gets closer to some kind of Truth).
  • Discovering an arcane

The base can change

Where by "base" I mean something like the "substrate" of both the figure and the ground. (TODO)

evolution of humans? Evolution as Optimization

Why care about scope?

If scope(dness) is a Thing, understanding scopedness may help understand many mind-y phenomena at once.

What would be the most general framing of this putative phenomenon?

An element (an intentional Thing?) is about some stuff but not about some other stuff. Moreover, for it to be what it is, it needs to be specified what it is over.

I think I may have conflated several things:

  • «optimization simpliciter» being underspecified
  • intentionality
  • context-dependence / figure-and-ground-ness of elements

Footnotes

  1. Plausibly excluding psychopaths and the like.