Optimization
Questions about optimization

Question

What are the criteria for the concept of "optimization"?

  • Relationship to agency
    • Agents are (a kind of) optimizers?
    • Agency as a special kind of optimization?
    • Optimization as a [building block]/grounding for a utility function or something like that?
    • We want to measure (potential) optimization power of a process/agent/thing?
    • We want to measure the optimization of a state.
  • Relationship to "optimization" as used in non-agentic contexts, like
    • Optimization of a function.
    • Selection processes viewed as optimizers, e.g. evolution.
    • Local optimization of a function is kinda both/in-between.
Question

What are the default ways of talking about "optimization"? What are the problems with them?

  • Reliably leading the world into a small (improbable?) set of states.

  • Reliably pushing the world up some preference ordering.

  • Problems with all of them (I guess):

    • It's not clear what the timeline/[temporal horizon] (?) is over which you want to measure optimization.
    • What does "reliably" mean? If you use probabilistic measures, this comes down to something like expected reward maximization.
    • Imputing a ghost in the machine:
      • Everything can be viewed as an optimization process for whatever it's doing.
        • (The usual patch is to privilege interpretations of the objective with lower K-complexity; see the anti-intuitions below.)
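The first framing above ("leading the world into a small set of states") has a common quantitative sketch, in the spirit of Yudkowsky's "Measuring Optimization Power": measure how improbable the achieved region of outcome space is, in bits, relative to a baseline distribution. A minimal version follows (function and variable names are mine; the uniform baseline over outcomes is an assumption, and choosing that baseline is exactly the unresolved "reliably" problem above):

```python
import math

def optimization_power_bits(outcome, all_outcomes, preference_key):
    """Bits of optimization: -log2 of the fraction of possible outcomes
    ranked at least as high (by preference_key) as the achieved outcome,
    assuming a uniform baseline distribution over all_outcomes."""
    all_outcomes = list(all_outcomes)
    rank = preference_key(outcome)
    at_least_as_good = sum(1 for o in all_outcomes if preference_key(o) >= rank)
    return -math.log2(at_least_as_good / len(all_outcomes))

# Toy example: outcomes are scores 0..1023, the process achieves 1000.
# 24 outcomes (1000..1023) are at least as good, out of 1024,
# so the process exerted -log2(24/1024) ~= 5.4 bits of optimization.
bits = optimization_power_bits(1000, range(1024), lambda s: s)
```

Note that the measure says nothing about *how* the state was reached, which is why a ball settling at the bottom of a valley can score nonzero bits on it.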

It seems to me that the concept of "optimization" is underspecified. To make it meaningful, we need to give it scope. But scope plus an objective/[coherent trend/pressure towards something] is not enough (in my book), because then a ball rolling downhill still passes the test.

Question

Why do I care about something in the neighborhood of optimization?

  • I have a moderately strong intuition that something like optimization is a Thing.
    • Statements like "Ted Roosevelt was a stronger optimizer than his brother Elliott" are meaningful.
      • Although is this the best way to phrase this? (Best for what purpose though?) We could say that Ted was more agentic, effective, etc. The common thing/intention to be expressed behind all of them is that Ted got what he wanted more reliably than Elliott.
  • What feels wrong to me (anti-intuitions):
    • Talking about reliably pushing the world up some preference ordering.
    • Using K-complexity to select between various possible/plausible/coherent objects/criteria/targets/goals/objectives of the optimization process.
    • Overly broad/permissive definitions of "optimization" that include a rock rolling down a hill.
    • Strictly nominalist/instrumentalist positions that postulate this to be merely some kind of useful lens to view the world, without trying to explain the commonalities of these phenomena that make it appropriate to view them through this lens. (Dennett's intentional stance has the same problems.)
  • What feels right to me:
    • Including some kind of self-correction in the definition of optimization, though this self-correction needs to be stronger than "if you nudge the rock a little bit to the left, it still ends up in the same place because of the structure of the hill".
    • "Ontological" commitments:
      • Representation of the objective within the system?
        • This feels vaguely directionally on the right track but also too strong. First, I'm still confused about what a representation even is. Second, I want to say that the liver is optimized for its ~500 (!) functions, even though they're not represented anywhere in any form.
        • We need to find/construct a narrow path between the Scylla of "everything can be viewed as optimization as long as it's useful" and the Charybdis of "a thing is (categorically!) an optimizer iff its objective is written on a telopheme".
    • Optimization is always optimization for something. For some goal, objective, purpose, whatever you want to call it. I'll settle for "objective".
      • The objective can be complex, multifaceted, composed of smaller objectives, malleable/re-negotiable/changing/changeable (including due to reinterpretation).
    • I want to be able to distinguish between "X was optimized for Y" and "X looks as if it was optimized for Y but it was actually optimized for Z or not optimized for anything at all".
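The worry about naive self-correction can be made concrete. In the sketch below (all names are mine, and the dynamics are a deliberately simple stand-in), a ball in a quadratic valley under overdamped dynamics passes the obvious "nudge it and see whether it ends up in the same place" test, which is exactly why that test is too weak to carve out optimizers from rocks:

```python
def ball_downhill(x0, steps=200, dt=0.1):
    """Overdamped ball in a quadratic valley V(x) = x^2 with minimum at x = 0.
    Each step moves the ball a small distance down the local gradient."""
    x = x0
    for _ in range(steps):
        x -= dt * 2 * x  # follow -dV/dx = -2x
    return x

# Naive self-correction test: perturb the start, check the endpoint is unchanged.
unperturbed = ball_downhill(1.0)
perturbed = ball_downhill(1.0 + 0.3)  # "nudge the rock a little to the left"
# Both trajectories converge to ~0: the ball "self-corrects" under perturbation,
# so a definition built on this test admits the rock.
```

Whatever stronger notion of self-correction the definition ends up using, it has to distinguish this kind of structural convergence from whatever it is that thermostats, livers, and Roosevelts are doing.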