Playbook:

Getting the Words Right

A practical playbook to naming work, modeling endurants and perdurants, and making complex product operations legible without harmful flattening.

Chapter 3

Static vs. Dynamic Optimization

Fixed objectives versus search, learning, and adaptation

Summary: Treat product development as a dynamic optimization problem, not a static one. The difference is not academic. It changes how you define success, how you make decisions, and how you structure work. The previous note showed why product systems are harder to model: the relevant things evolve, the processes overlap, and time and state matter more. This note adds a second lens. Once the landscape itself is moving, the problem is not just harder to represent. It is also a different kind of optimization problem.

The Core Distinction

A static optimization problem assumes:

•the objective is known
•the constraints are fixed
•the solution can be computed

A dynamic optimization problem assumes:

•the objective evolves
•constraints can be introduced, removed, or reshaped
•the solution is discovered over time

In product work, you are usually not solving for a fixed answer. You are navigating a moving landscape. In more practical terms, static framing assumes you can define the goal, lock the constraints, make a plan, and then execute. Dynamic framing assumes you will learn what matters while doing the work, revise constraints to shape the search, and update decisions as new information appears.

| Aspect | More static optimization | More dynamic optimization |
| --- | --- | --- |
| Definition | The objective and constraints are mostly known up front and stay stable long enough to solve against them. | The objective, constraints, or both may shift as the work unfolds and new information appears. |
| Objective function | Usually treated as fixed during the work. | Often revised as the team learns what matters or what is feasible. |
| Constraints | Mostly known in advance and expected to hold. | Can be introduced, relaxed, tightened, or reinterpreted over time. |
| Solution approach | More planning-heavy, analytic, and compute-oriented. | More adaptive, iterative, and learning-oriented. |
| Nature of the problem | Clearer boundaries and a more stable landscape. | Higher uncertainty, weaker initial definitions, and a changing landscape. |
| Decision making | Decisions are more front-loaded. | Decisions are revisited as signals, feedback, and conditions change. |
| Best fit | Stable environments with relatively dependable inputs. | Evolving environments where search and adjustment are part of the work. |
| Flexibility | Lower once the problem is framed correctly. | Higher because the framing itself may need to change. |
| Example | Tune a manufacturing line with known capacity and safety limits. | Explore product strategy or optimize a portfolio as conditions keep changing. |

Other Names You May Hear

Different domains describe similar ideas in different ways:

| Optimization framing | Roughly similar ideas |
| --- | --- |
| Static optimization | planning, prediction, upfront design, deterministic systems |
| Dynamic optimization | learning, adaptation, exploration, control systems |
| Objective function | success metric, goal, value function |
| Constraints | guardrails, policies, enabling constraints |
| Solution | plan, roadmap, implementation |

These are not exact matches, but they help translate between theory and practice.

Operational Difference

Use this test:

•If you can define the objective, constraints, and solution before starting, and you have very high confidence they will all hold, you are in a more static problem.

•If any of those change meaningfully as you proceed, you are in a more dynamic problem.

Here is a rough gradient:

| Case | Objective | Constraints | Solution shape |
| --- | --- | --- | --- |
| Very static: manufacturing throughput tuning | Increase throughput by a known amount | Mostly fixed: machine capacity, labor, safety limits | Choose and tune from a fairly well-understood solution space |
| More mixed: service operations improvement | Improve incident response and reduce customer pain | Some fixed, some redesignable: tooling, handoffs, ownership, alerting | Partly known upfront, partly discovered through investigation |
| Highly dynamic: new product or strategy work | The goal itself may shift as you learn | Useful constraints may need to be introduced gradually | Emerges through exploration, interpretation, and adjustment |

Most consequential product work sits closer to the second and third cases than the first.

Example: Amazon’s Weekly Business Review

A good example of dynamic optimization in practice is Amazon’s WBR. It is often described as a metrics meeting, but it is more accurate to think of it as a system for continuously updating a model of the business.

•leaders review hundreds of metrics weekly

“The metrics in a WBR are not static — you are expected to add or discard controllable input metrics as they stop working ...”

As Cedric Chin argues in his piece on the Amazon WBR, it is best understood as a process control tool: a way to uncover and distribute the causal structure of a business so teams can act with better understanding. From a modeling perspective:

•the business is treated as something unfolding over time
•metrics act as signals of that unfolding
•weekly review creates a learning loop

This is dynamic optimization in practice.
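The loop can be sketched in code. This is a toy, assuming a scalar "predictive power" score per metric and a single threshold; real WBR practice is far richer than any threshold rule, and all names here are invented.

```python
# Sketch of a WBR-style review loop: the metric set itself is revised
# weekly, not fixed. Threshold and scoring are simplifications.

def weekly_review(tracked, candidates, predictive_power, threshold=0.3):
    """Return (updated metric set, retired metrics).

    Input metrics that no longer predict outputs are retired, and
    promising candidates are added. The set of controllable input
    metrics is itself the thing being optimized over time."""
    kept = {m for m in tracked if predictive_power(m) >= threshold}
    retired = tracked - kept
    added = {m for m in candidates if predictive_power(m) >= threshold}
    return kept | added, retired
```

The point of the sketch is where the mutability lives: not in the metric values, but in which metrics are tracked at all.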

Where Teams Get Into Trouble

Most issues come from treating dynamic problems like static ones. Common failure modes:

•locking the objective too early -> optimizing for the wrong thing
•treating constraints as fixed -> losing the ability to explore
•over-indexing on planning -> under-investing in learning
•chasing precision over direction -> local optimization, global confusion
•expecting one “right” solution -> ignoring viable alternatives

Back to the Framework

This connects directly back to the earlier notes in the series. In clearer, more stable systems, the relevant endurants are easier to identify, the constraints hold more consistently, and the perdurants are easier to model as bounded flows. Static optimization makes more sense there because the landscape is not moving very much while you work. In product work, the relevant endurants are often still being clarified, the constraints may be chosen or revised to shape the search, and the perdurant is the unfolding process of learning, deciding, shipping, interpreting signals, and adjusting. That is why the optimization problem is dynamic. The object of optimization is not fully settled when the work begins. Seen this way:

•learning is part of the optimization process
•enabling constraints shape the search space
•flexible objectives acknowledge that success criteria evolve
•long-term thinking means optimizing across time, not just for the next local gain
•multiple viable solutions are normal, not a sign that the team has failed to choose the one right answer

This is not just a different set of tactics. It is a different class of problem.

As Things Become More Dynamic:

•Focus on unfolding perdurants: Model the things that happen over time, not just the things that exist. That usually means execution, discovery, learning cycles, and decision changes. These are not secondary details. They are often the primary mechanism by which the system evolves.

•Model enabling constraints explicitly: Constraints are not just static rules. They are introduced, adjusted, and sometimes removed to shape how the system learns. Model when a constraint was introduced, what it was intended to do, how it changed over time, and what effect it had.

•Focus less on labels, more on trajectories: Whether something is called an epic, initiative, or project is often less important than how it unfolds. Shift from categorizing work toward understanding how it evolves, how it branches or converges, and how decisions shape its path over time.

•Treat objectives as evolving, not fixed: An objective is not always a stable target. Model how the objective was defined, how it changed, and what triggered the change. This preserves the context behind decisions and avoids rewriting history.

•Preserve change as first-class: When something changes, do not overwrite the past. Model the transition, the before and after, and the reason for the change. This allows the system to represent learning, not just current state.

Why This Matters

If you model a dynamic problem as static:

•your plans become brittle
•your metrics become misleading
•your teams optimize the wrong things

If you model it more accurately:

•you design for learning
•you adapt without thrashing
•you make better long-term decisions

Try This Now:

  • Write down a current product initiative.
  • Ask: which parts of this are actually fixed?
  • Then ask: what are we pretending is fixed that is not?
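One way to make "preserve change as first-class" from the list above concrete is a small change-log model. This is a sketch; the types and field names are hypothetical, not a prescribed schema.

```python
# Minimal sketch of change as first-class: revising an objective appends
# a Transition rather than overwriting history. All names are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transition:
    when: str      # e.g. an ISO date
    before: str
    after: str
    reason: str    # what triggered the change

@dataclass
class Objective:
    current: str
    history: list = field(default_factory=list)

    def revise(self, new, when, reason):
        """Record the transition, then move to the new objective."""
        self.history.append(Transition(when, self.current, new, reason))
        self.current = new
```

Because every revision carries a before, an after, and a reason, the model can answer "what did we believe then, and why did it change?" instead of only "what do we believe now?"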

Next

Continue reading

RAG Status, Endurants, and Perdurants
