petri-nets ode game-theory tic-tac-toe rate-constants

The Incidence Reduction

In the Tic-Tac-Toe Model post, we ran ODE simulations with mass-action kinetics to evaluate board positions. The scores for the empty board came back as floats:

Center (1,1):  1.27
Corners:       0.95
Edges:         0.63

These values emerged from the net topology alone — no hand-coded heuristics, no game tree search, no training data. The ranking (center > corners > edges) matches every game AI textbook.

But the values are suspiciously proportional. Divide each by the smallest:

1.27 : 0.95 : 0.63 ≈ 4 : 3 : 2

The ODE is tracking three integers. Not computing them exactly — the ratios are approximate, within a few percent — but recovering the same ranking and grouping that the integers predict.

Uniform Rates as a Probe

The key technique: set every transition's rate constant to 1.0 and run the ODE. With uniform rates, mass-action dynamics have no bias from tuning — the only signal is topology. Whatever structure the net has gets expressed as differential flow toward terminal states.

For tic-tac-toe, the terminal states are win transitions — sinks in the net. Every move transition feeds tokens toward pattern collectors, and pattern collectors feed tokens into WinX or WinO. With all rates equal, the steady-state token accumulation in the win places reflects how many paths feed each position into a win.

The integers behind the ODE are the incidence degrees — the number of arcs from each position's history place to downstream win transitions:

 3  2  3
 2  4  2
 3  2  3

The center participates in 4 win patterns (row + column + 2 diagonals). Corners participate in 3 (row + column + 1 diagonal). Edges participate in 2 (row + column). These are exact — they're arc counts in the graph, not simulation results.
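
These arc counts can be reproduced directly from the win lines. A minimal sketch in Python (helper names are illustrative, not go-pflow's API):

```python
# Reproduce the incidence-degree grid: for each cell, count how many
# of the 8 win lines (3 rows, 3 columns, 2 diagonals) pass through it.
def win_lines():
    rows = [[(i, j) for j in range(3)] for i in range(3)]
    cols = [[(i, j) for i in range(3)] for j in range(3)]
    diags = [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]
    return rows + cols + diags

def incidence_degrees():
    deg = {(i, j): 0 for i in range(3) for j in range(3)}
    for line in win_lines():
        for cell in line:
            deg[cell] += 1
    return deg

deg = incidence_degrees()
for i in range(3):
    print(" ".join(str(deg[(i, j)]) for j in range(3)))
# prints the grid above: 3 2 3 / 2 4 2 / 3 2 3
```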

The Same Predictive Behavior at Any Scale

The incidence degree formula generalizes to any N×N board whose win lines are the full-length rows, columns, and diagonals. Each position at (i, j) gets one arc for its row, one for its column, and one for each diagonal it lies on:

  degree(i, j) = 2 + [i = j] + [i + j = N − 1]

So the only possible degrees are {4, 3, 2}, and degree 4 only occurs at the center of odd-sized boards. Running the ODE with uniform rates from 3×3 to 7×7 confirms that the same predictive behavior holds at every scale:
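
A quick check of this claim, under the same assumption that win lines are full rows, columns, and diagonals:

```python
# Closed-form degree: one arc for the row, one for the column, plus one
# for each diagonal the cell sits on (main: i == j, anti: i + j == N - 1).
def degree(i, j, n):
    return 2 + (i == j) + (i + j == n - 1)

for n in (3, 5, 7):
    degs = sorted({degree(i, j, n) for i in range(n) for j in range(n)})
    print(n, degs, degree(n // 2, n // 2, n))  # degrees present, center degree
# 3 [2, 3, 4] 4
# 5 [2, 3, 4] 4
# 7 [2, 3, 4] 4
```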

N=3:  degree 4 ratio 1.960 (expected 2.000, err  -2.0%)
      degree 3 ratio 1.445 (expected 1.500, err  -3.6%)

N=5:  degree 4 ratio 1.615 (expected 2.000, err -19.2%)
      degree 3 ratio 1.279 (expected 1.500, err -14.7%)

N=7:  degree 4 ratio 1.734 (expected 2.000, err -13.3%)
      degree 3 ratio 1.355 (expected 1.500, err  -9.7%)

The ODE ratios approximate the integer ratios closely for small boards and less closely for larger ones, because more positions compete for flow through shared win transitions. But three properties hold at every size tested:

  1. Same degree, same score — positions with equal incidence degree get equal ODE scores
  2. Correct ranking — higher incidence degree always produces higher ODE score
  3. Move selection works — the ODE always recommends the same move as arc counting

The incidence degree is an exact predictor of ODE ranking. The ranking is what matters for move selection — and it never breaks.

Reverse-Engineering Rate Constants

This is where the technique becomes practical. For tic-tac-toe, we already know the model — we built it. But for models extracted from event logs via process mining, or for complex models where the strategic structure isn't obvious, the uniform-rate ODE probe reveals the topology's inherent weighting.

The algorithm:

  1. Set all rates to 1.0 — remove any manual tuning
  2. Run the ODE to steady state — let the topology speak
  3. Read the terminal accumulations — token flow into goal/win/completion places
  4. Normalize to integers — divide by the smallest, round to nearest whole number
  5. Assign as rate constants — these are the topology-derived rates
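
The five steps can be sketched end to end. This is a minimal Python sketch, assuming the decoupled accumulator form ẋᵢ = 1 - nᵢxᵢ (unit inflow, nᵢ drain arcs); the place names echo the tic-tac-toe output but are illustrative, not a real go-pflow API:

```python
# Minimal uniform-rate probe on decoupled accumulators.
# Each accumulator obeys x' = 1 - n*x, where n is its drain (win-arc) count.
def probe(drain_counts, dt=0.01, steps=5000):
    x = {k: 0.0 for k in drain_counts}      # step 1: uniform rates baked in
    for _ in range(steps):                  # step 2: Euler-step to steady state
        for k, n in drain_counts.items():
            x[k] += dt * (1.0 - n * x[k])
    # steps 3-5: read accumulations, invert, round to integer rates
    # (the equilibrium is 1/n, so inverting recovers the drain count exactly)
    return {k: round(1.0 / v) for k, v in x.items()}

rates = probe({"x_play_11": 4, "x_play_00": 3, "x_play_01": 2})
print(rates)  # {'x_play_11': 4, 'x_play_00': 3, 'x_play_01': 2}
```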

For the tic-tac-toe model, this recovers:

x_play_11 (center): rate = 4
x_play_00 (corner): rate = 3
x_play_01 (edge):   rate = 2

These are the same values that the rate auto-derivation algorithm in go-pflow computes by traversing the bipartite graph directly. For simple nets with independent terminals, the ODE probe and the graph traversal converge to the same answer — the ODE just takes the scenic route. For larger or more complex nets, the ODE probe still gives a useful starting point for rate constants, even if the ratios don't round to clean integers.

What the Integers Mean

The incidence degree has a precise interpretation: it counts the number of independent paths from a candidate action to a goal state. In game terms, it measures how many ways this move can contribute to winning.

For tic-tac-toe, the center feeds 4 win transitions, each corner 3, and each edge 2: four, three, and two distinct ways the move can contribute to a win.

For a resource model with completion states, the same count measures how many production paths a resource participates in. For a workflow with terminal states, it measures how many routes to completion pass through a given activity.

The uniform-rate ODE probe generalizes: for any Petri net with designated terminal transitions, it recovers the connectivity structure that determines strategic or operational value.

Absorbing the Search Tree

Here's the thing that's hard to see at first: the Petri net has absorbed the search tree.

In classical game AI, you evaluate a position by searching. Build a tree of all possible move sequences. Propagate values from leaf nodes (wins, losses, draws) back up to the root. Minimax, alpha-beta, MCTS — they all walk the tree, one branch at a time, accumulating evidence about which moves lead to good outcomes.

The analysis net doesn't walk anything. It encodes all the constraint structure — every win line, every path to victory — as topology. The search tree's branching factor becomes arc multiplicity. The tree's depth becomes path length through the net. The tree's leaf evaluations become terminal transitions. The entire combinatorial structure that search would explore is baked into the graph before a single token moves.

Then mass-action kinetics does the rest. Tokens flow from sources through accumulators into drains, all in parallel, all simultaneously. Every path is "explored" at once — not as a sequential traversal but as concurrent fluid flow. The ODE doesn't choose which branch to investigate. It pressurizes all of them, and the steady state reflects the aggregate result.

This is why the center cell scores highest. It's not that the ODE "figured out" the center is best. It's that the center has 4 drain transitions (4 win lines) pulling tokens out of its accumulator, while edges have only 2. More drains mean faster depletion, lower equilibrium concentration, and — after inversion — higher strategic value. The topology is the evaluation.

The decoupling lemma from the paper is what makes this exact rather than approximate. Because each accumulator's ODE depends only on its own concentration and its drain count — ẋᵢ = 1 - nᵢxᵢ — no information leaks between accumulators. The continuous relaxation doesn't blur the discrete structure. You can go from integers (arc counts) to reals (ODE concentrations) and back to integers (strategic values) without losing anything. The search tree's dynamics are fully absorbed into the net's structure, and the ODE is just reading them back out.

Think of it this way: minimax compresses a search tree into a single value by exploring it. Incidence reduction compresses the same tree into a bipartite graph by encoding it — and then reads the value directly from the encoding. The graph is the answer. The ODE is just the proof that the graph is right.

The Net as a Lens

In category theory, a lens is a bidirectional map between a complex structure and a simpler view of it. The analysis net with strategic values in place is a lens — it sits between the game's full state space and a vector of strategic values.

Get: given any marking (board state), read strategic values directly from the net's topology. No search required. The drain arc counts are the view.

Put: make a move — update the marking — and the net's structure constrains the new values automatically. Blocked win lines drop out, remaining drain counts shift, new values emerge. The lens focuses on the updated state without reconstructing the search tree.

A search tree is an opaque structure you traverse. A lens is a structure you look through. The topology does the work that traversal used to do.

The decoupling property gives this lens a clean product structure. Each position's equation ẋᵢ = 1 - nᵢxᵢ is independent — a lens per place, composed in parallel. Updating one position's view doesn't touch the others. This is why dynamic evaluation works: inject a new state, and each position's lens independently resolves its new value from its remaining drain connections.

The net types are essentially different lens configurations over the same Petri net substrate. A GameNet lens views state through turn structure and win conditions. A ResourceNet lens views state through conservation laws and capacity. The mathematical object is the same — places, transitions, arcs — but the lens determines what you see through it.

Complexity

What does "absorbing the search tree" buy in computational terms?

The evaluation has two phases: build the analysis net, then read the strategic values from it.

Reading values: O(p). The decoupled ODE ẋᵢ = 1 - nᵢxᵢ has an analytical solution — the equilibrium is just xᵢ = 1/nᵢ. We don't need a numerical solver. Scan the incidence matrix, count drain arcs per accumulator, invert. One pass over the places.

Building the net: O(k · m) where k is the number of winning patterns (constraints) and m is the pattern length. For tic-tac-toe: 8 win lines × 3 cells = 24 arc insertions. For Connect Four: 69 win lines × 4 cells = 276. For poker hand ranking: 10 hand types × variable size.
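
The build cost for the grid games can be counted directly, since an arc is inserted per (line, cell) pair. A small sketch enumerating every k-in-a-row window on a w×h board:

```python
# Enumerate every k-in-a-row window on a w x h board; each line of
# length k contributes k arc insertions to the analysis net.
def lines(w, h, k):
    dirs = [(1, 0), (0, 1), (1, 1), (1, -1)]
    out = []
    for x in range(w):
        for y in range(h):
            for dx, dy in dirs:
                pts = [(x + i * dx, y + i * dy) for i in range(k)]
                if all(0 <= px < w and 0 <= py < h for px, py in pts):
                    out.append(pts)
    return out

for name, (w, h, k) in {"tic-tac-toe": (3, 3, 3), "Connect Four": (7, 6, 4)}.items():
    ls = lines(w, h, k)
    print(name, len(ls), "lines,", len(ls) * k, "arcs")
# tic-tac-toe 8 lines, 24 arcs
# Connect Four 69 lines, 276 arcs
```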

Compare this to search:

Method                Complexity                  Tic-tac-toe
Minimax               O(b^d)                      ~362,880 nodes
Alpha-beta            O(b^(d/2))                  ~thousands
Incidence reduction   O(k·m) build + O(p) eval    8×3 + 9 = 33 ops

The gap is dramatic for tic-tac-toe. But the honest question is: what determines k?

For grid games with linear win conditions — tic-tac-toe, Connect Four, Hex — k grows polynomially in the board dimension. An n×n board with n-in-a-row wins has k = O(n) lines per direction × O(1) directions. The analysis net stays manageable.

For poker hand ranking, k is fixed at 10 hand types regardless of the number of players or betting rounds. The constraint structure is inherent to the card game's definition.

For games with emergent constraints — chess tactics, Go territory, complex strategy games — k becomes the problem. You can't enumerate checkmate patterns without effectively searching for them. The constraint structure isn't declared in the rules; it emerges from interaction. In these cases, k approaches the game tree size, and the technique loses its advantage.

The boundary is structural: incidence reduction is efficient when the win conditions are declared (lines on a grid, hand rankings, resource thresholds) rather than emergent (tactical combinations, positional advantage). Declared constraints give you a polynomial-sized analysis net. Emergent constraints push k back toward exponential, and you're better off searching.

This matches the boundary between counting and simulation from a different angle. Independent sinks with declared constraints → counting in O(k·m + p). Interacting resources with emergent structure → simulation, or search, at higher cost.

Dynamic Evaluation

The empty board is the base case. The real utility comes from injecting a live state and recomputing.

When a position is occupied, its token has been consumed — it drops out of the flow calculation. The incidence degree is now computed only against still-reachable win transitions. A corner that participates in 3 win patterns on an empty board might connect to only 1 if the opponent has blocked the other two.

Given any board state:

  1. Identify which win transitions are still reachable (not blocked by opponent pieces)
  2. For each empty position, count the number of reachable win transitions it connects to
  3. The highest count is the best move
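
The three steps above can be sketched as a graph query. This is an illustrative Python sketch for X's move selection (not go-pflow code); ties are broken by scan order:

```python
# Dynamic evaluation for X: keep only win lines not blocked by O,
# count reachable lines per empty cell, and take the maximum.
LINES = ([[(i, j) for j in range(3)] for i in range(3)] +             # rows
         [[(i, j) for i in range(3)] for j in range(3)] +             # columns
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals

def best_move(board):  # board: list of 3 strings, cells in {'X', 'O', '.'}
    live = [ln for ln in LINES if all(board[i][j] != 'O' for i, j in ln)]
    scores = {(i, j): sum((i, j) in ln for ln in live)
              for i in range(3) for j in range(3) if board[i][j] == '.'}
    return max(scores, key=scores.get), scores

# X holds a corner, O has taken the center: O blocks the center's row,
# column, and both diagonals, so the open corners score highest.
move, scores = best_move(["X..",
                          ".O.",
                          "..."])
print(move, scores[move])  # (0, 2) 2
```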

For simple nets, this is integer counting. For complex nets where resource competition and multi-hop connectivity create non-trivial dynamics, the ODE simulation resolves what counting cannot. But the mechanism is the same — evaluate positions by their connectivity to terminals, weighted by the current state.

The Boundary Between Counting and Simulation

Tic-tac-toe reduces to {4, 3, 2} because the win transitions are independent sinks. No position competes with another for shared resources on the path to winning. The incidence structure has no dynamics — just a direct count.

The coffee shop is different. Multiple drink recipes compete for shared ingredients (beans, water, milk). The espresso and latte transitions both consume beans, creating a bottleneck the ODE must resolve. The steady-state values aren't integers — they're rational numbers reflecting the resource competition. The ODE earns its keep.

Texas Hold'em is different again. Multi-phase betting, role-based access, and conditional guards create a net where the connectivity to terminal states depends on dynamic state that can't be read off the static graph. The ODE must simulate the actual flow.

The boundary is clear: when terminal transitions are independent sinks with no shared upstream resources, incidence degree counting produces the same move recommendations as ODE simulation. When paths interact — shared resources, mutual exclusion, conditional guards — the dynamics matter and the ODE is doing real work.

Recognizing which side of this boundary a model falls on tells you whether you need a solver or a graph traversal. For model builders, the uniform-rate probe is the diagnostic: run it, check if positions with the same arc count get the same score, and you know immediately whether the model's strategic structure is purely topological or genuinely dynamic.

Implications for Model Building

The incidence reduction is a practical tool for model development:

Validation. If you build a game model and the uniform-rate ODE groups positions correctly by incidence degree — same degree, same score; higher degree, higher score — your terminal transitions are correctly independent. If the grouping breaks, there's unintended resource coupling somewhere.

Rate discovery. For models extracted from data (via process mining or manual construction), the uniform-rate probe gives you a starting point for rate constants that reflects the topology's inherent structure. You can then adjust from this baseline using observed data, rather than guessing rates from scratch.

Model simplification. If the ODE ranking matches the incidence degree ranking, you may not need the ODE at all for move selection. Replace the simulation with a graph query — faster, exact, and easier to verify. Reserve the solver for the parts of the model where dynamics genuinely matter.

The Petri net is the domain knowledge. The topology encodes the rules. The incidence structure encodes the strategy. And sometimes, three integers are all you need.

Further Reading

For how ODE analysis combines with tropical and zero-knowledge proofs to discover structural boundaries: Earned Compression

For the categorical structure that makes all of this work: Symmetric Monoidal Categories
