The term model barely appears without a prefix anymore. "Large Language Models" have reshaped how we think about software. But there's another kind of model — smaller, executable, and arguably more useful for building reliable systems.
A model is an abstract representation of a process: behaviors (actions) plus attributes (data). A good model makes a system predictable and easier to understand. A great model reduces code complexity.
LLMs are impressive at generating text and code. But they're black boxes. We can't inspect their reasoning. We can't prove properties about their behavior. We can't compose them predictably.
Small formal models — like Petri nets — offer the opposite tradeoff:
| Property | LLMs | Petri Nets |
|---|---|---|
| Inspectable | No | Yes |
| Composable | Awkward | Native |
| Provable properties | No | Yes |
| Executable | Via code gen | Directly |
| Human-readable | Sometimes | Always |
| Size | Billions of parameters | Dozens of nodes |
Petri nets hit a rare intersection: visual enough for humans to reason about, mathematical enough for rigorous analysis, directly executable, and naturally compositional.
Consider a coffee shop model. The entire system — inventory tracking, recipe constraints, capacity limits — fits in a handful of declarations:
```
cell beans 1000, water 10000, milk 5000, cups 100
func brew_espresso, brew_latte, brew_americano

arrow beans --(18)--> brew_espresso
arrow water --(30)--> brew_espresso
arrow cups --(1)--> brew_espresso
```
That's the model. From this topology, we get ODE simulation for capacity planning, conservation laws proving that resources are neither created nor destroyed, and deadlock detection for when resources run out. The generated app is a working interactive demo derived from a model small enough to fit on an index card.
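To make "directly executable" concrete, here is a minimal Python sketch of the firing rule for the espresso transition above. The places and arc weights come straight from the model; the rest is standard Petri net semantics, not pflow's actual runtime:

```python
# Executing the coffee shop net as plain token arithmetic.
# Places and arc weights are taken from the model above.

marking = {"beans": 1000, "water": 10000, "milk": 5000, "cups": 100}

# Input arcs: transition -> {place: tokens consumed per firing}.
arcs = {"brew_espresso": {"beans": 18, "water": 30, "cups": 1}}

def enabled(t, m):
    """A transition is enabled when every input place holds enough tokens."""
    return all(m[p] >= w for p, w in arcs[t].items())

def fire(t, m):
    """Firing consumes tokens along each input arc."""
    return {p: v - arcs[t].get(p, 0) for p, v in m.items()}

# Capacity planning by exhaustion: how many espressos until deadlock?
count = 0
while enabled("brew_espresso", marking):
    marking = fire("brew_espresso", marking)
    count += 1

print(count)  # → 55: beans run out first (1000 // 18 firings)
```

The loop terminates exactly when the transition is no longer enabled, which is the deadlock condition the text mentions, and the per-firing bookkeeping is the conservation law in miniature.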
When we treat the model as the artifact — not the code — everything changes. We sketch the topology, validate behavior with simulation, and extend monotonically. Code becomes a derived output.
Unlike LLM outputs, Petri nets give us formal properties we can check:
- **Reachability:** Given an input state, can the system reach a target state? This answers "is this workflow completable?"
- **Boundedness:** Does the system stay within limits? Can a queue overflow? Is this a closed system?
- **Liveness:** Can the network deadlock? Under what conditions does it halt?
- **ODE Signatures:** The math reveals structure that humans miss. In the tic-tac-toe model, the ODE solver produces scores of 4, 3, 2 for center, corner, and edge positions — from pure net topology, with zero game knowledge encoded. The structure is the strategy.
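The standard way to answer a reachability question is a breadth-first search over markings. A minimal Python sketch on a hypothetical two-place net (the net is illustrative, not one of the models discussed here):

```python
# Reachability by breadth-first search over the marking graph.
from collections import deque

# Each transition is a (consume, produce) pair of place->weight maps.
NET = {
    "t1": ({"p1": 1}, {"p2": 1}),  # move one token p1 -> p2
    "t2": ({"p2": 2}, {"p1": 1}),  # merge two p2 tokens into one p1
}

def successors(m):
    """Yield every marking reachable in one firing."""
    for consume, produce in NET.values():
        if all(m.get(p, 0) >= w for p, w in consume.items()):
            nxt = dict(m)
            for p, w in consume.items():
                nxt[p] -= w
            for p, w in produce.items():
                nxt[p] = nxt.get(p, 0) + w
            yield nxt

def reachable(start, target, limit=10_000):
    """Can `target` be reached from `start`? Explores at most `limit` markings."""
    freeze = lambda m: tuple(sorted(m.items()))
    seen = {freeze(start)}
    queue = deque([start])
    while queue and len(seen) < limit:
        m = queue.popleft()
        if m == target:
            return True
        for nxt in successors(m):
            if freeze(nxt) not in seen:
                seen.add(freeze(nxt))
                queue.append(nxt)
    return False

print(reachable({"p1": 2, "p2": 0}, {"p1": 0, "p2": 2}))  # → True
```

The same search answers boundedness questions: if the explored set keeps growing past any finite limit, some place is unbounded.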
These aren't abstract concerns — they're the bugs that kill production systems. With Petri nets, we can check them before writing implementation code.
Here's what we've learned building the pflow ecosystem: LLMs are most useful when constrained by formal structure. Not LLMs or models — LLMs for models.
Code-to-flow: Paste existing code in any language, get a validated Petri net. Claude reads the structure, identifies states and transitions, and outputs a formal model. The LLM handles the messy pattern recognition; the Petri net formalism enforces correctness.
Petri-Pilot: Go the other direction. Start from a model and generate a full interactive application — frontend, backend, state management. Claude writes the code, but the model constrains what it can produce. No hallucinated states. No impossible transitions.
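The guardrail idea fits in a few lines: before any generated action is accepted, it must name a declared transition that is enabled in the current marking. This is an illustrative sketch, not petri-pilot's actual API:

```python
# Validating LLM-proposed actions against a Petri net model.
# The function name and net below are hypothetical examples.

def validate_action(action, marking, arcs):
    """Reject anything that is not a declared, currently enabled transition."""
    if action not in arcs:
        return False, f"unknown transition: {action}"  # hallucinated action
    short = [p for p, w in arcs[action].items() if marking.get(p, 0) < w]
    if short:
        return False, f"not enabled, short on: {short}"  # impossible transition
    return True, "ok"

arcs = {"brew_espresso": {"beans": 18, "water": 30, "cups": 1}}
print(validate_action("brew_unicorn", {"beans": 18}, arcs))
# → (False, 'unknown transition: brew_unicorn')
```

Because every generated action passes through this gate, the code generator can be creative about presentation while the model stays in charge of behavior.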
The book: 12 chapters teaching Petri net techniques, with every example backed by executable models. The writing uses LLMs; the content is grounded in formal nets.
The model is the checkpoint between human intent and machine execution. It's small enough to inspect, formal enough to verify, and structured enough to constrain code generation.
The ecosystem now supports both directions:
Model-first (design new systems):

```
pflow.xyz (edit model) --> ODE simulation (validate) --> petri-pilot (generate app)
```

Code-first (understand existing systems):

```
existing code --> code-to-flow (extract model) --> pflow.xyz (inspect/edit) --> generate new code
```
Both pipelines keep the model as the source of truth. The model is where humans verify correctness. Code flows from it; code flows back to it.
Small models won't write your marketing copy. LLMs won't prove your system is deadlock-free. The interesting question was never which one wins — it's how they fit together.
LLMs are powerful pattern matchers constrained by nothing. Petri nets are precise formal systems that constrain everything. Put an LLM inside a formal model's guardrails and we get something better than either alone: systems that are both expressive and correct.
The model stays small. The guarantees stay real. The LLM does what it's good at — bridging the gap between informal human intent and formal machine specification — without being trusted to get the hard parts right on its own.