Marvin Gabler

Cheap physics changes everything

10 Apr 2026

Human progress is an iteration loop. Observe, hypothesize, build, test, fail, repeat. The bottleneck has never been ideas. It’s how long each cycle takes when you’re working with atoms instead of bits.

Testing a new jet engine design means building physical prototypes, booking wind tunnel time, running flight tests over years. Developing a new alloy means synthesizing compounds, running stress tests, and waiting months for failure analysis. Designing a drug candidate means simulating molecular interactions that take billions of compute hours, then still spending a decade in clinical trials.

The constraint is always the same: we cannot simulate reality fast enough to iterate quickly.

Why traditional simulation is stuck

Physics simulation runs on methods developed in the 1970s and 1980s. Finite element methods, finite volume methods, spectral methods. You take the governing equations (Navier-Stokes for fluids, Maxwell for electromagnetism, Schrödinger for quantum mechanics), discretize them on a grid, and solve numerically, step by step. Finer grid, smaller time step, more accuracy, more compute.

This creates a tradeoff that has not meaningfully shifted in forty years: you can have accuracy or speed, not both.
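To make that concrete, here is a minimal sketch of the explicit finite-difference pattern, using 1D heat diffusion as a toy stand-in for the heavier solvers. The stability limit is the point: halve the grid spacing and the allowed time step shrinks by a factor of four, so the cost of refinement compounds.

```python
# A toy explicit finite-difference solver for 1D heat diffusion (u_t = alpha * u_xx),
# illustrative only. Periodic boundaries via np.roll keep the sketch short.
import numpy as np

def step_heat_1d(u, dx, dt, alpha=1.0):
    """Advance the field u by one explicit time step."""
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * alpha * lap

nx = 256
dx = 1.0 / nx
dt = 0.4 * dx**2                       # explicit stability limit: dt <= dx^2 / (2 * alpha)
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-((x - 0.5) ** 2) / 0.01)   # initial temperature bump

for _ in range(10_000):                # finer grid -> smaller dt -> many more steps
    u = step_heat_1d(u, dx, dt)
```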

In practice, engineers fill their simulations with approximations. They assume uniform resolution everywhere, same detail for empty sky and complex thunderstorms. They coarsen grids, parameterize small-scale processes, and insert trial-and-error constants that work but nobody can fully explain. A professor I know once said: “The biggest crimes are committed inside weather models.” He was only half joking.

Consider: a single global weather forecast on a traditional supercomputer consumes roughly 8,400 kWh of energy, takes one to two hours, and costs between €1,000 and €20,000. A learned model on a single GPU does the same job in about ten minutes for under $15, using around 0.25 kWh. More than four orders of magnitude, in energy alone.
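A back-of-envelope check of those figures (the numbers are the ones quoted above):

```python
# Back-of-envelope check on the energy gap, using the figures quoted above.
import math

traditional_kwh = 8400     # one global forecast on a traditional supercomputer
learned_kwh = 0.25         # the same forecast with a learned model on one GPU

ratio = traditional_kwh / learned_kwh
print(f"{ratio:,.0f}x less energy, about {math.log10(ratio):.1f} orders of magnitude")
# -> 33,600x less energy, about 4.5 orders of magnitude
```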

A different approach works

A cat jumping between rooftops does not solve the Navier-Stokes equations for every air molecule in her path. She uses some internal, higher-level representation that captures the essential dynamics of the physical world, enough to land exactly where she intends. Sometimes you need molecular detail. Sometimes you don’t. The trick is knowing when to go deep and when to stay abstract.

This is, roughly, what learned physics models do. Instead of encoding equations by hand and solving them on a grid, you train a model on observed or simulated data, and it discovers what matters for each prediction. It represents the physics internally at whatever level of abstraction the problem requires.
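A minimal sketch of that idea, assuming you already have paired snapshots of a system at consecutive times, from observations or from a classical solver. The architecture, shapes, and names here are illustrative, not any specific published model:

```python
# A sketch of a learned simulator: a network that maps the state of a system at
# time t to its state at t + dt, trained on paired snapshots.
import torch
import torch.nn as nn

class SurrogateStepper(nn.Module):
    """One learned time step: state_t -> state_{t+dt}."""
    def __init__(self, n_channels=4, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_channels, width, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(width, n_channels, kernel_size=3, padding=1),
        )

    def forward(self, state):
        return state + self.net(state)          # predict the change, not the full state

model = SurrogateStepper()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(state_t, state_t_plus_dt):
    """state_t, state_t_plus_dt: tensors of shape (batch, channels, height, width)."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(state_t), state_t_plus_dt)
    loss.backward()
    optimizer.step()
    return loss.item()
```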

LLMs proved this works for text. Instead of encoding grammar rules, syntax trees, and knowledge graphs by hand, you train on a large corpus and the model learns the structure. The resulting model is fast, it generalizes, and it improves with scale.

The same principle applies to physical systems. A neural network trained on decades of atmospheric observations can learn the governing dynamics of the atmosphere without anyone encoding the equations. And these models transfer across domains. Architectures trained on atmospheric dynamics handle airfoil flows and shock waves with minimal finetuning. The physics transfers because the governing equations, though they belong to different families of partial differential equations, share deep structural similarities.

The evidence is already here.

What happens when forward physics becomes cheap

If a simulation that used to take a week takes ten seconds, the workflow changes qualitatively.

An engineer who tests 10 designs because each simulation takes a week will test 10,000 when the simulation takes seconds. They let an optimization algorithm explore the design space continuously.
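In code, the shift looks roughly like the sketch below. The `surrogate` function is a stand-in for a trained learned model that maps design parameters to a predicted metric; the toy objective exists only to make the snippet runnable:

```python
# Screening 10,000 candidate designs with a fast surrogate.
import numpy as np

def surrogate(params):
    """Stand-in for a learned forward model: design parameters -> predicted drag."""
    return np.sum((params - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(10_000, 8))   # 10,000 designs, 8 parameters each
scores = surrogate(candidates)                          # seconds, not ten thousand weeks
best_design = candidates[np.argmin(scores)]
```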

And the problem flips. Instead of asking “what will happen if I do X?”, you ask “what must I do to achieve Y?”

You search the space of possibilities computationally instead of physically. Entire R&D cycles that take years compress into hours.
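When the surrogate is differentiable, the inverse question can be attacked directly with gradient descent through the model. A sketch, with an untrained placeholder network standing in for a real trained surrogate:

```python
# Inverse design by gradient descent through a differentiable surrogate.
import torch

surrogate = torch.nn.Sequential(               # placeholder for a trained forward model
    torch.nn.Linear(8, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
)
target = torch.tensor([0.42])                  # the outcome Y we want to achieve

design = torch.zeros(8, requires_grad=True)    # the X we are solving for
optimizer = torch.optim.Adam([design], lr=0.05)

for _ in range(500):
    optimizer.zero_grad()
    loss = (surrogate(design) - target).pow(2).sum()
    loss.backward()
    optimizer.step()

# `design` now holds parameters the surrogate predicts will achieve the target.
```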

Weather forecasting stops being a twice-daily batch job and becomes a continuous probabilistic stream. Where traditional models afford fifty ensemble runs, learned models generate thousands in the same time. Drug discovery stops looking like a decade-long pipeline and starts looking like a search problem. Materials science moves from trial and error to systematic exploration.
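For the ensemble point, cheap forward passes turn ensemble members into just another batch dimension. A sketch, with `stepper` as a placeholder for a trained one-step model:

```python
# Perturb the initial state, roll every member forward, read off the spread.
import torch

stepper = lambda state: state               # placeholder for a trained one-step model
initial_state = torch.randn(4, 64, 64)      # a few atmospheric fields on a coarse grid

n_members, noise_scale, n_steps = 1000, 0.01, 40
ensemble = initial_state + noise_scale * torch.randn(n_members, *initial_state.shape)

for _ in range(n_steps):                    # all members advance together
    ensemble = stepper(ensemble)

spread = ensemble.std(dim=0)                # per-gridpoint forecast uncertainty
```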

The limitations are real

Learned models are data-hungry. They can struggle with rare events and out-of-distribution scenarios. Interpretability is harder than with numerical solvers. Enforcing conservation laws and physical constraints exactly, not approximately, is an active research problem.
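One common partial answer is a soft physics penalty in the training loss: it discourages violations but does not eliminate them, which is why exact enforcement is still open. A sketch, reusing the shapes from the training example above with an illustrative conserved quantity:

```python
# A soft conservation penalty added to the data loss. It nudges the model toward
# conserving a global quantity (total mass here, illustratively) but offers no
# exact guarantee.
import torch
import torch.nn.functional as F

def physics_aware_loss(model, state_t, state_t_plus_dt, lam=0.1):
    pred = model(state_t)
    data_loss = F.mse_loss(pred, state_t_plus_dt)
    # the total of the conserved field should not change between input and prediction
    conservation_gap = (pred.sum(dim=(-2, -1)) - state_t.sum(dim=(-2, -1))).abs().mean()
    return data_loss + lam * conservation_gap
```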

When a physics model is wrong, it is wrong in quantifiable ways. You can check energy conservation, compare against observations, bound the error. Language models hallucinate in domains where truth is often subjective. Physics models are wrong in domains where you can measure how wrong.
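That difference is easy to operationalize. A sketch of the kind of diagnostics you can attach to every prediction, with all arguments as placeholders for consecutive predicted states and for a forecast with matching held-out observations:

```python
# Two of the checks mentioned above, as plain functions.
import numpy as np

def conservation_error(before, after):
    """Relative drift in a quantity that should be conserved (e.g. total mass)."""
    return abs(after.sum() - before.sum()) / abs(before.sum())

def rmse(predicted, reference):
    """Root-mean-square error against observations: a number you can bound and track."""
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))
```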

Numerical methods will not get 1000x faster. Hardware follows Moore’s law at best. Learned models benefit from better architectures and more data simultaneously. The amount of physical data being generated, from satellites, sensors, simulations, instruments, is growing exponentially.

The shift is obvious. Which domains it reaches first and how fast it propagates are the open questions.