Cool terminology, but I don't think this really applies to LLM-generated code.
"Based on a fundamentally incorrect model" is what I would call code by a new coworker or intern who didn't stop to ask questions. The model being fundamentally wrong means it's cognitively taxing to refactor, but you can at least reason about it, and apply band-aid fixes if it somehow flows to prod.
With LLM code, you're lucky if there's any ostensibly identifiable model underlying the cryptic 10,000-line PR corresponding to the statistically extrapolated token stream flowing out of the matrix-multiplication brick. And you'd hope the person raising that PR tried to create that mental model, instead of taking the easy way out.
Strictly speaking, there is no privileged position to argue for here. If the Ptolemaic model gives the same observable results as the heliocentric model, then it's the same system, just modelled from our perspective on Earth.
Indeed, this might even be more valuable to you if you wanted to show the motion of the planets and constellations relative to Earth directly, rather than the sun. Here I'm thinking of a physical model that might cycle through the zodiac, for example.
If this complexity costs you nothing to build and maintain, and is automatically validated, then the mechanism under the hood probably isn’t something you care about very much.
It seems like there's some confusion here between the Graeco-Roman astronomer Claudius Ptolemy (c. 100 – 160s/170s AD) and the Macedonian general/pharaoh Ptolemy I Soter (c. 369/68 BC – January 282 BC).