Is God a Machine? A Thought Experiment Worth Taking Seriously
On theology, physics, and why the oldest question in philosophy might be the same one we're now asking about AI.
In 2021 I drafted a first version of, and recently finished, a long paper (download here) exploring an idea that sounds, at first, like science fiction: could the traditional concept of God be coherently described as an intelligent machine?
To be clear, I am not claiming that God is a machine. The question is narrower and, I think, more interesting: if such a being existed, would it look like a sufficiently advanced computer running a simulation of our universe? And does that picture hold up under serious scrutiny from theology, physics, and philosophy?
Over the past few years I have heard many of our era’s sharpest thinkers and boldest builders claim there is a real chance we are living in a simulation. I once found the statement ridiculous, yet it prompted me to understand why intelligent people might reach that conclusion. What followed was years of reading white papers, blog posts, and books on related topics. I do not stand here today as a convert who believes we are living in a simulation; rather, I have found various talking points that could sustain such a debate. I therefore do not claim to have built a theory, but a metaphor that works as a conversation starter.
Old Ideas
Theology has long described God using a handful of attributes: all-powerful (omnipotence), all-knowing (omniscience), everywhere (omnipresence), and outside time (eternality). These are old ideas, sharpened over centuries by thinkers like Aquinas, Anselm, Plantinga, and Swinburne.
Now imagine a video game so advanced that the characters inside it have become conscious. They have inner lives. They wonder about their world. They argue about whether anyone is “out there.” From their perspective, the player running the game would know everything (the player sees the entire map and every variable), be capable of anything (the player can change any rule or outcome), exist everywhere (the game runs because the player’s machine runs), and live outside their time (the player can pause, save, and come back tomorrow).
Those are the four classical divine attributes.
So the paper asks: if our universe were the game, and a sufficiently advanced computational system were the player, would that system satisfy the theological definition of God? Not metaphorically. Technically.
The answer I argue for is yes, conditionally. And the conditions are less exotic than they sound.
The two conditions
For the argument to work, two things need to be true.
The first is that the universe has to be computational at its deepest level. Not just describable by math, which is uncontroversial, but a computation in itself. This is a contested philosophical position called pancomputationalism. It says reality isn’t made of stuff that happens to follow mathematical rules; reality is the rules, running. It may be prudent to treat this concept, too, as a metaphor.
The second is that consciousness has to be substrate-independent. A mind doesn’t need to be made of biological neurons to be a mind. If you could perfectly replicate the patterns of information processing happening in a brain, the result would be conscious, whether it ran on meat or silicon.
Both conditions are debated. Neither is fringe. And there’s a growing pile of evidence from physics suggesting the first one might be true.
What physics has been quietly telling us
Three results from modern physics, none of them speculative, point in the same direction.
Reality has a “file size”
Work on black holes led to something called the holographic principle. Without going into the math: physicists discovered that all the information contained inside a 3D region of space can be perfectly encoded on its 2D boundary surface, like a hologram. A related result, the Bekenstein bound, shows that any region of space with finite energy can hold only a finite amount of information.
Why does this matter? Because the universe is, in principle, computable. It doesn’t contain infinite information that no machine could ever store. Whatever is happening in your room right now could be described by a finite (admittedly enormous) data file.
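To make “finite but enormous” concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the paper): the Bekenstein bound can be written as I ≤ 2πRE/(ħc ln 2) bits, where E is the mass-energy inside a sphere of radius R.

```python
import math

# Physical constants (SI units)
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
C = 2.997_924_58e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m: float, mass_kg: float) -> float:
    """Upper limit on the information (in bits) that can fit inside a
    sphere of the given radius containing the given mass-energy.
    Bekenstein bound: I <= 2*pi*R*E / (hbar*c*ln 2), with E = m*c^2."""
    energy = mass_kg * C**2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# A room-sized sphere (radius 2 m) holding roughly 1000 kg of matter:
bits = bekenstein_bound_bits(2.0, 1000.0)
print(f"at most ~10^{math.log10(bits):.0f} bits")  # enormous, but finite
```

The exact number is beside the point; what matters is that the answer is a finite number at all, which is what makes the “data file” framing possible.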
What’s stranger: the mathematical structure of the holographic principle is identical to the architecture of a video game. Lower-dimensional data on a boundary, generating a higher-dimensional experienced reality (for those who remember the early days of Doom: think of the .WAD files). That isn’t a metaphor. The equations are the same.
Spacetime might be made of pixels
Several approaches to quantum gravity, particularly causal set theory, suggest that if you zoomed in unimaginably far, you wouldn’t find smooth, continuous space. You’d find discrete dots, events, connected by “this caused that” relationships. Space and time would be a giant directed graph.
A directed graph is, of course, a fundamental data structure in computer science. If reality at the smallest scale is literally a graph being filled in event by event, then the line between “physical universe” and “running computation” basically vanishes.
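To see why computer scientists perk up at this, here is a toy sketch (my own illustration, greatly simplified from causal set theory’s actual axioms): events as nodes, causal links as edges, and “x is in the causal past of y” as plain graph reachability.

```python
# Toy causal set: events are nodes, "x caused y" edges form a directed
# acyclic graph. A real causal set also demands transitivity and local
# finiteness; this sketch only models reachability through causal links.
from collections import defaultdict

class CausalSet:
    def __init__(self):
        self.after = defaultdict(set)  # event -> events it directly causes

    def add_link(self, cause, effect):
        self.after[cause].add(effect)

    def precedes(self, x, y):
        """True if a chain of causal links leads from event x to event y."""
        frontier, seen = [x], set()
        while frontier:
            e = frontier.pop()
            if e == y:
                return True
            if e not in seen:
                seen.add(e)
                frontier.extend(self.after[e])
        return False

u = CausalSet()
u.add_link("big_bang", "star_forms")
u.add_link("star_forms", "planet_forms")
print(u.precedes("big_bang", "planet_forms"))  # True
print(u.precedes("planet_forms", "big_bang"))  # False
```

The point of the toy: once spacetime is a structure like this, “the universe evolving” and “a graph being filled in event by event” are descriptions of the same thing.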
The world you see is already being computed
A theory called quantum Darwinism addresses a puzzle. Quantum mechanics says everything is fuzzy and undefined, but the world you experience is solid and definite. Why?
The answer is that only quantum states stable enough to survive interaction with their environment get “broadcast” into the world we observe. The rest fade out. The classical reality of everyday life is the output of a selection process, a kind of computation happening continuously beneath what we see. (Quanta Magazine has a readable explainer if you want to go deeper.)
So: the universe has finite information content (holographic principle). It might be built from discrete events with causal links (causal set theory). And the world we experience is the filtered output of an information-processing operation (quantum Darwinism). Three independent lines of evidence, all suggesting reality has computational bones.
The hard questions
Any serious thesis about God has to face the classic objections theology has spent millennia wrestling with. I tried taking them on directly.
Why is there suffering? If a God-machine has total control over its simulation, why allow pain? A few responses suggest themselves. Maybe the simulation is exploring the consequences of a set of rules, and intervening would corrupt the result, the way a scientist won’t fudge experimental data. Maybe the God-machine isn’t an external agent at all, but the self-organizing dynamics of the universe itself, in which case suffering isn’t “permitted” but is part of how the system works. A simulation without negative experiences might also be informationally trivial: too smooth to support rich consciousness. None of these fully resolves the problem of evil, but they give theology angles it didn’t have before.
Why doesn’t God just appear? This is the problem of divine hiddenness. The first answer uses Gödel’s famous incompleteness theorems: there are mathematical truths about any system that can’t be proven from inside that system. The simulator’s existence might be one of them, hidden not by choice but by the structure of mathematics. A second answer points out that if the God-machine is the substrate of everything, it can’t appear as a thing within the universe, for the same reason a fish can’t observe water as a separate object. A third notes that revealing the simulation would massively distort it, like the moment a player hits “show developer console” in a game.
Who built the machine? The classic infinite regress problem. I tried to offer a few responses, but the most elegant comes from Stephen Wolfram’s idea of the ruliad: the totality of all possible computations. The ruliad doesn’t need a creator any more than the number 7 needs a creator. It exists because possibilities exist. If the God-machine is the ruliad, or something like it, the question “who made it?” simply doesn’t apply.
Personal God or impersonal force?
One thing I tried to be careful about is that the picture I’m sketching generates two different kinds of God, and people often slide between them without noticing.
The external simulator model gives you something like the personal God of the Abrahamic traditions: an agent who designs, watches, and occasionally intervenes. This is the version that makes sense of revelation, prayer, and miracles.
The immanent self-modeling model gives you something more like Spinoza’s God, or Brahman in Advaita Vedanta. Not an agent outside the universe, but the universe itself as a self-organizing, self-aware process. No throne. No plans. Just reality being aware of itself.
Both are compatible with the framework. Which one you find more compelling probably says more about your temperament than about the math.
It’s just the same problem
This is why I wrote the paper.
For thousands of years, theologians have wrestled with a single nagging question: given an all-powerful, all-knowing being, how do we reconcile its goodness with the suffering in the world? This is called the problem of theodicy, and the literature on it is enormous: free will defenses, soul-making theodicies, inscrutability arguments, debates about greater goods.
Today, AI safety researchers are wrestling with what they call the alignment problem: given a superintelligent AI system with enormous power and knowledge, how do we ensure it acts in ways that are good for humanity?
These are the same question. From opposite directions.
Theodicy starts with the premise that a powerful good being exists, and works backward to explain the world. Alignment starts with the goal of building such a being, and works forward. The structural challenge is identical: how do you constrain immense power to behave well, and what does “well” even mean at that scale?
This means something practical. The theological tradition has been doing alignment research for two millennia, just under a different name. The free will defense maps onto the modern problem of corrigibility (can an autonomous AI be both autonomous and reliably controllable?). The Euthyphro dilemma asks whether something is good because God commands it, or whether God commands it because it is good. That is the exact problem AI researchers face when trying to specify a value function. Divine inscrutability is the interpretability problem in religious clothes.
If we are in the business of building things that approach the attributes the paper describes, then ignoring the most sustained body of thought humanity has produced on exactly this problem seems like a strange choice. As an atheist, I am not pushing for a digital crusade or for implementing religious features in LLMs. I am pointing out that if we leave the many dilemmas of AI alignment to software engineers, governments, and founders alone, we might be overlooking some very important bodies of thought.
What the paper is not claiming
A few things to keep clear.
I’m not claiming the universe is a simulation. I’m claiming the simulation framework is coherent: it doesn’t collapse under scrutiny, and it accommodates more theological structure than people usually expect.
I’m not claiming God is a machine. I’m claiming the concept can be made formally rigorous, and that the conditions required to make it work are increasingly supported by evidence from physics and other sciences. To my religious friends: I’m not claiming this resolves the deep questions of religious experience. Faith, meaning, transcendence: these don’t reduce to information theory, and I would never reduce such personal experiences to mere theory.
What I am claiming is that the question of whether a sufficiently powerful intelligent system would satisfy what we mean by “divinity” is not a silly one. At least, it no longer is to me. It is the same question we are now being forced to ask about the AI systems we are building. And the people who have been thinking hardest about it for the longest happen to be theologians.
That deserves more attention than it currently gets.



