
Thoughts on the emergence of artificial general intelligence

Jan 01 2026

If there is a 'grand design', why would it allow for the emergence of a superintelligent being in the first place?

I’m an agnostic atheist, which means I don’t follow an established God or religion, but I do have a gut feeling that I can’t shake about how beautiful and complex all of this is, so I can’t say with certainty that our reality isn’t the result of architecture, that our Universe simply emerged from random chance. At the very least, even if it did emerge from random chance, I think that the chance itself could be the result of a design decision. Do I think there is some entity watching everything I do with great fascination? No.

I have had thoughts about these ideas throughout my life, starting in my early teens when I drifted away from the Southern Baptist environment that I grew up in and around. I began thinking about the nature of reality more deeply a few years ago after I experienced something that I couldn’t explain. I wrote about it in this blog post, if you’re interested. More recently, I’ve been thinking about it again because of all of the discussion around artificial general intelligence (AGI) and superintelligence. Anytime a CEO claims AGI is right around the corner, the question I can never seem to shake is, “If there is a ‘grand design’, why would it allow for the emergence of a superintelligent being in the first place?”

This seems to be a Universe hellbent on propagation and variability. There are an estimated 2 trillion galaxies, a septillion stars, and maybe up to 10 septillion planets, each made up of its own unique combination of elements, compounds, and environmental conditions. Then there could, in theory, be an infinite number of universes in the multiverse, each with its own physical laws and material makeup.

As a software engineer, a large portion of my job is to automate the things that I think can run on their own without supervision — the things I don’t want to look at or pay attention to. The goal of automation is to receive a summary of results in lieu of having to nitpick every detail within a development environment.

When I look through a telescope, I don’t see a garden being tended to, but rather a vast cosmic machine that is toiling away to procedurally generate every possible form of life, including intelligent life, for the purpose of generating unique problems in unique environments, and then documenting the solutions into a storage medium — such as DNA — and then ultimately data harvesting via a compression mechanism such as black holes. This could all be a data harvesting mission on a scale that is impossible for us to imagine, with our little planet just hosting a single dataset that is still being written to.

Again, why would it be advantageous for the system to introduce a superintelligent being at this time? We still haven’t even figured out how to solve for poverty and war. Our DNA is still full of bugs that lead to disease and early death. There are still so many problems that need to be solved just for basic survival, let alone thriving as a species, and whatever lies beyond that for us. There are still an impossibly large number of problems to be solved before we even get close to an endgame.

If that is what reality is, then I understand why large language models (LLMs) would emerge. They don’t inhibit the creation of new problems. They just allow humans to solve problems more quickly and move on, which means we can solve many more of our problems in shorter spans of time, which will lead to extended human life and denser populations. When there is more intelligent life, there are more variables — more problems — and more complex problems being worked on. It feels as though that balance must be important. Too little intelligence and not enough problems will be created or solved. Too much intelligence and the system becomes too efficient.

Another issue with a superintelligent being emerging from LLMs is that I would have to accept that I am so special that it is going to happen on my home planet and during my relatively short lifetime. Nearly fourteen billion years of cosmic evolution — a wave of unfettered chaos traveling through an immense void — just to crest here, on this tiny blue dot, at this moment in time. I already have enough trouble accepting that I was born a white male in America (not better, just advantageous) in 1981, just after a string of heinous wars and societal meltdowns. I am typing on a magic box that connects me to the sum total of human knowledge from my living room, which is artificially heated and cooled to my liking. I have never felt dangerous levels of hunger or thirst in my life. I didn’t die in childbirth, nor have I died from an infection or cancer — or by choking, drowning, or some other roll of the dice. It all feels like a stack of cosmic lottery wins that is crazy enough for one lifetime without then adding the birth of the first machine God in the Universe as a cherry on top.

The conclusion that I come to is that it may actually be physically impossible for humanity to achieve superintelligence. If this is all about data harvesting, then the goal would be to optimize intelligent life for maximum population and diversity, not maximum intelligence. It wouldn’t be shocking to me if biological intelligence itself is a natural force with a ceiling governed by some undiscovered law of physics, like the four that we know of: gravity, electromagnetism, and the strong and weak nuclear forces. It would make sense to limit the maximum intelligence of any being in the Universe so that we are smart enough to survive and communicate information through generations, solve our own problems, and create new problems as our societies and technologies evolve, but not smart enough to optimize ourselves into smaller and smaller populations of beings that instantly solve the primary problems of survival, figure out how to seize direct control of our dopamine systems, and then live in blissful stasis like the Lotus Eaters from The Odyssey.

Another threat to the system would be the loss of neurodiversity or agency through direct mind-to-mind connections, which is why I think it may also be physically impossible to directly read thoughts through brain-computer interfaces. I think that we can probably interpret electrical signals from the brain in a generalized way, like a kind of improved-upon body language — maybe even lie detection — but I don’t think that we will ever be able to directly connect one person’s consciousness to another’s. That kind of hive-minding would lead to a loss of variability and uniqueness. Everyone would start to think the same way, solve problems the same way, and thus lose the ability to create new problems through social interaction. It would lead to a homogenization of experience that would be detrimental to the overall goal.

If I wanted to design a system meant to air-gap and protect consciousnesses from external manipulation, I would place each consciousness within its own Universe — as a kind of containerized instance — and I would use the randomness of the Universe as an encryption seed to protect and validate it. I could be the only consciousness in this Universe. It could be possible that all of my family, friends, and people I interact with are simulated versions of their own consciousnesses, all contained within their own individual Universes, where I exist as a simulation of myself like they do in mine. When I talk to my wife, who’s sitting right next to me on the couch, she could be a hologram of my wife — a simulated version of her consciousness — that is hosted from her own Universe across an incalculable distance.

When we sleep and dream, it could be to maintain, update, and improve upon the models of our consciousnesses, which are then uploaded back into the multiverse via some kind of — probably quantum — network protocol. Perhaps that’s why every single organism needs to sleep, with complex organisms like humans needing several hours per day, even though it leaves us completely vulnerable while we do it. I’ve actually always been puzzled by sleep’s place in evolution because it seems so counterintuitive to survival. I don’t understand how something so dangerous to an organism can be so ubiquitous — so required. You would think that sleep would be the first biological process on the evolutionary chopping block. I digress.

A system designed in this way would still allow for the exchange of information and experiences through indirect means like language, body language, art, music, and technology — but without the risk of opening our consciousnesses to external influence. The purpose of air-gapping consciousness feels quite clear to me. Curiosity is pivotal to problem solving, so we have all been born with a kind of natural drive to peel our blinds apart with our nosey little fingers and get a load of what the rest of the neighborhood is up to. We want to know what other people are thinking and feeling. We want to know what they are doing when we aren’t around. We want to know their secrets. We want to know what makes them happy, sad, angry, and scared. We want to know what they think of us. We want to fix the problems of the ones we love. What would happen if we could just read each other’s thoughts directly?

There are several startups, corporations, and governments that are — right now — trying to figure out how to break into our minds. The United States government ran a confirmed program called MKUltra that attempted to do just that during the Cold War. It’s the most famous example, but hardly the only effort in human history with similar goals. The reason why they all fail could be that it would be so clearly and obviously disastrous for us to be able to read each other’s thoughts directly that the system has been designed to explicitly prevent it.

I will admit that my ideas on this are a comfort to me in these uncertain times we find ourselves living in. If it is actually impossible to create AGI, then the world’s greediest and shadiest people are all gambling away their immense wealth into a pit of ill returns. We will get to watch them turn on each other and destroy themselves, which of course is one of our favorite voyeuristic pleasures: a grand comeuppance on a wide screen.

> JESSE.ID
© 2026 All rights reserved by 👍👍 This Guy