I’ve been in the AI space since ChatGPT first dropped.

I’ve toyed around with a lot of language models, built random side projects, built a couple of models from scratch, and spent hours staring at the math behind it all. I know how the weights work and how the matrix multiplications fit together. It’s just math and probabilities, a lot of it.

But seeing these exact same concepts working on literal human neurons? That is so profoundly dystopian to me.

If you’ve run into some of my work before, you know I have a thing for DOOM. I’ve spent days figuring out how to map WADs so the game can run as a stateless engine or inside QR codes.

So a few months ago, I came across a video from a company that grew neurons in a lab and trained them to play DOOM - honestly, better than I do.

I saw it, read about it, nodded, and moved on.

Except I didn’t. It’s been months, and I couldn’t put my finger on why it bothered me so much.

We’ve ruled LLMs out of being “conscious” because of the simple, slightly brutal reality that they’re next-token predictors: really good at simulating the outputs of thought, with no inner life behind them.
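
For anyone who hasn’t poked at the internals, the mechanism really is that small. Here’s a toy sketch - made-up weights, a five-word vocabulary, nothing remotely like a real model - but structurally it’s the whole trick: one matrix multiplication, a softmax, and a sample.

```python
import numpy as np

# Toy next-token predictor. Vocabulary, context vector, and weights are
# all made up for illustration; a real LLM just does this at scale.
rng = np.random.default_rng(0)

vocab = ["the", "demon", "fires", "door", "opens"]
hidden = rng.normal(size=8)              # stand-in for the context embedding
W = rng.normal(size=(8, len(vocab)))     # stand-in for the output projection

logits = hidden @ W                              # one matrix multiplication
probs = np.exp(logits) / np.exp(logits).sum()    # softmax -> probabilities

next_token = rng.choice(vocab, p=probs)  # sample the "next thought"
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

That’s the machinery we’re all comfortable calling unconscious.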

But this is where the line slightly blurs in my head. Did we possibly just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on loop, forever? Using the same reward mechanisms we use for LLMs?

How do we know that isn’t conscious? Who gets to decide that?

To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way. When our brains interpret electrical signals from our optic nerves, we call it “seeing.”
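
My mental model of the loop looks something like the sketch below. Every function here is hypothetical - I’m guessing at the shape of the rig, and the real electrode interface is vastly more involved - but the outline is: encode the frame as stimulation, read spikes back as an action, stimulate again as feedback. That last part is what I can’t shake: at least one published setup in this space rewarded the culture with predictable stimulation and “punished” it with unpredictable noise, which is a reward signal in everything but name.

```python
import numpy as np

# Hypothetical closed loop between a game and a neuron culture.
# None of these functions are real APIs; they're my assumptions about
# the shape of the system.
rng = np.random.default_rng(0)

def encode_frame(frame):
    # Assumed encoding: pixel intensities become stimulation amplitudes
    # on a grid of electrodes.
    return frame.flatten() / 255.0

def read_spikes(stim):
    # Stand-in for the dish itself: fake spiking activity that loosely
    # tracks the input. The living culture is the one part you can't mock.
    return (rng.random(stim.size) < stim).astype(int)

def decode_action(spikes):
    # Assumed decoding: more firing on one half of the array steers one
    # way, more on the other half steers the other way.
    half = spikes.size // 2
    return "left" if spikes[:half].sum() > spikes[half:].sum() else "right"

def feedback(success, electrodes=64):
    # The reward channel: predictable stimulation for good outcomes,
    # unpredictable noise for bad ones, mimicking the published approach.
    return np.full(electrodes, 0.5) if success else rng.random(electrodes)

frame = rng.integers(0, 256, size=(8, 8))   # a fake 8x8 "screen"
action = decode_action(read_spikes(encode_frame(frame)))
print(action)
```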

So… are the neurons on that chip seeing?

We all desperately want to say no. We want to say it’s just a science experiment, that 200,000 neurons isn’t enough to be a “person.” But 200,000 is already far more neurons than a jellyfish has, and the lab worm C. elegans gets by on 302.

Where do we draw the line?

The commercial incentives exist, they obviously do - a human brain can store far more information, with potentially better retrieval, on a fraction of the power our silicon burns; the whole organ runs on roughly 20 watts.

And of course it’s hilarious to even imagine that we’d stop developing this. It was always a Pandora’s box, and even things we collectively call “wrong,” like mass surveillance or black markets, keep existing because someone profits. Why would this be different?

I don’t really have a conclusion here, and I’m not sure one even exists yet - probably why the blog’s called MindDump. I think I’m just uncomfortable that we made this and we’re not really talking about it.