Unbundling Intelligence from Personhood
When "Brains" Walk Away From Bodies (and Responsibility)
Okay, so picture this: for literally all of human history, if something was smart, it was also a someone. A person. Animals had varying degrees of smarts, but they were also clearly someones. Even when we invented clever machines, like a really intricate clock, we never thought of them as having actual intelligence. They were tools, extensions of our intelligence.
But then along came these Large Language Models (LLMs), right? And suddenly, we've got something that can kinda think in a weird way. It can process information, generate creative text formats, even ace your SATs if you squint hard enough. It can recommend a marketing strategy that might just be brilliant. But here’s the kicker – it’s not a who. It doesn't have desires, fears, or a soul (probably). It’s just… code and data.
This is where things get properly wiggly in the brain. We've basically taken intelligence and agency – the ability to make decisions and take actions – and yanked it free from the package it always came in: personhood. And personhood came along with accountability, responsibility, someone to blame if things go wrong and someone to reward if things went well.
You’ve got a little stick figure labeled "Person." Inside its head is a lightbulb labeled "Intelligence" and a little hand reaching out labeled "Agency." They're all connected. The person is the source of the smarts and the actions.
Now imagine a separate floating lightbulb labeled "LLM Intelligence" and a detached hand labeled "LLM Agency." They can do similar things to the person's intelligence and agency. The LLM can come up with a clever idea (intelligence) and suggest implementing it (agency). But there's no stick figure attached. No personhood.
And this creates a whole mess of philosophical spaghetti.
Let's say the LLM recommends a management decision that tanks the company. Who screwed up? The LLM? The person who asked the LLM? The engineers who built the LLM? You can't exactly fire the algorithm. It doesn't care. It doesn’t have a mortgage to pay or a reputation to uphold.
Or take security. We’ve seen how LLMs can fall for prompt injection attacks, basically getting tricked into doing things they shouldn't. It's weirdly similar to how a person can fall for a social engineering scam. So we even start giving the LLM security training (we teach it to recognize prompt injection pretty ok)! We're trying to teach this non-person to be more secure. But if it does get hacked and leaks sensitive data, who’s on the hook? The LLM that got tricked? Or the people who deployed it?
It feels like we’ve created a powerful tool that can act intelligently but exists in a kind of responsibility vacuum. We’re used to intelligence and agency coming with inherent accountability. If a person makes a bad decision, there are consequences. But what happens when the "intelligence" making the bad decision isn't a person at all?
(Does giving corporations legal personhood have historical lessons about how this can go well or not?)
This unbundling is forcing us to rethink some really fundamental stuff about how we understand intelligence, responsibility, and even what it means to be a "who." It's like we've built a brain that can do brainy things without the baggage of being a brain inside a head. And we’re only just starting to grapple with the implications.
This raises a big question: As LLMs become more capable and integrated into our lives, how will we navigate this separation of intelligence/agency from personhood when things inevitably go wrong or right?

