(idea credit to some back-and-forths with )

Lots of institutions, intentionally or not, rely on sheer annoyance as a filter. Think about the hoops you jump through to file a complex insurance claim, apply for certain benefits, or yes, even initiate some types of lawsuits. The paperwork, the specific jargon required, the multi-step processes – it's a massive pain. This friction isn't just inefficiency; it's often an implicit test. It filters out the less determined, the less resourced, the people who can't figure out the "secret handshake." In a way, it's a low-tech Turing test: "Are you human enough (and persistent enough) to navigate this maze?"
Well, buckle up, because LLMs just blew that whole defense mechanism wide open. Suddenly, generating perfectly formatted legal complaints, filling out arcane government forms, or crafting bespoke responses to bureaucratic demands isn't a Herculean effort requiring hours of skilled labor. An LLM can churn this stuff out en masse, cheaply and quickly. What used to be a natural barrier, filtering the flow of inputs to a manageable trickle, is about to become a firehose. Think Distributed Denial of Service, but instead of flooding network ports, it's flooding intake desks, court clerks, and customer service queues with perfectly legitimate-looking, procedurally correct submissions.
Any organization whose core process relies on friction, complexity, or just being a pain in the ass to deal with is fundamentally broken in the age of AI. Their "implicit Turing test" is now trivially bypassed. The cost to generate "valid" inputs has plummeted, meaning the cost to overwhelm the system has too. These institutions are facing an existential threat from weaponized bureaucracy, automated by LLMs. If your system implicitly assumes only dedicated humans can navigate it, you're about to get DDoS'd into oblivion by tireless bots.
Rich people have been able to file excessive lawsuits to get their way for years: it happens in California all the time to delay and kill new housing construction.
That’s a future that is about to become much more evenly distributed.
Most companies and government agencies are busy trying to survive, not preparing for this. Chaos is not a ladder; it's setting fires.
How might systems evolve to authenticate genuine intent or resource commitment beyond simply making processes difficult to navigate? Maybe we need proof-of-stake systems for a lot more now.
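To make the proof-of-stake idea concrete, here's a minimal toy sketch of what a stake-backed intake queue might look like: every submission posts a refundable deposit, legitimate filings get it back on resolution, and frivolous ones forfeit it. All the names here (`IntakeQueue`, `Submission`, the deposit amounts) are hypothetical illustrations, not a real or proposed system.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    filer: str
    body: str
    deposit: float  # the "stake" posted alongside the filing

class IntakeQueue:
    """Toy intake desk where flooding costs real money per item, not just effort."""

    def __init__(self, min_deposit: float):
        self.min_deposit = min_deposit
        self.pending: list[Submission] = []
        self.forfeited = 0.0  # deposits kept from frivolous filings

    def submit(self, sub: Submission) -> bool:
        # A bot can generate the paperwork for free, but not the stake.
        if sub.deposit < self.min_deposit:
            return False
        self.pending.append(sub)
        return True

    def resolve(self, sub: Submission, frivolous: bool) -> float:
        # Refund the stake for good-faith filings; forfeit it otherwise.
        self.pending.remove(sub)
        if frivolous:
            self.forfeited += sub.deposit
            return 0.0
        return sub.deposit
```

The design choice doing the work is that the filter moves from "can you endure the maze?" (which LLMs trivially automate) to "will you put something at risk?" (which they can't fake). Of course, this recreates the old problem of pricing out the under-resourced, so any real version would need waivers or sliding scales.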
Spot on. Last summer a rental car company tried to charge me $500 for preexisting damage I wasn't responsible for. Five hours on the phone got me nowhere.
Had GPT draft a demand letter and sent it to their legal department and CEO. The problem was solved within the hour.
Yep, I’ve been thinking about this too. Although of course, as these tools become more evenly distributed among attackers, they'll become more evenly distributed among defenders as well. And we shouldn't overestimate their availability: the incoming business environment is uncertain enough that it's hard to say whether they'll become more or less accessible.