Season of Systems W11: The Problems with Human Beings
Program Life, Death, and Revival

"A main claim of the Theory Building View of programming is that an essential part of any program, the theory of it, is something that could not conceivably be expressed, but is inextricably bound to human beings."

Peter Naur, Programming as Theory Building
Oh, howdy!
The last couple of weeks have certainly been twisty. I know that there’s a lot of speculation about AI being a big disruption to folks’ work — especially in the general media, where much of the writing feels highly speculative.
Ask any software engineer right now, though, and you’ll get a pretty concrete answer — it has been a huge disruption. It keeps being a huge disruption. I have a lot of opinions about AI in the more general societal sense (image and video generation are a plague with no real point), but it’s pretty much undeniable at this point that code generation has come far enough to substitute for what would’ve been a junior engineering role two years ago.
The last year (and, specifically, the last two weeks) has been spent finding literally any source of copium to help deal with two anxieties:
- Will my job exist in two years? and, if so
- What will it even look like?
Am I cooked? (ai and engineering)
Probably not.
My own opinions on that question, as well as the two above it, oscillate at a pretty high frequency day-to-day, and even hour-to-hour. But they’re settling — slowly but surely, they’re settling. This is for three reasons:
First, the models themselves have really started to slow down in terms of “what new things can they do.” This time last year, there were absolute rapid-fire releases of very capable models, one after another, for a period of about 8 months. Since then, the models haven’t delivered anything transformative — just smaller improvements, like getting better at using tools.
Which does bring up the second part: the tools. For a bit of context — much of the time, when an LLM is able to do something other than type text into a chat interface, this happens through the Model Context Protocol, or MCP. This is what I mean when I say “tools” — they’re small bits of glue code that allow LLMs to interact with things outside the chat, like editing files or searching the web. MCP was released just about a year ago, and in that time it has been the primary driver of how much better the models have gotten at actually doing things. It forms the baseline distinction between a chatbot and an agent.
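As a rough illustration of the idea (this is not the actual MCP SDK; the registry and the `read_file` tool below are my own invention), a tool is essentially just a function plus the metadata the model sees:

```python
# A dependency-free sketch of the idea behind MCP-style tools: the server
# advertises each tool with a name, a description, and a JSON-Schema-ish
# input schema, and the model "calls" a tool by name with JSON arguments.
# The real Model Context Protocol SDKs handle the wire format, transport,
# and schema validation; this only shows the shape of the concept.

TOOLS = {}

def tool(name, description, schema):
    """Register a function as a callable tool along with its advertised metadata."""
    def register(fn):
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return register

@tool(
    name="read_file",
    description="Read a UTF-8 text file and return its contents.",
    schema={
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
)
def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a model-issued tool call to the registered function."""
    return TOOLS[name]["fn"](**arguments)
```

The model never sees the Python; it sees the name, description, and schema, decides to emit a call like `{"name": "read_file", "arguments": {"path": "notes.txt"}}`, and the glue code runs the function and returns the result into the conversation.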
This is precisely why my current season is the Season of Systems. In the last year, LLM use in my field very quickly advanced from “Paste some code, ask a question, paste the response” to “Give an agent a single prompt and watch it spit out an answer within 5 to 30 minutes.” That’s a massive change, and it requires a massive amount of consideration about how to do it well. Frankly, every company in the world is trying to figure out how to do it right, so I’m one competitor in a heat of millions of other engineers and companies.
Third and finally: I have started to understand that there is an undeniable, intractable human component to the act of problem-solving. Engineering, to me, is simply a formalized, scaled desire to solve problems — and bots don’t have problems. I’ve come to feel that only a human being can truly experience the problems we’re trying to solve in the world, and only human beings can appreciate the solutions to those problems. To that end, I feel:
- LLMs are undeniably going to be the future of helping build solutions for problems; but
- LLMs will not solve problems.
My job isn’t writing code. My job is to feel problems, understand them, and then solve them. That’s human — and we’ll be alright.
Foldin’ Proteins (biology)
Alright! The last bit waxed pretty philosophical, but shake it off — we’re back to our regularly scheduled fuckin’ around, now.
In the last week, a friend from my partner’s biomedical engineering program asked for help setting up localcolabfold, a protein-folding program that you can run on your own computer. We were able to get a version up and running on my homelab machine! However, new software always happens in two steps:
- Get the software running; then
- Learn how to use it
There’s a reason why folks spend 6 years getting a PhD in this kind of thing — I am thoroughly unqualified to judge what is even actionable in the output of a protein fold, but it’s due diligence to make sure I know how best to drive the software. As a best effort, I have two new annotations available:
- An Annotated Guide to Hobbyist Protein Folding
  - This is an annotation of the original ColabFold paper, in order to at least understand the basics of the software — especially around how to read the graphs; and
- Design of a mucin-selective protease for targeted degradation of cancer-associated mucins
  - This is an attempt to replicate a couple of figures from a 2023 paper, where they used localcolabfold to fold up StcE, a… protein in… E. coli? My partner didn’t want to dumb down the biology for me this weekend, but we’ll get around to it eventually.
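On the “how to read the outputs” front: ColabFold writes its predicted structures as PDB files and, following the AlphaFold convention, stores the per-residue confidence score (pLDDT, on a 0–100 scale) in the B-factor column of each atom record. A quick sanity check before staring at the graphs is to average those values; the filename in the usage note below is hypothetical.

```python
# Pull pLDDT values out of a ColabFold/AlphaFold output PDB file.
# PDB is a fixed-width format: the B-factor field (which these tools
# repurpose for pLDDT) lives in columns 61-66 of ATOM/HETATM records.

def plddt_scores(pdb_text: str) -> list[float]:
    """Extract one pLDDT value per atom record from PDB-formatted text."""
    scores = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            scores.append(float(line[60:66]))  # B-factor field, cols 61-66
    return scores

def mean_plddt(pdb_text: str) -> float:
    """Average confidence across all atoms in the structure."""
    scores = plddt_scores(pdb_text)
    return sum(scores) / len(scores)
```

Usage is something like `mean_plddt(open("prediction_rank_001.pdb").read())` on one of the ranked outputs. As a rule of thumb, pLDDT above ~90 means very high confidence, and regions below ~50 are usually disordered or simply guessed.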
Urban Development Policy (economics)
I’ve been mainly caught up in my AI copium and protein folding fun in the last little bit, so I haven’t released many new notes on A Pattern Language recently. However, I will get back into that this week. I want to have an actionable first-draft model done sometime soon, which will likely involve the first ~30-40 patterns from the book.