He definitely grew significantly as a writer over time, and I would agree that some of his early work isn't particularly strong (The Light Fantastic, for example, is relatively bog-standard comic fantasy without any of the depth his later work showed).
If you start reading at the very beginning of the Discworld, you're slogging through the weaker stuff, and it's easy to get discouraged. A smoother path is to pick one of the defined sub-series (the guards are very popular, but my vote goes to the witches) and start along just that track; you'll get to the strong stuff much faster.
My advice has always been to start with Small Gods. It is a standalone book that references but does not rely on any others, and is far enough into his career that it’s fair to say that if you don’t like it, you won’t like his work in general.
You can counter the context rot and requirement drift that many users here experience by using a recursive, self-documenting workflow: https://github.com/doubleuuser/rlm-workflow
> Traditional Chinese relies on context: "Rain heavy, not go", "雨大，不去了".
> Modern Chinese demands explicit logic: "Because the rain is heavy, therefore I will not go." "因为雨下得很大，所以我决定不去了。"
I would say "下雨了，我不去" ("It's raining, I'm not going") or something like that. The second example is perhaps what a language learner would say in order to "speak correctly", but nobody actually speaks or writes like that.
Totally. I also feel such a disconnect with HSK material; no one speaks like that or even uses that vocabulary. But I guess that's the case with almost every language/language course.
What's gone unnoticed with the Gemma 4 release is that it crowned Qwen as the small-model SOTA. So for the first time a Chinese lab holds the frontier in a model category. It's a minor DeepSeek moment, because Western labs now have to catch up with Alibaba.
Depends on usage: Gemma 4 is better on visuals/HTML/CSS and language understanding (which probably plays a role in prompting), but it's worse at code in general compared to Qwen 3.5 27B.
Most codebases don't have traces to train on. If you use rlm-workflow, you build up rich traceability in the form of requirements, plans, and implementation artifacts, along with worktree diffs. With these, you can then use self-distillation on models or use autoagent to improve your harness. https://github.com/doubleuuser/rlm-workflow
China can't get good chips. But I don't understand why they can't license their closed source models to US inference providers so we can get more than 80% reliability on their models on OpenRouter.