I think you are correct, insofar as N:M threading is often overkill for the problem at hand. However, some IO-bound problems truly do require it. I haven't kept up with the details, but AFAIK the fallout from Spectre and Meltdown also means context switches are more expensive than they were historically, which is another downside of regular threads.
I also want to address something that I've seen in several sub-threads here: Rust's specific async implementation. The key limitation, compared to the likes of Go and JS, is that Rust attempts to implement async as a zero-cost abstraction, which is a much harder problem than what Go and JS do. Saying some variant of "Rust should just do the same thing as Go" is missing the point.
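To make "zero-cost" concrete: an `async fn` in Rust compiles down to a state machine implementing `Future`, polled in place with no heap allocation or runtime-managed stack (unlike Go's goroutines). Here is a rough hand-written equivalent of a trivial `async fn add_one(x: u32) -> u32 { x + 1 }`; the type names and the no-op waker are mine, for illustration only.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// What the compiler roughly generates for the async fn above. A real async
// fn with `.await` points would have one state per suspension point; this
// one completes immediately.
struct AddOne {
    x: u32,
}

impl Future for AddOne {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        Poll::Ready(self.x + 1)
    }
}

// A minimal no-op waker so we can poll without pulling in an executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = AddOne { x: 41 };
    // The future is a plain stack value; polling it is just a method call.
    assert_eq!(Pin::new(&mut fut).poll(&mut cx), Poll::Ready(42));
}
```

The point is that the future is an ordinary value whose size is known at compile time, which is what makes it "zero-cost" — and also what makes the design so much harder than a runtime with green threads.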
There is work happening on keyword generics[0], which would let a function be generic over keywords like `async` and `const`.
For now the best option to write code that wants to live in both worlds is sans-io. Thomas Eizinger at Firezone has written a good article about this pattern[1]. Not only does it nicely solve the sync/async issue, but it also makes testing easier and opens the door to techniques like DST[2].
I have my own writing on the topic[3], which highlights that the problem is wider than just async vs sync due to different executors.
Yes, I hope we can eventually get to something like what OCaml 5 has with its algebraic effects system, ideally fixing any flaws we find in it, so that async becomes just syntactic sugar over the underlying effects system.
> For now the best option to write code that wants to live in both worlds is sans-io
Thanks for sharing!
Reading the articles, it looks to me like a kind of manual reimplementation of the state machine that async generates? It also makes the code harder to reason about. I am unsure if it is worth the complexity.
I may have missed something, but how does “sans-io” deal with CPU heavy code? For example, if there’s some heavy decoding/encoding required on the data? Does the event loop only drive the network side and the heavy part is done after the loop is finished?
This is a great question and there isn't a definitive answer provided in the sources I linked.
Broadly I think there are three approaches:
1. For frequent and small CPU heavy tasks, just run them on the IO threads. As long as you don't leave too long between `.await` points (~10ms) it seems to work okay.
2. Run your sans-io code on a dedicated CPU thread and do IO from an async runtime. This introduces overhead that needs to be weighed against the amount of CPU work.
3. Have the sans-io code output something like `Output::DoHeavyCompute { .. }` and later feed the result back as `Input::HeavyComputeResult { .. }`; in between, run the work on a thread pool.
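A minimal sketch of option 3. All the names here (`Codec`, `poll_output`, `handle_input`) are hypothetical, and the "thread pool" is just an inline call to keep the example self-contained; the shape of the loop is the point.

```rust
// What the sans-io core hands to its driver, and what it accepts back.
enum Output {
    DoHeavyCompute { payload: Vec<u8> },
    Done { result: u64 },
}

enum Input {
    HeavyComputeResult { checksum: u64 },
}

// The sans-io state machine: no IO, no threads, just state transitions.
struct Codec {
    pending: Option<Vec<u8>>,
}

impl Codec {
    fn new(payload: Vec<u8>) -> Self {
        Codec { pending: Some(payload) }
    }

    // Ask the state machine what the driver should do next.
    fn poll_output(&mut self) -> Option<Output> {
        self.pending.take().map(|payload| Output::DoHeavyCompute { payload })
    }

    // Feed an externally computed result back in.
    fn handle_input(&mut self, input: Input) -> Output {
        let Input::HeavyComputeResult { checksum } = input;
        Output::Done { result: checksum }
    }
}

fn main() {
    let mut codec = Codec::new(vec![1, 2, 3]);

    // The driver loop: in a real system this request would be shipped off
    // to a thread pool (e.g. rayon) while the IO loop keeps running.
    let Some(Output::DoHeavyCompute { payload }) = codec.poll_output() else {
        panic!("expected a compute request");
    };
    let checksum: u64 = payload.iter().map(|&b| b as u64).sum(); // the "heavy" work
    let Output::Done { result } = codec.handle_input(Input::HeavyComputeResult { checksum }) else {
        panic!("expected completion");
    };
    assert_eq!(result, 6);
}
```

Because the core never blocks or spawns anything itself, the same `Codec` can be driven from sync code, an async runtime, or a test harness that fakes the compute results.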
I built https://github.com/k0nserv/plid with pgrx and had a great time. I did have to scale back some of the magic (dropping the `PostgresType` derive, etc.), but even so the support pgrx provides is excellent. I also talked to the maintainers a bit on Discord and they were super helpful.
The one downside of custom extensions is that you aren’t, AFAIK, able to use them with many hosted Postgres installs, notably AWS RDS.
Maybe one of the reasons why hosted postgres often disallows extensions is due to security concerns from loading arbitrary machine code on a shared host. I wonder if pgrx changes the calculus here.
Since it's a procedural language, you can't do things like create a new index implementation or something else super low level. But there's still a lot you _can_ do. Like implement a custom comparator for a custom type and then use that type in a btree index.
The US spends the most per capita[0] on healthcare in the world, all to receive a healthcare system that still requires lots of citizens to carry private insurance. I've never dug deep into why, but it sure is noteworthy.
The private insurance expenditure is part of that per capita number. US healthcare isn't "A system", it's a number of interrelated systems that have lots of expensive hand-offs. We also spend a ton on lifestyle diseases because no one walks and culturally we eat like shit on average.
> We also spend a ton on lifestyle diseases because no one walks and culturally we eat like shit on average.
And there's a pretty straight line between that and government subsidies for sugar and processed foods in general, not to mention car-based infrastructure, although the latter doesn't stop other countries from not having crippling obesity rates.
> And there's a pretty straight line between that and government subsidies for sugar and processed foods in general
No there isn't. Sugar subsidies account for 1.7 cents per 12-ounce can of soda. Soda in the US is generally inelastic, and research has shown that a 10% increase in price results in less than a 5% decrease in consumption. Americans just like sugar and sitting, culturally.
And acknowledging the very obvious instances of regulatory capture that directly harm quality of life is political suicide for anyone with even the smallest amount of access to power.
It’s hard getting normies to admit that if soft drinks weren’t so heavily subsidized by the government at every step of manufacture and distribution, there would be less overall obesity.
The graph shows both public and private expenditure. If you only consider the public per-capita expenditure, it's more than every other nation on the graph's public + private per-capita spending.
The data behind the graph is probably from OECD, which does not use a public/private classification. Mostly because in many OECD countries, "public" healthcare is largely funded by private insurance.
According to OECD data, US healthcare spending in 2023 was 28% from government schemes, 55% from health insurance, 11% out-of-pocket, and 5% from other sources. For most countries, the health insurance category is further split into compulsory and voluntary categories, but that distinction does not really exist in the US.
All US health insurance spending is reported in the compulsory health insurance category. Probably because the bulk of the spending is from employment-based insurance, which is effectively mandatory. (You usually can't opt out and take cash instead.) Naive aggregators then combine government spending and compulsory insurance and report that as public spending.
That’s because we pay people well. A low-level pharmacy benefits admin makes more than the head of cybersecurity for the UK or a doctor in Germany. When you pay people well, your spending goes up. You can’t pay people a lot and have low spending.
There are a few problems with how Trump is going about this:
1. The tariffs are too broad; they don't target a single industry or even a few.
2. Trump has gone back and forth many times on them, using them as negotiating leverage, not as long term incentives.
3. They are on very shaky legal grounds and will likely end up getting reversed by either the Supreme Court or the next president.
If you want to use tariffs to encourage on-shoring, you make them targeted and pass them with bipartisan support through Congress. Companies need stability and long-term guarantees for the kind of capital expenditure that is needed. Even better if you use a mix of carrot and stick, rather than all stick.
I agree, and that's actually the problem. Discourse in the US comes in soundbites, division, and confusion. This predates Trump, and arguably ENABLED him.
There could have been an argument for tariffs, done rationally and with a very specific program to rebalance trade. I'm not saying it's necessarily correct, but it could have entered as an option for voters to consider. But that's an alternative universe to people at this point, and we end up with an unpredictable waffling that scares businesses and doesn't appear to have obvious aims at this point beyond petty attacks.
And with China a key target in the Trump Tariff debacle, China is punching holes in these punitive tariffs. Besides shipping goods to intermediary countries that are not as heavily tariffed then exporting to the U.S., China is taking ownership stakes in American businesses, thus circumventing the whole tariff thing. And the beauty of this is, they can take advantage of U.S. taxpayer benefits, such as an R&D tax credit, to sweeten the deal.
What does the US gain from taking Greenland that it doesn't already have? If the US does invade an ally to acquire territory I think Canadians should be worried. In any case, what the US gains is the wrong perspective. This is about Trump and those around him wanting to build an empire and the American people, seemingly, letting them.
In C++ you do it the other way around: you have a single concrete class that can wrap any type, using templates under the hood. Within C++ this technique is called type erasure (that term means something else outside of C++).
Examples of type erasure in C++ are classes like std::function and std::any. Normally you need to implement the type erasure manually, but there are some libraries that can automate it to a degree, such as [1], though it's fairly clumsy.
How do APIs typically manage to actually "use" the "bar" of your example, such as storing it somewhere, without enforcing some kind of constraints?
Depending on exactly what you mean, this isn't correct. This syntax is the same as `<T: BarTrait>`, and you can store that T in any other generic struct that's parametrized by BarTrait, for example.
> you can store that T in any other generic struct that's parametrized by BarTrait, for example
Not really. You can store it in any struct that is specialized to the same type as the value you received. If you get a pre-built struct from somewhere and try to store it there, your code won't compile.
I'm addressing the intent of the original question.
No one would ask this question in the case where the struct is generic over a type parameter bounded by the trait, since such a design can only store a homogeneous collection of values of a single concrete type implementing the trait; the question doesn't even make sense in that situation.
The question only arises for a struct that must store a heterogeneous collection of values with different concrete types implementing the trait, in which case a trait object (dyn Trait) is required.
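A small illustration of the distinction, with a made-up `Shape` trait: a generic struct is monomorphized, so each instantiation stores exactly one concrete type, while `Box<dyn Trait>` allows a heterogeneous collection at the cost of dynamic dispatch.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 }
}

// Monomorphized: Holder<Square> and Holder<Circle> are distinct types,
// and a given Holder can only ever contain its one concrete type.
struct Holder<T: Shape> {
    item: T,
}

fn main() {
    // Generic struct: homogeneous, concrete type known at compile time.
    let h = Holder { item: Square(2.0) };
    assert_eq!(h.item.area(), 4.0);

    // Trait objects: heterogeneous collection, dispatched through a vtable.
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Square(2.0)), Box::new(Circle(1.0))];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    assert!((total - (4.0 + std::f64::consts::PI)).abs() < 1e-9);
}
```

The `shapes` vector is exactly the case where `dyn Trait` is required; trying to build it with `Holder` would force every element to be the same concrete type.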
Yes, in part because the US has outsourced a lot of its industry to China since then. The US is still one of the principal per-capita emitters; it would need to cut emissions by two thirds to get down to Europe's level, and in half to reach China's.
Having worked on a design system previously I think most people, especially non-frontend developers, discount how hard something like that is to build. LLMs will build stuff that looks plausible but falls short in a bunch of ways (particularly accessibility). This is for the same reason that people generate div-soup, it looks correct on the surface.
EDIT: I suppose what I'm saying is that "The paid products Adam mentions are the pre-made components and templates, right? It seems like the bigger issue isn't reduced traffic but just that AI largely eliminates the need for such thing." is wrong. My hunch is that AI has the appearance of eliminating the need for such things.
It's not that people care about quality, but that people expect things to "just work".
Regarding the point about accessibility, there are a ton of little details that must be explicitly written into the HTML that aren't necessarily the default behavior. Some common features of CSS and JS can break accessibility too.
None of this code would be obvious to an LLM, or even to human devs, but it's still what's expected. Without precisely written and effectively read-only boilerplate your webpage is gonna be trash, and the specifics are a moving target and hotly debated. This back and forth is a human problem, not a code problem. That's why it's "hard".
I use the web every day as a blind user with a screenreader.
I would 100% of the time prefer to encounter the median website written by Opus 4.5 than the median website written by a human developer in terms of accessibility!
That's really interesting. Are you speaking from experience with websites where you know who authored them or from seeing code written by humans and Opus 4.5 respectively?
So I have been using the human-authored web since well... 1999 or so, starting with old AOL CDs. I've obviously seen a lot of human content.
Back in the old days you might have image links and other fun stuff. Then we entered the era of flash. Flash was great, especially the people who made their whole site out of it (2004 + not being able to order ... was it pizza? something really sticks in my memory here.)
Then we entered the era of early Bootstrap. Things got really bad for a while -- there was a whole Bootstrap-Accessibility library people ended up writing for it, and of course nobody actually used the damn thing. The most frustrating thing at this point (2010?) was any dropdown anywhere. Any bootstrap dropdown was completely inaccessible using typical techniques, and you'd have to do something tricky with ... mouse routing? Gods it's been 15 years.
CAPTCHAs for stupid things became huge there for a brief moment -- I remember needing to pass a CAPTCHA to download ... was it Creative drivers? That motivated me to make a service called CAPTCHA-Be-Gone for other blind people for a while.
Then we see ARIA start to really come into its own... except that's a whole new shitshow! So many times you'd get people who thought "Oh to add accessibility, we just add ARIA" and had no fucking idea what they were doing, to the point where the most-common A11y advice these days has become "Don't use ARIA unless you know you need it."
Oh then we had this brief flash (~10 years ago?) of "60 FPS websites!" -- let's directly render to the fucking canvas, that'll be great. Flutter? ... Ick!
Nowadays the issues are just the same as they ever were. People using divs for everything, onclick handlers instead of stuff that will be triggered with keyboard... Stuff that Opus just doesn't do!
I guess I've only been using Opus 4.5 for about a month but just ... Ask it to build something? Use it with a screen reader? Try it!
> Then we see ARIA start to really come into its own... except that's a whole new shitshow!
I am not blind, but my experience trying to write accessible web pages is that the screen readers are inconsistent with how they announce the various tags and attributes. I'm curious what you think about the screen readers out there such as NVDA, JAWS, VoiceOver, TalkBack, etc. and how devs should be testing their web pages.
Many of the larger corporate clients tend to standardize on the exact behavior of JAWS and I am not sure that is helpful. It's like the Internet Explorer of screen readers.
If you want to know why a page ends up riddled with ARIA overriding everything, that's why. In even the best cases, the people paying for this dev work are looking for consistency and then not finishing the job. It's never made the highest priority work either since testing eats up a ton of time.
To reinforce my original point, I just don't think LLMs can write anything but the most naive code and everyone has opinions and biases completely incompatible with standardization. It's never "done" and fundamentally fickle and political just like the rest of the web.
Satisfying constraints like these isn't merely about knowing the spec and having lots of examples. Accessibility requirements are even more subjective than ordinary requirements already are to begin with.
But accessibility on the frontend is to a large extent patterns: if it looks like a checkbox it should have the appropriate ARIA role, and patterns are easy for an LLM.
It's just… a lot of people don't see this on their bottom line. Or any line. My awareness of accessibility issues is the Web Accessibility Initiative and the Apple Developer talks and docs, but I don't think I've ever once been asked to focus on them. If anything, I've had ideas shot down.
What AI does do is make it cheap to fill in gaps. 1500 junior developers for the price of one, if you know how to manage them. But still, even there, they'd only be filling in gaps as well as the nature of those gaps has been documented in text, not the lived experience of people with e.g. limited vision, or limited joint mobility whose fingers won't perform all the usual gestures.
Even without that issue, I'd expect any person with a disability to describe an AI-developed accessibility solution as "slop". I've had to fix up a real codebase where nobody before me had noticed the FAQ was entirely Bob Ross quotes (the app wasn't about painting, or indeed in English), so I absolutely anticipate that a vibe-coded accessibility solution will do something equally weird: perhaps having some equivalent of "As a large language model…", or hard-coding some example data that has nothing to do with the current real value of a widget.
Accessibility testing sounds like something an LLM might be good at. Provide it with tools to access your website only through a screen reader (simulated, text not audio), ask it to complete tasks, measure success rate. That should be way easier for an LLM than image-based driving a web browser.
I think perhaps the nuance in the middle here is that for most projects, the quality that professional components bring is less important.
Internal tools and prototypes, both things that quality components can accelerate, have been strong use-cases for these component libraries, just as much as polished commercial customer-facing products.
And I bet volume-wise there's way more of the former than the latter.
So while I think most people who care about quality know you can't (yet) blindly use LLM output in your final product, it's completely ok for internal tools and prototyping.
The Tailwind Team's Refactoring UI book was a big eye opener for me. I had no idea how many subtle insights are required to create truly effective UX.
I think people vastly underestimate just how much work goes into determining the correct set of primitives to create a design system like Tailwind, let alone a full-blown component library like TailwindUI.
While I believe you, it's an argument that artists have been making since the beginning of art; even many hundreds of years before the internet, humankind on average did not value this work.
It's not really a refutation of my point about how building a good component library is hard, to suggest using another component library. Of course, if you use one it's easier, that was my entire point.
shadcn ui is not a component library but the basis for a component library that has great accessibility built-in from the start, so yes, it is a refutation.
Maybe we're arguing semantics, but I think calling shadcn a "basis for a design system" is more accurate than a traditional component library. The difference to me is that shadcn lives inside your codebase and you can fully customize it as you please. You cannot customize a component library like MUI nearly to that extent.
Everything that's been said publicly is just pretence, just like Maduro's/Venezuela's supposed drug trafficking. This is about Trump being an old man in his waning days who wants to create a legacy. Those around him have ambitions of empire.