I always thought this was an implicit request to forgive obvious typos and autocorrect mistakes: the message was sent from a mobile device (iPhone, Samsung Galaxy, Blackberry, Windows Phone, etc.) with a tiny keyboard, in a setting in which proofreading may not be as rigorous as usual.
It seems like it would be most reasonable to consider porcelain vs. plumbing command details in deciding whether something is logically distinct to Git. git-commit has --message and --trailer options; git-commit-tree has only -m for the message. I take that to mean --trailer is a convenience option that provides a consistent way to append those details to the commit message. But that doesn't mean a trailer isn't part of the commit message, nor that the user shouldn't see it while reviewing the commit message.
> Should a security researcher that identifies a vulnerability in electron.js need to identify _every_ possible project using electron.js to communicate with them the vulnerability exists? No. That's absurd.
But this is a false comparison, right? The scopes of "Linux distributions" and "electron apps" differ by orders of magnitude. If the reporter had spot-checked one or two of the most popular distributions to see whether fixes had been adopted, that would have been an extra level of diligence before publicizing the details.
It doesn't seem "insane" so much as "not the most efficient path," as has already been well argued. But it also doesn't seem unreasonable to think that in a project with the scope of the Linux kernel, and the potential impact of a fairly effective(?) privilege escalation, some extra consideration is warranted--certainly not "insane," at the very least?
They embargoed their vulnerability for 30 days after Linux landed a kernel patch. They did their part. You will always be able to come up with other things they could do for you, and they will always at first blush sound reasonable because of how big and important Linux is, but none of those things will be responsibilities of the vulnerability researcher. Their job is to bring information to light, not to manage downstreams.
About half the thread we're on reads as if the commenters believe Xint made this vulnerability. They did not: they alerted you to it. It was already there.
I realize you've been championing this idea in the thread, and I admire it because I also recognize the misdirected blame. Please understand I do not harbor "blame" for the researchers.
> Their job is to bring information to light, not to manage downstreams.
The researchers are also members of a community in which their actions may deal more harm than necessary. Nuance must factor into evaluating what is "reasonable" and "responsible" in the context of those actions.
I strongly disagree. I want the information. I don't want to wait longer to find out about critical vulnerabilities so that researchers can fully genuflect to whatever Linux distribution norms people on message boards have. Their "actions" were to disclose a vulnerability that already existed and was putting people at risk. It's an absolute good.
If it helps you out any, even though my logic was absolutely the same and just as categorical in 2012 as it is today: there are now multiple automated projects that run every merged Linux commit through frontier models to scope out the status quo ante of the patch for exploitability, and then add the results to libraries of automatically-exploitable bugs.
People here are just mad that they heard about the bug. Serious attackers had this the moment it hit the kernel. This whole debate is kind of farcical. It's about a "real time" response this week to a disaster that struck a month ago.
I do get that: this era of automation is too responsive not to go public to provoke action. I think I might just be wistful for an era in which the alternate path might have made a difference. Sorry to pile on.
It strikes me as strange that the article links to [1], which appears to be the same board, absent the "Viavi" logo on the main RF can, as the Microchip product you linked. I couldn't tell at a brief look whether the Viavi product offers something like software, configuration, tuning, etc. on top of the GPS-2700 product.
The photo of the device in the article says "Jackson Labs," which seems to have been the previous name of "Viavi Solutions," and a review video [2] mentioned using Symmetricom atomic clock modules; Symmetricom was acquired first by Microsemi (2013) and subsequently by Microchip (2018) [3].
There are some subtle differences. The Jackson Labs and Microchip boards both have a diagonal "swoosh" and a "do not touch" icon on the metal clock casing, a u-blox branded GPS receiver, and partially-filled mounting holes. The Viavi board has a blank clock casing, unbranded GPS receiver, and fully drilled-out mounting holes. But yes, all three are using a virtually-identical PCB.
Judging by the misaligned capacitors(?) on the Viavi board, it is almost as if the Viavi one is an early prototype, with the Jackson Labs one being an early production version and the Microchip one being the current production version. I have no idea how that would square with the acquisition timeline, though.
But yeah, hardware companies are rather acquisition-happy. When designing hardware it is very common to come across datasheets with an "X is now known as Y" cover page stapled onto them. Heck, every once in a while you'll even come across a datasheet which is obviously scanned in, for a brand which hasn't existed in three decades - and the chip will still be in production!
I did dig into this a bit more the other day and learned that the "main RF can" is the cesium oscillator module. The history there was pretty interesting! The early ones I found were the Symmetricom Quantum SA.45s [1,2], which included a pretty entertaining thread here [3]. There were several levels of quality and function in that family of products, which have been discontinued in favor of the MAC-SA55 [4], and I wish I could find where I saw that recommendation... It's a rubidium, instead of cesium, oscillator; not that I know enough about these things to conclude that one should be better than the other, but my impression was that cesium was higher precision.
The article points out that HTTP and FastCGI are both options for reverse proxies to communicate to the downstream server. I didn't find a reference to them being interchangeable outside of that context. If there is or was one please quote it.
> I agree with the article, FastCGI is better than HTTP for these things.
If this is the claim you read as saying FastCGI and HTTP are generally interchangeable, and the one you're rising to correct, I'll also offer that "agree with the article" and "these things" narrow the context to "for reverse proxy communication" and do not support the broader meaning you've interpreted.
While not formally reviewing code like this, I read a lot of it for fun. When it's clear and understandable, it's more educational and enjoyable. If the PoC code can also serve as a means of communication, that seems like an extra win.
It seems like "jargon" fits the need for a way to label the more specific meaning intended, as in "property, from object-oriented programming jargon." I think programmers might differ, without the more specific description, on whether the OOP meaning or, say, the abstract-algebra meaning of "property" is intended, since both are relevant in different contexts of programming.
I think it's a good introduction to quantization generally, and specifically to how it applies to shrinking LLMs. But I also think it should say something about LLMs or "AI" in the title (the article is even tagged AI on the author's site), because despite that being an easy assumption to make given the zeitgeist, including the detail would be clearer.
Is there a way to visualize this on a running system or some documentation that describes it? I'm not familiar with the plumbing here but did try to find some documentation.
"WSL 2 uses virtualization technology to run a Linux kernel inside of a lightweight utility virtual machine (VM). Linux distributions run as isolated containers inside of the WSL 2 managed VM. Linux distributions running via WSL 2 will share the same network namespace, device tree (other than /dev/pts), CPU/Kernel/Memory/Swap, /init binary, but have their own PID namespace, Mount namespace, User namespace, Cgroup namespace, and init process."
"WSL 2 runs all distros in the same utility VM, sharing the same Kernel."
If you run multiple distros, take a look at the process manager and find the single vmmem or vmmemWSL process (newer versions use the latter). That single instance backs all of the distro instances, and all of the Docker containers you might be running as well, each with namespace isolation (and WSL2 intentionally bridging between them for convenience). Visualise it by doing something intensive in any of them and watching that single process react: it's the one utility VM responsible for all of them. Further, while starting up the first WSL2 instance or Docker container is expensive, requiring initialisation of all of the resources for the utility VM and the memory to support it, subsequent instances are much less expensive.
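One concrete way to poke at the namespace side from inside the distros themselves (a sketch; run it in each distro and compare):

```shell
# Each entry in /proc/self/ns is a symlink whose target encodes a
# namespace id, e.g. pid:[4026531836]. Across two WSL2 distros the
# pid/mnt/user/cgroup ids differ, while the shared pieces line up,
# e.g. both distros report the same kernel release.
ls -l /proc/self/ns
uname -r
```

That matches the quoted description: separate PID/mount/user/cgroup namespaces, one shared kernel underneath.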
Thanks; it wasn't out of doubt that I asked, but it seemed that having a reference to point at would help resolve the contention. The Docker blog post covered a lot more detail, even about WSL2, which was really informative and which I hadn't seen.
I wonder exactly how much work "container" is doing in that Microsoft blog post's description, because it doesn't seem like it's the same kind of environment as a runc or containerd container?
I also wasn't quite sure how much detail to infer from the behavior of vmmemWSL or vmcompute.exe, because my casual understanding is that there's some adaptation layer that handles mapping Linux calls to Windows calls. It seems reasonable to allow for process mapping or accounting shenanigans for any number of good reasons.
>there's some adaptation layer that handles mapping Linux calls to Windows calls
This was how WSL1 functioned. It used a shim layer, and honestly it was pretty neat for a lot of the basic stuff. It fell apart if you were doing more complex/advanced stuff, however, as there were many missing cases and exceptions.
WSL2 instead uses that utility VM, with a couple of special Microsoft kernel drivers to interact with the host system.