Hey folks, I'm the developer working on Blogs Are Back. WakaTime has me clocked in at over 900 hours on this project so far...
If CORS weren't an issue, it could've been done in 1/10th of that time. But if that were the case, there would've already been tons of web-based RSS readers available.
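For context on why CORS is such a tax: a browser will only let a web app read a feed cross-origin if the feed's server opts in with an `Access-Control-Allow-Origin` header, and almost no blogs send one. A minimal sketch for checking a feed (the helper function and sample headers are just illustration):

```shell
# has_cors_header reads HTTP response headers on stdin and reports whether
# the server opts in to cross-origin reads.
has_cors_header() {
  grep -qi '^access-control-allow-origin' && echo yes || echo no
}

# Against a real feed (URL is a placeholder):
#   curl -sI https://example.com/feed.xml | has_cors_header
printf 'Content-Type: application/rss+xml\r\n' | has_cors_header   # prints "no"
```

Without that header, the only options are a server-side proxy or a browser extension, which is exactly the constraint this project is built around.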
Anyway, the goal of this project is to help foster interest in indie blogs and help a bit with discovery. Feel free to submit your blog if you'd like!
If anyone has any questions, I'd be happy to answer them.
In my opinion, that’s a bigger problem than CORS. A proxyless web feed reader is a lost cause; you’re wasting your time, because only a small minority of sites are ever going to support it. But that opacity-and-transition nonsense gratuitously slows down page loading for everyone, and hides content completely from those who aren’t running JS.
(What I would also like to know is: how come this is the third time I’ve seen exactly this—each block of content having this exact style attribute—in the past month, when I don’t remember ever encountering it before?)
The entire web app is JS-based. It's a requirement I'm OK with.
And to answer your question: you're seeing that kind of styling so frequently because it's likely part of Framer Motion, an extremely popular animation library.
Is the website machine-generated? Besides the hard dependency on JavaScript, this also causes the exact same problem I've seen on another[1] machine-generated site: https://postimg.cc/TyMBfVZ6, https://postimg.cc/n9j1X5Dk. This happens randomly on refresh in Firefox 148.0-1.
Is the fade effect really worth having parts of your site disappear at random?
Hey, this is very interesting! As someone working on an extension that works as an ActivityPub client, I don't have to deal with CORS issues so much (most servers configure CORS properly, and the extension can bypass CORS issues anyway) but I just spent a good chunk of my weekend working on a proxy that could deal with Mastodon's "authorized fetch".
So, basically, for any URI I need to resolve, I first try to fetch it directly and fall back to making the request through the proxy if I get any kind of authentication error.
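In shell terms the routing looks something like this; a sketch only, since the proxy URL and the 401/403 check are assumptions about my setup, not the extension's actual code:

```shell
# Decide where a fetch should go based on the status of the direct attempt:
# Mastodon's "authorized fetch" rejects anonymous requests with 401/403.
route_for_status() {
  case "$1" in
    401|403) echo proxy ;;
    *)       echo direct ;;
  esac
}

# Try the URI directly; fall back to the proxy on an auth error.
fetch_object() {
  uri=$1
  status=$(curl -s -o /dev/null -w '%{http_code}' \
    -H 'Accept: application/activity+json' "$uri")
  if [ "$(route_for_status "$status")" = "proxy" ]; then
    curl -s "https://proxy.example/fetch?uri=$uri"
  else
    curl -s -H 'Accept: application/activity+json' "$uri"
  fi
}
```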
Hey! Blogs Are Back is cool! Nice to see more modern RSS readers, and also thematic blog collections. If you seek more curated blogs to share with your users, check out my project https://minifeed.net/
Just wanted to comment to see if I can help answer any questions as well as mentioning that we improved the instructions in the README based on some of the points Rob made a few weeks back.
There really are a large number of us out there who know Tahoe would be a downgrade from our current setups.
If you have any ideas on how to improve the resilience of the workarounds, please connect on GitHub. Even just starring the repo would help: the project would get more attention and, hopefully, more solutions offered as a result.
It's frustrating to feel like your computer isn't... yours anymore when you're pushed this insistently, as with this "upgrade". Hopefully we can figure out some sustainable ways to get some autonomy back.
I just wanted to thank you for this work. I wouldn’t have known where to start. Reading about all the hoops to jump through, I can’t help but think that macOS is getting ever closer to being malware, just like Windows: an OS you have to fight to stay productive. I’ve been a Mac user since 1995, but the way this has been going for so many years now, I can’t imagine my next computer being yet another Mac anymore. I have been forced to view Linux as the last refuge. It was nice while it lasted, but it turns out Stallman was right the whole time.
If you can deal with known vulnerabilities and cross-reference all of Apple's CVE notes, more power to you. I can't say I have that much free time (Liquid Glass sucks, though).
I never suggested that. But Apple itself prioritizes patches by severity when deciding what to backport.
Some issues are so severe that Apple occasionally releases a new security update for previous OS versions that no longer receive security updates otherwise.
A lot of issues are merely privilege escalation, which is not necessarily a big problem on a personal computer.
You’ll be disappointed to learn that the deferral is 90 days from the release of the major OS version, not 90 days from when the configuration is set. There appears to be a bug in the delay logic in 15.7.3, but you really shouldn’t be running that — there are some important security fixes in 15.7.4.
Does anyone have a good method for avoiding accidentally accepting an "upgrade" notification from Sequoia to Tahoe?
With the potential to set off the installation flow with a wrong click (when it's being shown over and over again), it makes me anxious and feel like I'm not even in control of my own computer anymore.
For the time being, I've installed a management profile to defer updates, disabled the Settings options for automatic updates, and used "Quiet You!" to try and keep the notifications at bay.
But the maximum deferral time for profiles is 90 days, so if anyone knows of a better solution or workaround, please let me know.
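For what it's worth, this is how I check whether the deferral is actually being enforced; the key name is from Apple's Restrictions payload documentation, but the managed-preferences path is my best guess at where it lands:

```shell
# Read the enforced major-OS deferral (in days, capped at 90) that a
# management profile sets via the com.apple.applicationaccess payload.
defaults read "/Library/Managed Preferences/com.apple.applicationaccess" \
  enforcedSoftwareUpdateMajorOSDeferredInstallDelay 2>/dev/null \
  || echo "no managed major-OS deferral found"
```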
This API wrapper was initially made to support a particular use case where someone's running, say, Open WebUI or AnythingLLM or some other local LLM frontend.
A lot of these frontends have an option for using OpenAI's TTS API, and some of them allow you to specify the URL for that endpoint, allowing for "drop-in replacements" like this project.
So the speech generation endpoint in the API is designed to fill that niche. However, its usage is pretty basic and there are curl statements in the README for testing your setup.
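For anyone who wants a starting point before opening the README: the request shape follows OpenAI's `/v1/audio/speech` API, and the port, model name, and voice below are placeholders for whatever your instance is configured with:

```shell
# Build the JSON body the OpenAI-style speech endpoint expects.
# (No escaping of quotes in the input; fine for plain test sentences.)
build_payload() {
  printf '{"model": "tts-1", "input": "%s", "voice": "alloy"}' "$1"
}

build_payload "Hello from the wrapper." > payload.json
# With the wrapper listening locally (port is an assumption):
#   curl -s http://localhost:8000/v1/audio/speech \
#     -H 'Content-Type: application/json' \
#     -d @payload.json --output speech.wav
```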
Anyway, to get to your actual question, let me see if I can whip something up. I'll edit this comment with the command if I can swing it.
In the meantime, can I assume your local text files are actual `.txt` files?
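If they are plain `.txt`, a pipeline like this might already get you most of the way. `jq` handles the JSON escaping; the URL, port, and voice are assumptions about your setup:

```shell
# A sample file so the pipeline runs as-is; point this at your own .txt.
printf 'The sky above the port was the color of television.' > chapter1.txt

# -R reads raw text, -s slurps the whole file into a single JSON string.
jq -Rs '{model: "tts-1", input: ., voice: "alloy"}' < chapter1.txt > payload.json

# With the server running (URL and port are assumptions):
#   curl -s http://localhost:8000/v1/audio/speech \
#     -H 'Content-Type: application/json' \
#     -d @payload.json --output chapter1.wav
```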
Hey — just pushed a big update that adds an (opt-in) frontend to test the API.
For now, there's just a textarea for input (so you'll have to copy in the `.txt` contents) — but it's a lot easier than trying to finagle things into a `curl` request.
(Didn't carefully read your reply. What follows are the results of cat-ing a text file in the CLI. Will give the new textbox a whirl in the morning PDT. A truly heartfelt thanks for helping me work with Chatterbox TTS!)
Absolutely blown away.
I fed it the first page of Gibson's "Neuromancer" and your incantation worked like a charm. Thanks for the shell script pipe mojo.
Some other details:
- 3:01 (3 mins, 1 sec) of generated .wav took 4:28 to process
- running on M4 Max with 128GB RAM
- Chatterbox TTS inserted a few strange artifacts which sounded like air venting, machine whirring, and vehicles passing. Very odd and, oddly, apropos for cyberpunk.
- Chatterbox TTS managed to enunciate the dialog _as_ dialog, even going so far as to mimic an Australian accent where the speaker was identified as such. (This might be the effect of wishful listening.)
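For reference, those timings pencil out to roughly 1.5x real time:

```shell
# Real-time factor = processing time / audio length, using the numbers above.
audio_s=$((3 * 60 + 1))    # 3:01 of generated audio -> 181 s
proc_s=$((4 * 60 + 28))    # 4:28 to process         -> 268 s
rtf_x100=$((proc_s * 100 / audio_s))   # integer math, scaled by 100
echo "RTF x100 = $rtf_x100"            # 148 -> about 1.48x real time
```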
What did your `it/s` end up looking like with that setup? MLX is fascinating to me. Apple made a really smart decision with the introduction of its M-series.
With regard to the artifacts — this is definitely a known issue with Chatterbox. I'm unsure where the current investigation into fixing it stands (or what the "tricks" are to avoid it), but it's definitely eerie, among other things.
Spent an hour trying to get it running with an RTX 50 series card, no luck. Tried with PyTorch 2.7.
Seems built for 2.6.
"chatterbox-tts 0.1.2 requires torch==2.6.0, but you have torch 2.7.0+cu128 which is incompatible.
chatterbox-tts 0.1.2 requires torchaudio==2.6.0, but you have torchaudio 2.7.0+cu128 which is incompatible."
It can definitely run on CPU — but I'm not sure if it can run on a machine without a GPU entirely.
To be honest, it uses a decently large amount of resources. If you had a GPU, you could expect about 4-5 GB of memory usage. And given the optimizations for tensors on GPUs, I'm not sure how well things would work "CPU only".
If you try it, let me know. There are some "CPU" Docker builds in the repo you could look at for guidance.