
Hey folks, I'm the developer working on Blogs Are Back. WakaTime has me clocked in at over 900 hours on this project so far...

If CORS weren't an issue, it could've been done in 1/10th of that time. But if that were the case, there would've already been tons of web-based RSS readers available.

Anyway, the goal of this project is to help foster interest in indie blogs and help a bit with discovery. Feel free to submit your blog if you'd like!

If anyone has any questions, I'd be happy to answer them.


> style="opacity:0;transform:translateY(20px)"

In my opinion, that’s a bigger problem than CORS. A proxyless web feed reader is a lost cause; you’re wasting your time, because only a small minority of feeds are ever going to support it. But that opacity and transition nonsense gratuitously slows down page loading for everyone, and hides content completely from anyone not running JS.

(What I would also like to know is: how come this is the third time I’ve seen exactly this—each block of content having this exact style attribute—in the past month, when I don’t remember ever encountering it before?)


The entire web app is JS based. It's a requirement I'm ok with.

And to answer your question, you're seeing that kind of styling so frequently because it's likely coming from Framer Motion, an extremely popular animation library:

https://www.npmjs.com/package/framer-motion

https://www.npmjs.com/package/motion


Would also be great if the animations respected the `prefers-reduced-motion` setting, instead of forcing animations that reduce accessibility.
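A minimal sketch of that gate, with names of my own choosing — in the browser you'd feed it `window.matchMedia("(prefers-reduced-motion: reduce)").matches` (Framer Motion also ships a `useReducedMotion` hook for this):

```typescript
// Decide the entrance-animation state from the user's reduced-motion
// preference; `false` here means "render in the final state, no fade/slide".
type Initial = false | { opacity: number; y: number };

function initialFor(prefersReducedMotion: boolean): Initial {
  return prefersReducedMotion
    ? false                   // content appears immediately
    : { opacity: 0, y: 20 };  // the opacity/translateY(20px) fade-in
}
```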

Is the website machine generated? Besides the hard dependency on JavaScript, this also causes the exact same problem I've seen on another[1] machine generated site: https://postimg.cc/TyMBfVZ6, https://postimg.cc/n9j1X5Dk. This happens randomly on refresh on Firefox 148.0-1.

Is the fade effect really worth having parts of your site disappear at random?

[1] https://news.ycombinator.com/item?id=46675669


I think cooler heads will agree that a middle ground where the content is available on the initial request is best. But what do I know /s

This is something Opus 4.6 likes to generate a LOT for some reason.

Seriously. This page is terrible, with multiple annoying rendering delays, and I'm supposed to care about helping their RSS feeds load faster?

Hey, this is very interesting! As someone working on an extension that works as an ActivityPub client, I don't have to deal with CORS issues so much (most servers configure CORS properly, and the extension can bypass CORS issues anyway) but I just spent a good chunk of my weekend working on a proxy that could deal with Mastodon's "authorized fetch".

So, basically, any URI I need to resolve is first fetched directly, and it falls back to making the request through the proxy if I get any kind of authentication error.
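That fallback fits in a few lines of TypeScript. This is only a sketch of the approach described above, not the extension's actual code; the proxy URL and the injectable `fetchFn` are my own assumptions:

```typescript
// Try the URI directly first; on an auth rejection (Mastodon's "authorized
// fetch" returns 401/403) or a network/CORS failure, retry via the proxy.
async function resolveUri(
  uri: string,
  fetchFn: (url: string) => Promise<Response> = (u) => fetch(u),
): Promise<Response> {
  const direct = await fetchFn(uri).catch(() => null);
  if (direct && direct.status !== 401 && direct.status !== 403) {
    return direct; // direct fetch worked (or failed for a non-auth reason)
  }
  // Hypothetical proxy endpoint that performs the signed fetch server-side
  return fetchFn(`https://proxy.example/fetch?uri=${encodeURIComponent(uri)}`);
}
```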


Hey! Blogs Are Back is cool! Nice to see more modern RSS readers, and also thematic blog collections. If you seek more curated blogs to share with your users, check out my project https://minifeed.net/


You need to put a screenshot of the app on your page.

How can someone add platforms to the guide? I want to add Caddy

Hey everyone, I'm the owner of the repo that Rob references in his blog post (https://github.com/travisvn/stop-tahoe-update)

Just wanted to comment to see if I can help answer any questions as well as mentioning that we improved the instructions in the README based on some of the points Rob made a few weeks back.

There really are a large number of us out there who know Tahoe would be a downgrade from our current setups.

If you have any ideas on how to improve the resilience of the workarounds, please reach out on GitHub. Even just starring the repo would help, since more attention on the project hopefully means more solutions offered.

It's frustrating to feel like your computer isn't.. yours anymore when you're pushed so insistently like with this "upgrade". Hopefully we can figure out some sustainable ways to get some autonomy back.


I just wanted to thank you for this work. I wouldn’t have known where to start. Reading about all the hoops to jump through I can’t help but think that macOS is getting ever closer to being malware, just like Windows. An OS you have to fight to stay productive. I’ve been a Mac user since 1995, but the way this has been going over so many years now, I can’t imagine my next computer to be yet another Mac any more. I have been forced to view Linux as the last refuge. It was nice while it lasted, but eventually Stallman was right the whole time.

Not all security fixes are backported, so unfortunately, if you’re concerned about vulnerabilities, updating to the current release OS is a requirement.

https://support.apple.com/guide/deployment/software-update-p...


On the other hand, not all new security vulnerabilities are backported either.

Do you think that new major OS versions introduce only fixes and not bugs?

I think version N-1 is a good balance between getting the fixes and avoiding the new bugs.


If you can deal with known vulnerabilities and cross-reference all of Apple's CVE notes, more power to you. I can't say I have that much free time (Liquid Glass sucks, though).

> cross-reference all of Apple's CVE notes

I never suggested that. But Apple itself prioritizes patches by severity when deciding what to backport.

Some issues are so severe that Apple occasionally releases a new security update for previous OS versions that no longer receive security updates otherwise.

A lot of issues are merely privilege escalation, which is not necessarily a big problem on a personal computer.


At the 90-day mark, err, do we (can we?) run it again and get another 90 days?

Would be good to clarify this in the README. Really appreciate your work, btw.


You’ll be disappointed to learn that the deferral is 90 days from the release of the major OS version, not 90 days from when the configuration is set. There appears to be a bug in the delay logic in 15.7.3, but you really shouldn’t be running that — there are some important security fixes in 15.7.4.

Thanks for your work!

I'm the developer of Blogs Are Back. Thanks for posting! If anyone has any questions or has any issues with the site, I'm here to help.


Does anyone have a good method for avoiding accidentally accepting an "upgrade" notification from Sequoia to Tahoe?

With the potential to set off the installation flow with one wrong click (when it's being shown over and over again), it makes me anxious and feel like I'm not even in control of my own computer anymore.

For the time being, I've installed a management profile to defer updates, disabled the Settings options for automatic updates, and used "Quiet You!" to try and keep the notifications at bay.

But the maximum deferral time for profiles is 90 days, so if anyone knows of a better solution or workaround, please let me know.


Make sure to keep a backup of your Sequoia install


That's just for their demo.

If you want to run it without size limits, here's an open-source API wrapper that fixes some of the big headaches with the main repo: https://github.com/travisvn/chatterbox-tts-api/


Chatterbox is fantastic.

I created an API wrapper that also makes installation easier (Dockerized as well) https://github.com/travisvn/chatterbox-tts-api/

Best voice cloning option available locally by far, in my experience.


> Chatterbox is fantastic.

> I created an API wrapper that also makes installation easier (Dockerized as well) https://github.com/travisvn/chatterbox-tts-ap

Gave your wrapper a try and, wow, I'm blown away by both Chatterbox TTS and your API wrapper.

Excuse the rudimentary level of what follows.

Was looking for a quick and dirty CLI incantation to specify a local text file instead of the inline `input` object, but couldn't figure it out.

Pointers much appreciated.


This API wrapper was initially made to support a particular use case where someone's running, say, Open WebUI or AnythingLLM or some other local LLM frontend.

A lot of these frontends have an option for using OpenAI's TTS API, and some of them allow you to specify the URL for that endpoint, allowing for "drop-in replacements" like this project.

So the speech generation endpoint in the API is designed to fill that niche. However, its usage is pretty basic and there are curl statements in the README for testing your setup.

Anyway, to get to your actual question, let me see if I can whip something up. I'll edit this comment with the command if I can swing it.

In the meantime, can I assume your local text files are actual `.txt` files?


This is way more of a response than I could have even hoped for. Thank you so much.

To answer your question, yes, my local text files are .txt files.


Ok, here's a command that works.

I'm new to actually commenting on HN as opposed to just lurking, so I hope this formatting works..

  cat your_file.txt | python3 -c 'import sys, json; print(json.dumps({"input": sys.stdin.read()}))' | curl -X POST http://localhost:5123/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d @- \
    --output speech.wav

Just replace the `your_file.txt` with.. well, you get it.

This'll hopefully handle any potential issues you'd have with quotes or other symbols breaking the JSON input.

Let me know how it goes!

Oh and you might want to change `python3` to `python` depending on your setup.
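For what it's worth, the escaping step that `python3 -c` bit performs is just "serialize the raw text as a JSON object", which you can sketch in any language (TypeScript here; the helper name is mine):

```typescript
// JSON.stringify escapes quotes, backslashes, and newlines, so arbitrary
// file contents can't break the {"input": ...} payload the endpoint expects.
function toSpeechBody(text: string): string {
  return JSON.stringify({ input: text });
}
```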


> Just replace the `your_file.txt` with.. well, you get it.

> This'll hopefully handle any potential issues you'd have with quotes or other symbols breaking the JSON input.

> Let me know how it goes!

Wow. I'm humbled and grateful.

I'll update once I'm done with work and back in front of my home machine.


Hey — just pushed a big update that adds an (opt-in) frontend to test the API

For now, there's just a textarea for input (so you'll have to copy in the `.txt` contents) — but it's a lot easier than trying to finagle it into a `curl` request

Let me know if you have any issues!


(Didn't carefully read your reply. What follows are the results of cat-ing a text file in the CLI. Will give the new textbox a whirl in the morning PDT. A truly heartfelt thanks for helping me work with Chatterbox TTS!)

Absolutely blown away.

I fed it the first page of Gibson's "Neuromancer" and your incantation worked like a charm. Thanks for the shell script pipe mojo.

Some other details:

  - 3:01 (3 mins, 1 sec) of generated .wav took 4:28 to process
  - running on an M4 Max with 128GB RAM
  - Chatterbox TTS inserted a few strange artifacts which sounded like air venting, machine whirring, and vehicles passing. Very odd and, oddly, apropos for cyberpunk.
  - Chatterbox TTS managed to enunciate the dialog _as_ dialog, even going so far as to mimic an Australian accent where the speaker was identified as such. (This might be the effect of wishful listening.)

I am astounded.


An M4 Max with 128GB RAM? drools

What did your `it/s` end up looking like with that setup? MLX is fascinating to me. Apple made a really smart decision with the introduction of its M-series.

With regard to the artifacts — this is definitely a known issue with Chatterbox. I'm not sure where the investigation into fixing it currently stands (or what the “tricks” are to avoid it), but it's definitely eerie, among other things.

I appreciate your feedback through all of this!

Would love to have you on the Discord to keep in touch https://chatterboxtts.com/discord


I'll follow up on Discord!

For those following along at home: frontend works (and is quite nice) after updating `vite.config.ts` with a proxy

  server: {
    proxy: {
      // Proxy all API requests to the FastAPI backend
      '/v1': 'http://localhost:4123',
    },
  },


Spent an hour trying to get it running with an RTX 50-series card, no luck, tried with PyTorch 2.7.

Seems built for 2.6.

"chatterbox-tts 0.1.2 requires torch==2.6.0, but you have torch 2.7.0+cu128 which is incompatible. chatterbox-tts 0.1.2 requires torchaudio==2.6.0, but you have torchaudio 2.7.0+cu128 which is incompatible."


Would this be usable on a PC without a GPU?


It can definitely run on CPU — but I'm not sure if it can run on a machine without a GPU entirely.

To be honest, it uses a decently large amount of resources. On a GPU, you could expect about 4–5 GB of memory usage. And given the optimizations for tensors on GPUs, I'm not sure how well things would work CPU-only.

If you try it, let me know. There are some "CPU" Docker builds in the repo you could look at for guidance.

If you want free TTS without using local resources, you could try edge-tts https://github.com/travisvn/openai-edge-tts

