Hacker News
mehdibl 5 days ago | on: Show HN: Timber – Ollama for classical ML models, ...
Ollama is quite a bad example here. Despite being popular, it's a simple wrapper, and it's increasingly being pushed aside by the app it wraps, llama.cpp.
I don't understand the parallel here.
kossisoroyce 4 days ago
TBVH I didn't think too much about the naming. I defaulted to Ollama because of its perceived simplicity, and I wanted that same perceived simplicity to help adoption.
eleventyseven 4 days ago
This is the vLLM of classic ML, not Ollama.
ekianjo 5 days ago
I guess the parallel is `ollama serve`, which provides you with a direct REST API to interact with an LLM.
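For context, a minimal sketch of that REST interaction, assuming Ollama's documented default port (11434) and its `/api/generate` endpoint; the model name is a placeholder, and it needs a local `ollama serve` running to actually answer:

```shell
# Build the request body; "llama3" is a placeholder model name.
PAYLOAD='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# POST it to the local Ollama server (default port 11434).
# `|| true` keeps the sketch harmless when no server is running.
curl -s http://localhost:11434/api/generate -d "$PAYLOAD" || true
```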
sieve 4 days ago
llama.cpp provides an API server as well via llama-server (and a competent web GUI too).
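For comparison, a sketch of the llama-server route, assuming llama.cpp's default port (8080) and its OpenAI-compatible chat endpoint; the model path is a placeholder, and a running server is needed for a real response:

```shell
# Start the server first (placeholder model path):
#   llama-server -m ./model.gguf --port 8080
BODY='{"messages": [{"role": "user", "content": "Hello"}]}'

# Same request shape as any OpenAI-compatible endpoint.
# `|| true` keeps the sketch harmless when no server is running.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" -d "$BODY" || true
```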