Tag: ollama
-
Local LLMs with Ollama: RAG over internal docs without handing them to a third party
A field-tested take on RAG over internal docs without handing them to a third party: what it rewards, where it breaks, and how to keep the workflow honest.
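A minimal sketch of the retrieval loop, assuming Ollama's local HTTP API on its default port (11434); the embedding and chat model tags are examples, and chunking, persistence, and reranking are left out:

    import requests

    OLLAMA = "http://localhost:11434"

    def embed(text):
        # /api/embeddings returns {"embedding": [...]} for a single prompt
        r = requests.post(f"{OLLAMA}/api/embeddings",
                          json={"model": "nomic-embed-text", "prompt": text})
        return r.json()["embedding"]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5)

    # The corpus is embedded and queried entirely on this machine.
    docs = ["Expense reports are due Friday.", "VPN setup lives on wiki page 12."]
    index = [(d, embed(d)) for d in docs]

    def answer(question, k=1):
        qv = embed(question)
        top = sorted(index, key=lambda p: -cosine(qv, p[1]))[:k]
        context = "\n".join(d for d, _ in top)
        r = requests.post(f"{OLLAMA}/api/generate", json={
            "model": "llama3.1",  # example tag; any pulled chat model works
            "prompt": f"Answer only from the context.\nContext:\n{context}\n\nQ: {question}\nA:",
            "stream": False,
        })
        return r.json()["response"]

Nothing in this path touches a remote endpoint, which is the entire point of the exercise.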
-
Local LLMs with Ollama: Caching embeddings locally for a private retrieval layer
A field-tested take on caching embeddings locally for a private retrieval layer: what it rewards, where it breaks, and how to keep the workflow honest.
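One way to sketch the cache, assuming a SQLite file keyed by a hash of model plus text, so re-ingesting unchanged docs never re-calls the model; the schema and model tag are illustrative:

    import hashlib, json, sqlite3, requests

    db = sqlite3.connect("embeddings.db")
    db.execute("CREATE TABLE IF NOT EXISTS cache (key TEXT PRIMARY KEY, vec TEXT)")

    def embed_cached(text, model="nomic-embed-text"):
        key = hashlib.sha256(f"{model}\x00{text}".encode()).hexdigest()
        row = db.execute("SELECT vec FROM cache WHERE key = ?", (key,)).fetchone()
        if row:
            return json.loads(row[0])  # cache hit: no model call, no network
        r = requests.post("http://localhost:11434/api/embeddings",
                          json={"model": model, "prompt": text})
        vec = r.json()["embedding"]
        db.execute("INSERT INTO cache VALUES (?, ?)", (key, json.dumps(vec)))
        db.commit()
        return vec

Keying on the model name matters: swapping embedding models invalidates the cache cleanly instead of silently mixing vector spaces.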
-
Local LLMs with Ollama: When a local 7B beats a cloud 70B in latency-sensitive loops
A field-tested take on when a local 7B beats a cloud 70B in latency-sensitive loops: what it rewards, where it breaks, and how to keep the workflow honest.
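A rough way to measure the claim, assuming a streamed /api/generate call; time to first token is what a tight loop actually feels, and the model tag is an example:

    import json, time, requests

    def time_to_first_token(model, prompt):
        start = time.perf_counter()
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": model, "prompt": prompt, "stream": True},
                          stream=True)
        for line in r.iter_lines():
            if line and json.loads(line).get("response"):
                return time.perf_counter() - start  # first visible token
        return float("nan")

    # No network round trip: the local floor is model load plus prefill.
    for _ in range(5):
        print(f"{time_to_first_token('mistral', 'Classify: refund request'):.3f}s")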
-
Local LLMs with Ollama: Thermal budget of a MacBook pretending to be a server
A field-tested take on the thermal budget of a MacBook pretending to be a server: what it rewards, where it breaks, and how to keep the workflow honest.
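A small monitor, using the eval_count and eval_duration fields Ollama returns on non-streamed generations; a steady decline in tokens/sec over a long run is the laptop hitting its thermal envelope (the prompt and model tag are placeholders):

    import time, requests

    def tokens_per_sec(model="mistral"):
        r = requests.post("http://localhost:11434/api/generate", json={
            "model": model,
            "prompt": "Write 200 words about message queues.",
            "stream": False,
        }).json()
        return r["eval_count"] / (r["eval_duration"] / 1e9)  # eval_duration is ns

    while True:  # log throughput; watch for the slow post-warmup sag
        print(f"{time.strftime('%H:%M:%S')}  {tokens_per_sec():.1f} tok/s")
        time.sleep(30)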
-
Local LLMs with Ollama: Running evals locally before trusting a new model
A field-tested take on running evals locally before trusting a new model: what it rewards, where it breaks, and how to keep the workflow honest.
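A smoke-test harness in miniature, assuming exact-match checks are enough for the tasks you actually run; the cases, model tag, and pass bar below are placeholders:

    import requests

    CASES = [
        ("Extract the city: 'Ship to Berlin by Tuesday.' Reply with one word.", "berlin"),
        ("Is 17 prime? Reply yes or no.", "yes"),
    ]

    def run_eval(model):
        passed = 0
        for prompt, expected in CASES:
            r = requests.post("http://localhost:11434/api/generate",
                              json={"model": model, "prompt": prompt, "stream": False})
            passed += expected in r.json()["response"].strip().lower()
        return passed / len(CASES)

    assert run_eval("new-model-tag") >= 0.9, "do not swap models yet"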
-
Local LLMs with Ollama: Pairing Ollama with Continue or Cline for a private IDE
A field-tested take on pairing Ollama with Continue or Cline for a private IDE: what it rewards, where it breaks, and how to keep the workflow honest.
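For Continue, the wiring is one JSON config pointing the provider at Ollama; the field names below are from memory of Continue's config.json schema, so treat them as an assumption and check the current docs:

    import json, pathlib

    config = {
        "models": [
            # provider "ollama" talks to localhost:11434 by default
            {"title": "Local Llama", "provider": "ollama", "model": "llama3.1"}
        ],
        "tabAutocompleteModel": {  # a small model keeps completions snappy
            "title": "Local autocomplete", "provider": "ollama",
            "model": "qwen2.5-coder:1.5b",
        },
    }

    path = pathlib.Path.home() / ".continue" / "config.json"
    path.parent.mkdir(exist_ok=True)
    path.write_text(json.dumps(config, indent=2))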
-
Local LLMs with Ollama: Keeping a small local model honest with tight prompts
A field-tested take on keeping a small local model honest with tight prompts: what it rewards, where it breaks, and how to keep the workflow honest.
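One pattern that helps: pin the output contract in the prompt, ask Ollama for JSON mode, validate, and retry once. A sketch, assuming /api/generate's "format": "json" option and an example model tag:

    import json, requests

    PROMPT = (
        'Return ONLY a JSON object like {"sentiment": "pos"|"neg"|"neutral"}.\n'
        "Text: the release notes were useless."
    )

    def classify(model="mistral", retries=1):
        for _ in range(retries + 1):
            r = requests.post("http://localhost:11434/api/generate", json={
                "model": model, "prompt": PROMPT, "stream": False, "format": "json",
            })
            try:
                obj = json.loads(r.json()["response"])
                if obj.get("sentiment") in {"pos", "neg", "neutral"}:
                    return obj  # contract honored
            except json.JSONDecodeError:
                pass  # malformed output: fall through and retry
        raise ValueError("model would not honor the contract")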
-
Local LLMs with Ollama: Picking a local model for the specific task, not for the benchmark
A field-tested take on picking a local model for the specific task, not for the benchmark: what it rewards, where it breaks, and how to keep the workflow honest.
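The honest version of "picking" is a bake-off on your own task, not a leaderboard read; a sketch, with example model tags and a placeholder task:

    import time, requests

    CANDIDATES = ["llama3.1", "mistral", "qwen2.5:7b"]
    TASK = ("Summarize in one sentence: the deploy failed because the "
            "migration lockfile was stale.")

    for model in CANDIDATES:
        t0 = time.perf_counter()
        r = requests.post("http://localhost:11434/api/generate",
                          json={"model": model, "prompt": TASK, "stream": False})
        dt = time.perf_counter() - t0
        # Judge output quality by eye and latency by the clock, per task.
        print(f"{model:16s} {dt:5.1f}s  {r.json()['response'].strip()[:80]}")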