8 min read
Building a Local RAG Chatbot with Ollama and ChromaDB
How I built a Retrieval-Augmented Generation (RAG) chatbot that runs locally using Ollama for LLM inference and ChromaDB for vector search.
RAG · Ollama · ChromaDB · LLM · Chatbot · Python
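To make the architecture concrete, here is a minimal sketch of the index → retrieve → generate loop using the `ollama` and `chromadb` Python packages. The model names (`llama3`, `nomic-embed-text`) and the tiny in-memory corpus are illustrative assumptions, not details from this post; the article's own implementation may differ.

```python
# Minimal local RAG loop: Ollama for embeddings + generation, ChromaDB for vector search.
# Assumptions: `ollama pull llama3` and `ollama pull nomic-embed-text` have been run,
# and the Ollama server is running locally.
import chromadb
import ollama

documents = [
    "Ollama serves local LLMs over a simple HTTP API.",
    "ChromaDB stores embeddings and supports similarity search.",
]

# Index: embed each document with Ollama and store the vectors in Chroma.
client = chromadb.Client()
collection = client.get_or_create_collection(name="docs")
for i, doc in enumerate(documents):
    embedding = ollama.embeddings(model="nomic-embed-text", prompt=doc)["embedding"]
    collection.add(ids=[str(i)], documents=[doc], embeddings=[embedding])

# Retrieve: embed the question and pull the closest chunks.
question = "How does the chatbot search its documents?"
q_embedding = ollama.embeddings(model="nomic-embed-text", prompt=question)["embedding"]
results = collection.query(query_embeddings=[q_embedding], n_results=2)
context = "\n".join(results["documents"][0])

# Generate: answer with the retrieved context injected into the prompt.
response = ollama.chat(
    model="llama3",
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
    }],
)
print(response["message"]["content"])
```

Everything here runs on the local machine: embeddings and generation go through the Ollama server, and ChromaDB keeps the vectors in process, so no data leaves the host.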