One of the projects that genuinely gives me hope about the future of AI is llama.cpp. At a time when most discussions revolve around burning money on massive closed models, I’ve increasingly found...
These are some assorted ideas and thoughts that come to mind as I reflect on the work I have done over the past 3 years in the Gen AI space. LLMs are inherently probabilistic machines. Any...
Jumping onto the fine-tuning ship often feels like the natural next step when prompts hit a corner case. With the rise of SaaS tools simplifying fine-tuning, it's tempting, but it's often unnecessary....
Find the paper here. The Retrieval-Augmented Generation (RAG) technique offers a promising approach when using large language models (LLMs) to build knowledge bases. Envision creating a...
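As a quick illustration of the retrieve-then-generate flow RAG is built on, here is a minimal sketch: retrieve the most relevant document for a query, then stuff it into the prompt as context. The tiny corpus and the bag-of-words similarity are made-up stand-ins for a real document store and real embeddings, not code from the post.

```python
import math
import re
from collections import Counter

# Tiny hypothetical knowledge base (stand-in for a real vector store).
DOCS = [
    "The refund policy allows returns within 30 days of purchase.",
    "Shipping usually takes 5 to 7 business days.",
    "Support is available by email around the clock.",
]

def bow(text):
    # Bag-of-words counts as a crude stand-in for embedding vectors.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query; keep the top k.
    q = bow(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Augment the prompt with retrieved context before calling the LLM.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what is the refund policy"))
```

In a real system the `retrieve` step would query an embedding index, but the prompt-assembly shape stays the same.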
I've been working with large language models from across the spectrum, including OpenAI GPT, Anthropic, LLaMA, etc., for quite some time. For most of this journey, I was leaning towards selecting...
In this blog, we will discuss prompt-engineering techniques used when working with large language models: zero-shot, one-shot, and few-shot learning. These methods aim to enable models to perform and...
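To make the zero-/one-/few-shot distinction concrete, here is a rough sketch of how the three prompt styles differ only in how many worked examples are prepended before the actual query. The sentiment-classification task and the example reviews are invented for illustration, not taken from the post:

```python
# Made-up worked examples for an illustrative sentiment task.
EXAMPLES = [
    ("The battery lasts all day, love it!", "positive"),
    ("Stopped working after a week.", "negative"),
]

def make_prompt(query, n_shots=0):
    """Build a prompt with n_shots worked examples prepended.

    n_shots=0 -> zero-shot, n_shots=1 -> one-shot, n_shots>=2 -> few-shot.
    """
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in EXAMPLES[:n_shots]:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The actual query goes last, with the label left for the model to fill in.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(make_prompt("Great value for the price.", n_shots=2))
```

The resulting string would be sent as-is to whichever model API you are using; only the number of demonstrations changes between the three techniques.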