
Working with LLMs: A Review

These are some loose notes and thoughts that come to mind as I reflect on the work I have done over the past three years in the generative AI space.

LLMs are inherently probabilistic machines

Any attempt to make them behave like deterministic machines requires too much duct tape, and should be avoided. Yet I see a lot of GenAI products leaning in this direction.

That is an overestimation of LLM capabilities. The product should do the work in that direction, not the LLM: the components of the larger system should steer the LLM's output toward the final result.
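One way to picture this: wrap the probabilistic model call in deterministic code that validates and retries, instead of assuming the model will behave deterministically on its own. A minimal sketch, where `call_llm` and `validate` are hypothetical stand-ins for your model call and your output check:

```python
import json

def steer_llm_output(call_llm, validate, max_retries=3):
    # Accept that the model is probabilistic: check every output in
    # deterministic code and retry on failure, rather than expecting
    # the model itself to be deterministic.
    for _ in range(max_retries):
        raw = call_llm()
        ok, parsed = validate(raw)
        if ok:
            return parsed
    raise ValueError("no valid output after retries")

# Stand-in for a flaky model call (hypothetical, for illustration):
# first response is malformed, second is valid JSON.
outputs = iter(['not json', '{"answer": 42}'])

def fake_llm():
    return next(outputs)

def validate_json(raw):
    try:
        return True, json.loads(raw)
    except json.JSONDecodeError:
        return False, None

print(steer_llm_output(fake_llm, validate_json))  # → {'answer': 42}
```

The point is not the retry loop itself but the division of labor: the surrounding system, not the model, is responsible for guaranteeing the final shape of the result.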

LLM application level protocol

I can totally foresee a strong use case for an HTTP-like application-level protocol for LLM applications. MCP seems to be a strong and promising candidate.
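MCP frames its messages as JSON-RPC 2.0, so the wire format looks roughly like this hand-written sketch of a `tools/list` exchange (illustrative, not output from a real client; the example tool is hypothetical):

```python
import json

# Sketch of an MCP-style request/response pair. MCP uses JSON-RPC 2.0
# framing; "tools/list" is one of the methods the spec defines for a
# client asking a server what tools it exposes.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,  # echoes the request id, as JSON-RPC requires
    "result": {
        "tools": [
            {"name": "search", "description": "hypothetical example tool"}
        ]
    },
}

print(json.dumps(request))
```

The HTTP analogy holds in that sense: a small, uniform envelope that any client and server can agree on, with the application-specific meaning carried in the method names and payloads.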

Obsolete vs Upgrade

While building your product, think about how and what to build so that each upgrade in LLM capabilities gives a boost or upgrade to your product rather than making it obsolete.

Some future expectations from LLMs

  • Inference costs must become even cheaper
  • Models that do not rely only on reasoning steps; more practical would be LLMs that follow system prompts more rigidly.